Updates from: 09/30/2023 01:13:47
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Whats New Docs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/whats-new-docs.md
Title: "What's new in Azure Active Directory business-to-customer (B2C)" description: "New and updated documentation for the Azure Active Directory business-to-customer (B2C)." Previously updated : 09/01/2023 Last updated : 09/29/2023
Welcome to what's new in Azure Active Directory B2C documentation. This article lists new docs that have been added and those that have had significant updates in the last three months. To learn what's new with the B2C service, see [What's new in Azure Active Directory](../active-directory/fundamentals/whats-new.md) and [Azure AD B2C developer release notes](custom-policy-developer-notes.md).
+## September 2023
+
+This month, we renamed Azure Active Directory (Azure AD) to Microsoft Entra ID. For more information about the rebranding, see the [New name for Azure Active Directory](/azure/active-directory/fundamentals/new-name) article.
+
+### Updated articles
+
+- [Supported Microsoft Entra features](supported-azure-ad-features.md) - Editorial updates
+- [Publish your Azure Active Directory B2C app to the Microsoft Entra app gallery](publish-app-to-azure-ad-app-gallery.md) - Editorial updates
+- [Secure your API used by an API connector in Azure AD B2C](secure-rest-api.md) - Editorial updates
+- [Azure AD B2C: Frequently asked questions (FAQ)](faq.yml) - Editorial updates
+- [Define an ID token hint technical profile in an Azure Active Directory B2C custom policy](id-token-hint.md) - Editorial updates
+- [Set up sign-in for multi-tenant Microsoft Entra ID using custom policies in Azure Active Directory B2C](identity-provider-azure-ad-multi-tenant.md) - Editorial updates
+- [Set up sign-in for a specific Microsoft Entra organization in Azure Active Directory B2C](identity-provider-azure-ad-single-tenant.md) - Editorial updates
+- [Localization string IDs](localization-string-ids.md) - Editorial updates
+- [Define a Microsoft Entra multifactor authentication technical profile in an Azure AD B2C custom policy](multi-factor-auth-technical-profile.md) - Editorial updates
+- [Define a Microsoft Entra SSPR technical profile in an Azure AD B2C custom policy](aad-sspr-technical-profile.md) - Editorial updates
+- [Define a Microsoft Entra technical profile in an Azure Active Directory B2C custom policy](active-directory-technical-profile.md) - Editorial updates
+- [Monitor Azure AD B2C with Azure Monitor](azure-monitor.md) - Editorial updates
+- [Billing model for Azure Active Directory B2C](billing.md) - Editorial updates
+- [Enable custom domains for Azure Active Directory B2C](custom-domain.md) - Editorial updates
+- [Set up a sign-up and sign-in flow with a social account by using Azure Active Directory B2C custom policy](custom-policies-series-sign-up-or-sign-in-federation.md) - Editorial updates
+- [Create and read a user account by using Azure Active Directory B2C custom policy](custom-policies-series-store-user.md) - Editorial updates
## August 2023

### Updated articles
Welcome to what's new in Azure Active Directory B2C documentation. This article
### Updated articles

- [Tutorial: Create an Azure Active Directory B2C tenant](tutorial-create-tenant.md) - [Azure AD B2C] Azure AD B2C Go-Local opt-in feature
-- [Tutorial: Configure security analytics for Azure Active Directory B2C data with Microsoft Sentinel](configure-security-analytics-sentinel.md) - Removing product name from filename and links.
-- [Tutorial: Configure Azure Active Directory B2C with Azure Web Application Firewall](partner-web-application-firewall.md) - Removing product name from filename and links.
-- [Title not found in: #240919](./external-identities-videos.md) - Delete azure-ad-external-identities-videos.md
-- [Build a global identity solution with funnel-based approach](b2c-global-identity-funnel-based-design.md) - Removing product name from filename and links.
-- [Azure Active Directory B2C global identity framework proof of concept for funnel-based configuration](b2c-global-identity-proof-of-concept-funnel.md) - Removing product name from filename and links.
-- [Azure Active Directory B2C global identity framework proof of concept for region-based configuration](b2c-global-identity-proof-of-concept-regional.md) - Removing product name from filename and links.
-- [Build a global identity solution with region-based approach](b2c-global-identity-region-based-design.md) - Removing product name from filename and links.
-- [Azure Active Directory B2C global identity framework](b2c-global-identity-solutions.md) - Removing product name from filename and links.
-- [Azure Active Directory B2C: What's new](whats-new-docs.md) - [Azure AD B2C] What is new May 2023
+- [Tutorial: Configure security analytics for Azure Active Directory B2C data with Microsoft Sentinel](configure-security-analytics-sentinel.md) - Removing product name from filename and links
+- [Tutorial: Configure Azure Active Directory B2C with Azure Web Application Firewall](partner-web-application-firewall.md) - Removing product name from filename and links
+- [Build a global identity solution with funnel-based approach](b2c-global-identity-funnel-based-design.md) - Removing product name from filename and links
+- [Azure Active Directory B2C global identity framework proof of concept for funnel-based configuration](b2c-global-identity-proof-of-concept-funnel.md) - Removing product name from filename and links
+- [Azure Active Directory B2C global identity framework proof of concept for region-based configuration](b2c-global-identity-proof-of-concept-regional.md) - Removing product name from filename and links
+- [Build a global identity solution with region-based approach](b2c-global-identity-region-based-design.md) - Removing product name from filename and links
+- [Azure Active Directory B2C global identity framework](b2c-global-identity-solutions.md) - Removing product name from filename and links
- [Use the Azure portal to create and delete consumer users in Azure AD B2C](manage-users-portal.md) - [Azure AD B2C] Revoke user's session
- [Monitor Azure AD B2C with Azure Monitor](azure-monitor.md) - Added steps to disable Azure monitor
-## May 2023
-
-### New articles
-
-- [How to secure your Azure Active Directory B2C identity solution](security-architecture.md)
-
-### Updated articles
-
-- [Configure Azure Active Directory B2C with Akamai Web Application Protector](partner-akamai.md)
-- [Configure Asignio with Azure Active Directory B2C for multifactor authentication](partner-asignio.md)
-- [Configure xID with Azure Active Directory B2C for passwordless authentication](partner-xid.md)
-- [Configure WhoIAM Rampart with Azure Active Directory B2C](partner-whoiam-rampart.md)
-- [Build a global identity solution with funnel-based approach](./b2c-global-identity-funnel-based-design.md)
-- [Use the Azure portal to create and delete consumer users in Azure AD B2C](manage-users-portal.md)
active-directory Concept Authentication Default Enablement https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-authentication-default-enablement.md
Previously updated : 09/15/2023 Last updated : 09/29/2023 -+
The following table lists each setting that can be set to Microsoft managed and
| Setting | Configuration |
|---------|---------------|
-| [Registration campaign](how-to-mfa-registration-campaign.md) | From Sept 25 to Oct 20, 2023, the Microsoft managed value for the registration campaign will change to Enabled for text message and voice call users across all tenants. |
+| [Registration campaign](how-to-mfa-registration-campaign.md) | From Sept. 25 to Oct. 20, 2023, the Microsoft managed value for the registration campaign will change to Enabled for text message and voice call users across all tenants. |
| [Location in Microsoft Authenticator notifications](how-to-mfa-additional-context.md) | Disabled |
| [Application name in Microsoft Authenticator notifications](how-to-mfa-additional-context.md) | Disabled |
| [System-preferred MFA](concept-system-preferred-multifactor-authentication.md) | Enabled |
active-directory Concept Authentication Methods Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-authentication-methods-manage.md
Previously updated : 09/15/2023 Last updated : 09/24/2023
Only the [converged registration experience](concept-registration-mfa-sspr-combi
Two other policies, located in **Multifactor authentication** settings and **Password reset** settings, provide a legacy way to manage some authentication methods for all users in the tenant. You can't control who uses an enabled authentication method, or how the method can be used. A [Global Administrator](../roles/permissions-reference.md#global-administrator) is needed to manage these policies.

>[!Important]
->In March 2023, we announced the deprecation of managing authentication methods in the legacy multifactor authentication and self-service password reset (SSPR) policies. Beginning September 30, 2024, authentication methods can't be managed in these legacy MFA and SSPR policies. We recommend customers use the manual migration control to migrate to the Authentication methods policy by the deprecation date.
+>In March 2023, we announced the deprecation of managing authentication methods in the legacy multifactor authentication and self-service password reset (SSPR) policies. Beginning September 30, 2025, authentication methods can't be managed in these legacy MFA and SSPR policies. We recommend customers use the manual migration control to migrate to the Authentication methods policy by the deprecation date.
To manage the legacy MFA policy, select **Security** > **Multifactor authentication** > **Additional cloud-based multifactor authentication settings**.
active-directory How To Authentication Methods Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/how-to-authentication-methods-manage.md
Previously updated : 09/13/2023 Last updated : 09/24/2023
After you capture available authentication methods from the policies you're curr
You'll want to set this option before you make any changes as it will apply your new policy to both sign-in and password reset scenarios.

The next step is to update the Authentication methods policy to match your audit. You'll want to review each method one-by-one. If your tenant is only using the legacy MFA policy, and isn't using SSPR, the update is straightforward - you can enable each method for all users and precisely match your existing policy.
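
To illustrate that last step, here's a minimal sketch (my illustration, not part of the source article) that enables one method, Microsoft Authenticator, for all users in the Authentication methods policy by calling Microsoft Graph directly. The policy path and payload shape follow the published authentication methods policy API; verify them against the current Graph reference before relying on this.

```powershell
# Hedged sketch: enable Microsoft Authenticator for all users in the Authentication methods policy.
# Assumes the Microsoft Graph PowerShell SDK and an account that can manage authentication method policies.
Connect-MgGraph -Scopes "Policy.ReadWrite.AuthenticationMethod"

$body = @{
    "@odata.type"  = "#microsoft.graph.microsoftAuthenticatorAuthenticationMethodConfiguration"
    state          = "enabled"
    includeTargets = @(
        @{ targetType = "group"; id = "all_users" }   # 'all_users' targets every user in the tenant
    )
}

Invoke-MgGraphRequest -Method PATCH `
    -Uri "https://graph.microsoft.com/v1.0/policies/authenticationMethodsPolicy/authenticationMethodConfigurations/MicrosoftAuthenticator" `
    -Body ($body | ConvertTo-Json -Depth 5) -ContentType "application/json"
```

Repeat the same pattern for each method in your audit, swapping the configuration ID and `@odata.type` accordingly.
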
active-directory Howto Authentication Passwordless Faqs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-authentication-passwordless-faqs.md
Microsoft Entra ID combines the encrypted client key and message buffer into the
| tgt_key_type | int | The on-premises AD DS key type used for both the client key and the Kerberos session key included in the KERB_MESSAGE_BUFFER. |
| tgt_message_buffer | string | Base64 encoded KERB_MESSAGE_BUFFER. |
+### Do users need to be a member of the Domain Users Active Directory group?
+Yes. A user must be in the Domain Users group to be able to sign in using Azure AD Kerberos.
## Next steps

To get started with FIDO2 security keys and hybrid access to on-premises resources, see the following articles:
active-directory Howto Authentication Sms Signin https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-authentication-sms-signin.md
First, let's enable SMS-based authentication for your Microsoft Entra tenant.
1. Click **Enable** and select **Target users**. You can choose to enable SMS-based authentication for *All users* or *Select users* and groups.
+ > [!NOTE]
+ > To configure SMS-based authentication as a first factor (that is, to allow users to sign in with this method), check the **Use for sign-in** checkbox. Leaving this unchecked makes SMS-based authentication available only for multifactor authentication and self-service password reset.
![Enable SMS authentication in the authentication method policy window](./media/howto-authentication-sms-signin/enable-sms-authentication-method.png)
active-directory Reference Error Codes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/reference-error-codes.md
The `error` field has several possible values - review the protocol documentatio
| AADSTS70016 | AuthorizationPending - OAuth 2.0 device flow error. Authorization is pending. The device will retry polling the request. |
| AADSTS70018 | BadVerificationCode - Invalid verification code because the user typed in the wrong user code for the device code flow. Authorization isn't approved. |
| AADSTS70019 | CodeExpired - Verification code expired. Have the user retry the sign-in. |
-| AADSTS70043 | The refresh token has expired or is invalid due to sign-in frequency checks by Conditional Access. The token was issued on {issueDate} and the maximum allowed lifetime for this request is {time}. |
+| AADSTS70043 | BadTokenDueToSignInFrequency - The refresh token has expired or is invalid due to sign-in frequency checks by Conditional Access. The token was issued on {issueDate} and the maximum allowed lifetime for this request is {time}. |
| AADSTS75001 | BindingSerializationError - An error occurred during SAML message binding. |
| AADSTS75003 | UnsupportedBindingError - The app returned an error related to unsupported binding (SAML protocol response can't be sent via bindings other than HTTP POST). |
| AADSTS75005 | Saml2MessageInvalid - Microsoft Entra doesn't support the SAML request sent by the app for SSO. To learn more, see the troubleshooting article for error [AADSTS75005](/troubleshoot/azure/active-directory/error-code-aadsts75005-not-a-valid-saml-request). |
active-directory Howto Vm Sign In Azure Ad Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/devices/howto-vm-sign-in-azure-ad-linux.md
az group create --name AzureADLinuxVM --location southcentralus
az vm create \
  --resource-group AzureADLinuxVM \
  --name myVM \
- --image UbuntuLTS \
+ --image Ubuntu2204 \
  --assign-identity \
  --admin-username azureuser \
  --generate-ssh-keys
active-directory Whats New Docs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/customers/whats-new-docs.md
Title: "What's new in Azure Active Directory for customers" description: "New and updated documentation for the Azure Active Directory for customers documentation." Previously updated : 09/01/2023 Last updated : 09/29/2023
Welcome to what's new in Azure Active Directory for customers documentation. This article lists new docs that have been added and those that have had significant updates in the last three months.
+## September 2023
+
+This month, we renamed Azure Active Directory (Azure AD) to Microsoft Entra ID. For more information about the rebranding, see the [New name for Azure Active Directory](/azure/active-directory/fundamentals/new-name) article.
+
+### Updated articles
+
+- [Quickstart: Get started with our guide to run a sample app and sign in your users (preview)](quickstart-get-started-guide.md) - Start the guide updates
+- [Manage Microsoft Entra ID for customers resources with Microsoft Graph](microsoft-graph-operations.md) - Editorial updates
+- [Planning for customer identity and access management (preview)](concept-planning-your-solution.md) - Editorial updates
+- [Create a sign-up and sign-in user flow for customers](how-to-user-flow-sign-up-sign-in-customers.md) - Disable sign-up in a user flow
## August 2023

### New articles
Welcome to what's new in Azure Active Directory for customers documentation. Thi
- [Tutorial: Call a web API from your Node.js daemon application](tutorial-daemon-node-call-api-build-app.md) - Editorial review
- [Tutorial: Sign in users to your .NET browserless application](tutorial-browserless-app-dotnet-sign-in-build-app.md) - Editorial review
-## June 2023
-
-### New articles
-- [Quickstart: Create a tenant (preview)](quickstart-tenant-setup.md)
-- [Tutorial: Create a .NET MAUI shell app](tutorial-mobile-app-maui-sign-in-prepare-app.md)
-- [Tutorial: Register and configure .NET MAUI mobile app in a customer tenant](tutorial-mobile-app-maui-sign-in-prepare-tenant.md)
-- [Tutorial: Sign in users in .NET MAUI shell app](tutorial-mobile-app-maui-sign-in-sign-out.md)
-- [Use role-based access control in your Node.js web application](how-to-web-app-role-based-access-control.md)
-- [Tutorial: Handle authentication flows in a React single-page app](./tutorial-single-page-app-react-sign-in-configure-authentication.md)
-- [Tutorial: Create a .NET MAUI app](tutorial-desktop-app-maui-sign-in-prepare-app.md)
-- [Tutorial: Register and configure .NET MAUI app in a customer tenant](tutorial-desktop-app-maui-sign-in-prepare-tenant.md)
-- [Tutorial: Sign in users in .NET MAUI app](tutorial-desktop-app-maui-sign-in-sign-out.md)
-
-### Updated articles
-
-- [What is Microsoft Entra ID for customers?](overview-customers-ciam.md) - Added a section regarding Azure AD B2C to the overview and emphasized tenant creation when getting started
-- [Add user attributes to token claims](how-to-add-attributes-to-token.md) - Added attributes to token claims: fixed steps for updating the app manifest
-- [Tutorial: Prepare a React single-page app (SPA) for authentication in a customer tenant](./tutorial-single-page-app-react-sign-in-prepare-app.md) - JavaScript tutorial edits, code sample updates and fixed SPA aligning content styling
-- [Tutorial: Add sign-in and sign-out to a React single-page app (SPA) for a customer tenant](./tutorial-single-page-app-react-sign-in-sign-out.md) - JavaScript tutorial edits and fixed SPA aligning content styling
-- [Tutorial: Handle authentication flows in a Vanilla JavaScript single-page app](tutorial-single-page-app-vanillajs-configure-authentication.md) - Fixed SPA aligning content styling
-- [Tutorial: Prepare a Vanilla JavaScript single-page app for authentication in a customer tenant](tutorial-single-page-app-vanillajs-prepare-app.md) - Fixed SPA aligning content styling
-- [Tutorial: Prepare your customer tenant to authenticate a Vanilla JavaScript single-page app](tutorial-single-page-app-vanillajs-prepare-tenant.md) - Fixed SPA aligning content styling
-- [Tutorial: Add sign-in and sign-out to a Vanilla JavaScript single-page app for a customer tenant](tutorial-single-page-app-vanillajs-sign-in-sign-out.md) - Fixed SPA aligning content styling
-- [Tutorial: Prepare your customer tenant to authenticate users in a React single-page app (SPA)](tutorial-single-page-app-react-sign-in-prepare-tenant.md) - Fixed SPA aligning content styling
-- [Tutorial: Prepare an ASP.NET web app for authentication in a customer tenant](tutorial-web-app-dotnet-sign-in-prepare-app.md) - ASP.NET web app fixes
-- [Tutorial: Prepare your customer tenant to authenticate users in an ASP.NET web app](tutorial-web-app-dotnet-sign-in-prepare-tenant.md) - ASP.NET web app fixes
-- [Tutorial: Add sign-in and sign-out to an ASP.NET web application for a customer tenant](tutorial-web-app-dotnet-sign-in-sign-out.md) - ASP.NET web app fixes
-- [Collect user attributes during sign-up](how-to-define-custom-attributes.md) - Added a step for the Show more attributes pane and custom attributes
-- [Manage Azure Active Directory for customers resources with Microsoft Graph](microsoft-graph-operations.md) - Combined Graph API references into one doc
active-directory Whats New Docs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/whats-new-docs.md
Title: "What's new in Azure Active Directory External Identities" description: "New and updated documentation for the Azure Active Directory External Identities." Previously updated : 09/01/2023 Last updated : 09/29/2023
Welcome to what's new in Azure Active Directory External Identities documentation. This article lists new docs that have been added and those that have had significant updates in the last three months. To learn what's new with the External Identities service, see [What's new in Azure Active Directory](../fundamentals/whats-new.md).
+## September 2023
+
+This month, we renamed Azure Active Directory (Azure AD) to Microsoft Entra ID. For more information about the rebranding, see the [New name for Azure Active Directory](/azure/active-directory/fundamentals/new-name) article.
+
+### Updated articles
+
+- [Quickstart: Add a guest user with PowerShell](b2b-quickstart-invite-powershell.md) - Editorial updates
+- [Federation with SAML/WS-Fed identity providers for guest users](direct-federation.md) - Editorial updates
+- [Overview of Microsoft Entra External ID](external-identities-overview.md) - Editorial updates
+- [Billing model for Microsoft Entra External ID](external-identities-pricing.md) - Editorial updates
+- [Microsoft Entra B2B collaboration FAQs](faq.yml) - Editorial updates
+- [Grant Microsoft Entra B2B users access to your on-premises applications](hybrid-cloud-to-on-premises.md) - Editorial updates
+- [Grant locally managed partner accounts access to cloud resources using Microsoft Entra B2B collaboration](hybrid-on-premises-to-cloud.md) - Editorial updates
+- [Microsoft Entra B2B collaboration for hybrid organizations](hybrid-organizations.md) - Editorial updates
+- [Microsoft Entra B2B collaboration invitation redemption](redemption-experience.md) - Editorial updates
+- [Self-service for Microsoft Entra B2B collaboration sign-up](self-service-portal.md) - Editorial updates
+- [Add a custom approval workflow to self-service sign-up](self-service-sign-up-add-approvals.md) - Editorial updates
+- [Set up tenant restrictions v2](tenant-restrictions-v2.md) - Feature availability updates
+- [Troubleshooting Microsoft Entra B2B collaboration](troubleshoot.md) - Editorial updates
+- [Properties of a Microsoft Entra B2B collaboration user](user-properties.md) - Editorial updates
+- [B2B collaboration overview](what-is-b2b.md) - Editorial updates
+- [Add Microsoft Entra ID as an identity provider for External ID](default-account.md) - Editorial updates
+- [Add a self-service sign-up user flow to an app](self-service-sign-up-user-flow.md) - Editorial updates
+- [Add Microsoft Entra B2B collaboration users in the Microsoft Entra admin center](add-users-administrator.md) - Editorial updates
+- [Tutorial: Enforce multifactor authentication for B2B guest users](b2b-tutorial-require-mfa.md) - Editorial updates
+- [Configure cross-tenant access settings for B2B collaboration](cross-tenant-access-settings-b2b-collaboration.md) - Editorial updates
+- [Configure cross-tenant access settings for B2B direct connect](cross-tenant-access-settings-b2b-direct-connect.md) - Editorial updates
+- [Add Facebook as an identity provider for External Identities](facebook-federation.md) - Editorial updates
+- [Add Google as an identity provider for B2B guest users](google-federation.md) - Editorial updates
## August 2023

### Updated articles
Welcome to what's new in Azure Active Directory External Identities documentatio
- [Cross-tenant access overview](cross-tenant-access-overview.md) - New storage model update
- [Cross-tenant access settings](cross-tenant-access-settings-b2b-collaboration.md) - New storage model update
- [Configure B2B direct connect](cross-tenant-access-settings-b2b-direct-connect.md) - New storage model update
-
-
+
## July 2023

### New article
Welcome to what's new in Azure Active Directory External Identities documentatio
### Updated articles

- [Bulk invite users via PowerShell](bulk-invite-powershell.md) - Editorial and link updates
-- [Enforce multi-factor authentication for B2B guest users](b2b-tutorial-require-mfa.md) - Text corrections and screenshot updates
+- [Enforce multifactor authentication for B2B guest users](b2b-tutorial-require-mfa.md) - Text corrections and screenshot updates
- [Invite internal users to B2B](invite-internal-users.md) - Text corrections and screenshot updates
- [Grant B2B users access to local apps](hybrid-cloud-to-on-premises.md) - Text corrections
- [Tenant restrictions V2](tenant-restrictions-v2.md) - Note update
- [Leave an organization](leave-the-organization.md) - Screenshot update
- [Use audit logs and access reviews](auditing-and-reporting.md) - B2B sponsors feature update
-## June 2023
-
-### Updated articles
-- [Set up tenant restrictions V2 (Preview)](tenant-restrictions-v2.md) - Microsoft Teams updates
-- [Invite guest users to an app](add-users-information-worker.md) - Link and structure updates
active-directory Concept Group Based Licensing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/concept-group-based-licensing.md
+
+ Title: What is group-based licensing
+description: Learn about Microsoft Entra group-based licensing, including how it works, key features, and best practices.
+
+keywords: Azure AD licensing
+++++++ Last updated : 09/28/2023+++
+# Customer intent: As an IT admin, I want to understand group-based licensing, so I can effectively assign licenses to users in my organization.
++
+# What is group-based licensing in Microsoft Entra ID?
+
+Microsoft paid cloud services, such as Microsoft 365, Enterprise Mobility + Security, Dynamics 365, and other similar products, require licenses. These licenses are assigned to each user who needs access to these services. To manage licenses, administrators use one of the management portals (Office or Azure) and PowerShell cmdlets. Microsoft Entra ID is the underlying infrastructure that supports identity management for all Microsoft cloud services. Microsoft Entra ID stores information about license assignment states for users.
+
+Microsoft Entra ID includes group-based licensing, which allows you to assign one or more product licenses to a group. Microsoft Entra ID ensures that the licenses are assigned to all members of the group. Any new members who join the group are assigned the appropriate licenses. When they leave the group, those licenses are removed. This licensing management eliminates the need for automating license management via PowerShell to reflect changes in the organization and departmental structure on a per-user basis.
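
To make that concrete, here's a minimal sketch (illustrative only, not part of the source article) that assigns a product license to a security group with the Microsoft Graph PowerShell SDK; the group name and SKU part number are placeholder assumptions.

```powershell
# Minimal sketch: assign a product license to a group so that members inherit it automatically.
# Assumes the Microsoft Graph PowerShell SDK and a role that can manage group licenses.
Connect-MgGraph -Scopes "Group.ReadWrite.All", "Organization.Read.All"

# Placeholder group and SKU - substitute your own values.
$group = Get-MgGroup -Filter "displayName eq 'Sales Department'"
$sku   = Get-MgSubscribedSku | Where-Object SkuPartNumber -eq 'ENTERPRISEPREMIUM'

# Assign the license to the group; Microsoft Entra ID propagates it to all current and future members.
Set-MgGroupLicense -GroupId $group.Id `
    -AddLicenses @(@{ SkuId = $sku.SkuId }) `
    -RemoveLicenses @()
```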
+
+## Licensing requirements
+
+You must have one of the following licenses **for every user who benefits from** group-based licensing:
+
+- Paid or trial subscription for Microsoft Entra ID P1 and above
+
+- Paid or trial edition of Microsoft 365 Business Premium, Office 365 Enterprise E3, Office 365 A3, Office 365 GCC G3, Office 365 E3 for GCCH, or Office 365 E3 for DOD, and above
+
+### Required number of licenses
+
+For any groups assigned a license, you must also have a license for each unique member. While you don't have to assign each member of the group a license, you must have at least enough licenses to include all of the members. For example, if you have 1,000 unique members who are part of licensed groups in your tenant, you must have at least 1,000 licenses to meet the licensing agreement.
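
As a quick sanity check on that count, the following sketch (illustrative only; the group IDs and SKU part number are placeholders) tallies the unique members across your license-assigned groups and compares the total with the seats purchased for a SKU.

```powershell
# Illustrative sketch: compare unique members of license-assigned groups with purchased seats.
Connect-MgGraph -Scopes "Group.Read.All", "Organization.Read.All"

$licensedGroupIds = @('<group-object-id-1>', '<group-object-id-2>')   # placeholder IDs

# Collect every member across the groups, then de-duplicate by object ID.
$uniqueMemberCount = ($licensedGroupIds |
    ForEach-Object { Get-MgGroupMember -GroupId $_ -All } |
    Select-Object -ExpandProperty Id -Unique).Count

$sku = Get-MgSubscribedSku | Where-Object SkuPartNumber -eq 'ENTERPRISEPREMIUM'   # placeholder SKU
"Unique members: {0}; licenses purchased: {1}" -f $uniqueMemberCount, $sku.PrepaidUnits.Enabled
```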
+
+## Features
+
+Here are the main features of group-based licensing:
+
+- Licenses can be assigned to any security group in Microsoft Entra ID. Security groups can be synced from on-premises, by using [Microsoft Entra Connect](../hybrid/connect/whatis-azure-ad-connect.md). You can also create security groups directly in Microsoft Entra ID (also called cloud-only groups), or automatically via the [Microsoft Entra dynamic group feature](../enterprise-users/groups-create-rule.md).
+
+- When a product license is assigned to a group, the administrator can disable one or more service plans in the product. Typically, a service plan is disabled when the organization isn't yet ready to start using a service included in a product. For example, the administrator might assign Microsoft 365 to a department, but temporarily disable the Yammer service.
+
+- All Microsoft cloud services that require user-level licensing are supported. This support includes all Microsoft 365 products, Enterprise Mobility + Security, and Dynamics 365.
+
+- Group-based licensing is currently available through the [Azure portal](https://portal.azure.com) and the [Microsoft 365 admin center](https://admin.microsoft.com/).
+
+- Microsoft Entra ID automatically manages license modifications that result from group membership changes. Typically, license modifications are effective within minutes of a membership change.
+
+- A user can be a member of multiple groups with license policies specified. A user can also have some licenses that were directly assigned, outside of any groups. The resulting user state is a combination of all assigned product and service licenses. If a user is assigned the same license from multiple sources, the license is consumed only once.
+
+- In some cases, licenses can't be assigned to a user. For example, there might not be enough available licenses in the tenant, or conflicting services might have been assigned at the same time. Administrators have access to information about users for whom Microsoft Entra ID couldn't fully process group licenses. They can then take corrective action based on that information.
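
Following up on that last point, here's a hedged sketch (illustrative only; the group ID is a placeholder) of how you might surface those users: the `membersWithLicenseErrors` relationship on a group lists members that Microsoft Entra ID couldn't license.

```powershell
# Hedged sketch: list group members whose license assignment failed.
Connect-MgGraph -Scopes "Group.Read.All"

$groupId = '<licensed-group-object-id>'   # placeholder
$errors = Invoke-MgGraphRequest -Method GET `
    -Uri "https://graph.microsoft.com/v1.0/groups/$groupId/membersWithLicenseErrors"

# Each entry is a user object; print who needs corrective action.
$errors.value | ForEach-Object { $_.displayName }
```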
+
+## Your feedback is welcome!
+
+If you have feedback or feature requests, share them with us using [the Microsoft Entra admin forum](https://feedback.azure.com/d365community/forum/22920db1-ad25-ec11-b6e6-000d3a4f0789).
+
+## Next steps
+
+To learn more about other scenarios for license management through group-based licensing, see:
+
+* [Assigning licenses to a group in Microsoft Entra ID](../enterprise-users/licensing-groups-assign.md)
+* [Identifying and resolving license problems for a group in Microsoft Entra ID](../enterprise-users/licensing-groups-resolve-problems.md)
+* [How to migrate individual licensed users to group-based licensing in Microsoft Entra ID](../enterprise-users/licensing-groups-migrate-users.md)
+* [How to migrate users between product licenses using group-based licensing in Microsoft Entra ID](../enterprise-users/licensing-groups-change-licenses.md)
+* [Microsoft Entra group-based licensing additional scenarios](../enterprise-users/licensing-group-advanced.md)
+* [PowerShell examples for group-based licensing in Microsoft Entra ID](../enterprise-users/licensing-ps-examples.md)
active-directory How To Manage Groups https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/how-to-manage-groups.md
Title: How to manage groups
-description: Instructions about how to manage Microsoft Entra groups and group membership.
+description: Instructions about how to create and update Microsoft Entra groups, such as membership and settings.
Last updated 09/12/2023 +
+# Customer Intent: As an IT admin, I want to learn how to create groups, add members, and adjust settings so that I can grant the right access to the right services for the right people.
+ # Manage Microsoft Entra groups and group membership
To create a basic group and add members:
1. Enter a **Group name.** Choose a name that you'll remember and that makes sense for the group. A check will be performed to determine if the name is already in use. If the name is already in use, you'll be asked to change the name of your group.
+ - The name of the group can't start with a space. Starting the name with a space prevents the group from appearing as an option for steps such as adding role assignments to group members.
1. **Group email address**: Only available for Microsoft 365 group types. Enter an email address manually or use the email address built from the Group name you provided.
1. **Group description.** Add an optional description to your group.
You can remove an existing Security group from another Security group; however,
You can delete a group for any number of reasons, but typically it will be because you:

-- Chose the incorrect **Group type** option.
+- Choose the incorrect **Group type** option.
- Created a duplicate group by mistake.
- No longer need the group.
active-directory How To Rename Azure Ad https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/how-to-rename-azure-ad.md
- Last updated 09/27/2023
# Customer intent: As a content creator, employee of an organization with internal documentation for IT or identity security admins, developer of Azure AD-enabled apps, ISV, or Microsoft partner, I want to learn how to correctly update our documentation or content to use the new name for Azure AD.

# How to: Rename Azure AD

Azure Active Directory (Azure AD) is being renamed to Microsoft Entra ID to better communicate the multicloud, multiplatform functionality of the product and unify the naming of the Microsoft Entra product family.
This article provides best practices and support for customers and organizations
## Prerequisites
-Before changing instances of Azure AD in your documentation or content, familiarize yourself with the guidance in [New name for Azure AD](new-name.md) to:
+Before changing instances of Azure AD in your documentation or content, familiarize yourself with the guidance in [New name for Azure AD](./new-name.md) to:
- Understand the product name and why we made the change
- Download the new product icon
Update your organization's content and experiences using the relevant tools.
Use the following criteria to determine what change(s) you need to make to instances of `Azure Active Directory (Azure AD)`, `Azure Active Directory`, `Azure AD`, `AAD`.

1. If the text string is found in the naming dictionary of previous terms, change it to the new term.
-1. If a punctuation mark follows "Azure Active Directory (Azure AD), Azure Active Directory, Azure AD, AAD," replace with 'Microsoft Entra ID' because that's the product name.
-1. If `Azure Active Directory (Azure AD)`, `Azure Active Directory`, `Azure AD`, `AAD` is followed by `for`, `Premium`, `Plan`, `P1`, or `P2`, replace with `Microsoft Entra ID` because it refers to a SKU name or Service Plan.
+1. If a punctuation mark follows `Azure Active Directory (Azure AD)`, `Azure Active Directory`, `Azure AD`, or `AAD`, replace with `Microsoft Entra ID` because that's the product name.
+1. If `Azure Active Directory (Azure AD)`, `Azure Active Directory`, `Azure AD`, or `AAD` is followed by `for`, `Premium`, `Plan`, `P1`, or `P2`, replace with `Microsoft Entra ID` because it refers to a SKU name or Service Plan.
1. If an article (`a`, `an`, `the`) or possessive (`your`, `your organization's`) precedes (`Azure Active Directory (Azure AD)`, `Azure Active Directory`, `Azure AD`, `AAD`), then replace with `Microsoft Entra` because it's a feature name. For example:
    1. "an Azure AD tenant" becomes "a Microsoft Entra tenant"
    1. "your organization's Azure AD tenant" becomes "your Microsoft Entra tenant"
-1. If `Azure Active Directory (Azure AD)`, `Azure Active Directory`, `Azure AD`, `AAD` is followed by an adjective or noun not in the previous steps, then replace with `Microsoft Entra` because it's a feature name. For example,"Azure AD Conditional Access" becomes "Microsoft Entra Conditional Access," while "Azure AD tenant" becomes "Microsoft Entra tenant."
-1. Otherwise, replace `Azure Active Directory (Azure AD)`, `Azure Active Directory`, `Azure AD`, `AAD` with `Microsoft Entra ID`
+1. If `Azure Active Directory (Azure AD)`, `Azure Active Directory`, `Azure AD`, or `AAD` is followed by an adjective or noun not in the previous steps, then replace with `Microsoft Entra` because it's a feature name. For example, `Azure AD Conditional Access` becomes `Microsoft Entra Conditional Access`, while `Azure AD tenant` becomes `Microsoft Entra tenant`.
+1. Otherwise, replace `Azure Active Directory (Azure AD)`, `Azure Active Directory`, `Azure AD`, or `AAD` with `Microsoft Entra ID`.
See the section [Glossary of updated terminology](new-name.md#glossary-of-updated-terminology) to further refine your custom logic.

### Update graphics and icons

1. Replace the Azure AD icon with the Microsoft Entra ID icon.
-1. Replace titles or text containing `Azure Active Directory (Azure AD)`, `Azure Active Directory`, `Azure AD`, `AAD` with `Microsoft Entra ID`.
+1. Replace titles or text containing `Azure Active Directory (Azure AD)`, `Azure Active Directory`, `Azure AD`, or `AAD` with `Microsoft Entra ID`.
## Sample PowerShell script

You can use the following PowerShell script as a baseline to rename Azure AD references in your documentation or content. This code sample:

-- Scans .resx files within a specified folder and all nested folders.
+- Scans `.resx` files within a specified folder and all nested folders.
- Edits files by replacing any references to `Azure Active Directory (Azure AD)`, `Azure Active Directory`, `Azure AD`, `AAD` with the correct terminology according to [New name for Azure AD](new-name.md).

Edit the baseline script according to your needs and the scope of files you need to update. You may need to account for edge cases and modify the script according to how you've defined the messages in your source files. The script is not fully automated. If you use the script as-is, you must review the outputs and may need to make additional adjustments to follow the guidance in [New name for Azure AD](new-name.md).
$terminology = @(
    @{ Key = 'Azure AD seamless single sign-on'; Value = 'Microsoft Entra seamless single sign-on' },
    @{ Key = 'Azure AD self-service password reset'; Value = 'Microsoft Entra self-service password reset' },
    @{ Key = 'Azure AD SSPR'; Value = 'Microsoft Entra SSPR' },
- @{ Key = 'Azure AD SSPR'; Value = 'Microsoft Entra SSPR' },
    @{ Key = 'Azure AD domain'; Value = 'Microsoft Entra domain' },
    @{ Key = 'Azure AD group'; Value = 'Microsoft Entra group' },
    @{ Key = 'Azure AD login'; Value = 'Microsoft Entra login' },
$postTransforms = @(
    @{ Key = ' an ME-ID'; Value = ' a ME-ID' }
    @{ Key = '>An ME-ID'; Value = '>A ME-ID' }
    @{ Key = 'Microsoft Entra ID administration portal'; Value = 'Microsoft Entra administration portal' }
- @{ Key = 'Microsoft Entra IDvanced Threat'; Value = 'Azure Advanced Threat' }
+ @{ Key = 'Microsoft Entra ID Advanced Threat'; Value = 'Azure Advanced Threat' }
    @{ Key = 'Entra ID hybrid join'; Value = 'Entra hybrid join' }
    @{ Key = 'Microsoft Entra ID join'; Value = 'Microsoft Entra join' }
    @{ Key = 'ME-ID join'; Value = 'Microsoft Entra join' }
    @{ Key = 'Microsoft Entra ID service principal'; Value = 'Microsoft Entra service principal' }
- @{ Key = 'DownloMicrosoft Entra Connector'; Value = 'Download connector' }
+ @{ Key = 'Download Microsoft Entra Connector'; Value = 'Download connector' }
    @{ Key = 'Microsoft Microsoft'; Value = 'Microsoft' }
)
$postTransforms = @(
$terminology = $terminology.GetEnumerator() | Sort-Object -Property { $_.Key.Length } -Descending
$postTransforms = $postTransforms.GetEnumerator() | Sort-Object -Property { $_.Key.Length } -Descending
-# Get all resx and resjson files in the current directory and its subdirectories, ignoring .gitignored files.
-Write-Host "Getting all resx and resjson files in the current directory and its subdirectories, ignoring .gitignored files."
+# Get all resx files in the current directory and its subdirectories, ignoring .gitignored files.
+Write-Host "Getting all resx files in the current directory and its subdirectories, ignoring .gitignored files."
$gitIgnoreFiles = Get-ChildItem -Path . -Filter .gitignore -Recurse
-$targetFiles = Get-ChildItem -Path . -Include *.resx, *.resjson -Recurse
+$targetFiles = Get-ChildItem -Path . -Include *.resx -Recurse
$filteredFiles = @()

foreach ($file in $targetFiles) {
foreach ($file in $targetFiles) {
$scriptPath = $MyInvocation.MyCommand.Path
$filteredFiles = $filteredFiles | Where-Object { $_.FullName -ne $scriptPath }
-# This command will get all the files with the extensions .resx and .resjson in the current directory and its subdirectories, and then filter out those that match the patterns in the .gitignore file. The Resolve-Path cmdlet will find the full path of the .gitignore file, and the Get-Content cmdlet will read its content as a single string. The -notmatch operator will compare the full name of each file with the .gitignore content using regular expressions, and return only those that do not match.
+# This command will get all the files with the extensions .resx in the current directory and its subdirectories, and then filter out those that match the patterns in the .gitignore file. The Resolve-Path cmdlet will find the full path of the .gitignore file, and the Get-Content cmdlet will read its content as a single string. The -notmatch operator will compare the full name of each file with the .gitignore content using regular expressions, and return only those that do not match.
Write-Host "Found $($filteredFiles.Count) files." function Update-Terminology {
To help your customers with the transition, it's helpful to add a note: "Azure A
## Next steps

-- [Stay up-to-date with what's new in Azure AD/Microsoft Entra ID](whats-new.md)
+- [Stay up-to-date with what's new in Microsoft Entra ID (formerly Azure AD)](./whats-new.md)
- [Get started using Microsoft Entra ID at the Microsoft Entra admin center](https://entra.microsoft.com/)
-- [Learn more about Microsoft Entra with content from Microsoft Learn](/entra)
+- [Learn more about Microsoft Entra ID with content from Microsoft Learn](/entra)
+
+<!-- docutune:ignore "Azure Active Directory" "Azure AD" "AAD" -->
active-directory New Name https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/new-name.md
-+ Last updated 09/27/2023
# New name for Azure Active Directory
-To communicate the multicloud, multiplatform functionality of the products, alleviate confusion with Windows Server Active Directory, and unify the [Microsoft Entra](/entra) product family, the new name for Azure Active Directory (Azure AD) is Microsoft Entra ID.
+To communicate the multicloud, multiplatform functionality of the products, alleviate confusion with Windows Server Active Directory, and unify the [Microsoft Entra](/entra) product family, the new name for Azure Active Directory (Azure AD) is Microsoft Entra ID.
## No interruptions to usage or service
The Microsoft Entra ID name more accurately represents the multicloud and multip
### What is Microsoft Entra?
-Microsoft Entra helps you protect all identities and secure network access everywhere. The expanded product family includes:
+The Microsoft Entra product family helps you protect all identities and secure network access everywhere. The expanded product family includes:
| Identity and access management | New identity categories | Network access |
|---|---|---|
There are no changes to the identity features and functionality available in Mic
### What's changing for Microsoft 365 E5?
-In addition to the capabilities they already have, Microsoft 365 E5 customers also get access to new identity protection capabilities like token protection, Conditional Access based on GPS-based location and step-up authentication for the most sensitive actions. Microsoft 365 E5 includes Microsoft Entra P2, currently known as Azure AD Premium P2.
+In addition to the capabilities they already have, Microsoft 365 E5 customers also get access to new identity protection capabilities like token protection, Conditional Access based on GPS-based location and step-up authentication for the most sensitive actions. Microsoft 365 E5 includes Microsoft Entra ID P2, currently known as Azure AD Premium P2.
### What's changing for identity developer and devops experience?
Only official product names are capitalized, plus Conditional Access and My * ap
## Next steps

- [How to: Rename Azure AD](how-to-rename-azure-ad.md)
-- [Stay up-to-date with what's new in Azure AD/Microsoft Entra ID](whats-new.md)
+- [Stay up-to-date with what's new in Microsoft Entra ID (formerly Azure AD)](./whats-new.md)
- [Get started using Microsoft Entra ID at the Microsoft Entra admin center](https://entra.microsoft.com/)
-- [Learn more about Microsoft Entra with content from Microsoft Learn](/entra)
+- [Learn more about the Microsoft Entra family with content from Microsoft Learn](/entra)
+
+<!-- docutune:ignore "Azure Active Directory" "Azure AD" "AAD" "Entra ID" "Cloud Knox" "Identity Governance" -->
active-directory Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/whats-new.md
Azure AD receives improvements on an ongoing basis. To stay up to date with the
This page updates monthly, so revisit it regularly. If you're looking for items older than six months, you can find them in [Archive for What's new in Azure Active Directory](whats-new-archive.md).
+## September 2023
+
+### Public Preview - Managing and Changing Passwords in My Security Info
+
+**Type:** New feature
+**Service category:** My Profile/Account
+**Product capability:** End User Experiences
+
+The My Security Info management portal ([My Sign-Ins | Security Info | Microsoft.com](https://mysignins.microsoft.com/security-info)) will now support an improved end user experience of managing passwords. Users are able to change their password, and users capable of multifactor authentication (MFA) are able to update their passwords without providing their current password.
+++
+### Public Preview - Device-bound passkeys as an authentication method
+
+**Type:** Changed feature
+**Service category:** Authentications (Logins)
+**Product capability:** User Authentication
+
+Beginning January 2024, Microsoft Entra ID will support [device-bound passkeys](https://passkeys.dev/docs/reference/terms/#device-bound-passkey) stored on computers and mobile devices as an authentication method in preview, in addition to the existing support for FIDO2 security keys. This enables your users to perform phishing-resistant authentication using the devices that they already have.  
++
+We'll expand the existing FIDO2 authentication methods policy and end user registration experience to support this preview release. If your organization requires or prefers FIDO2 authentication using physical security keys only, then please enforce key restrictions to only allow security key models that you accept in your FIDO2 policy. Otherwise, the new preview capabilities enable your users to register for device-bound passkeys stored on Windows, macOS, iOS, and Android. Learn more about FIDO2 key restrictions [here](../authentication/howto-authentication-passwordless-security-key.md).
+++
+### General Availability - Authenticator on Android is FIPS 140 compliant
+
+**Type:** New feature
+**Service category:** Microsoft Authenticator App
+**Product capability:** User Authentication
+
+Authenticator on Android is FIPS 140 compliant for all Azure AD authentications using push multifactor authentication (MFA), Passwordless Phone Sign-In (PSI), and time-based one-time passcodes (TOTP). No changes in configuration are required in the Authenticator app or Azure portal to enable this capability. For more information, see: [Authentication methods in Microsoft Entra ID - Microsoft Authenticator app](../authentication/concept-authentication-authenticator-app.md).
+++
+### General Availability - Recovery of deleted application and service principals is now available
+
+**Type:** New feature
+**Service category:** Enterprise Apps
+**Product capability:** Identity Lifecycle Management
+
+With this release, you can now recover applications along with their original service principals, eliminating the need for extensive reconfiguration and code changes ([Learn more](../manage-apps/delete-recover-faq.yml)). It significantly improves the application recovery story and addresses a long-standing customer need. This change benefits you in the following ways:
+
+- **Faster recovery**: You can now recover your applications in a fraction of the time it used to take, reducing downtime and minimizing disruptions.
+- **Cost savings**: With quicker recovery, you can save on operational costs associated with extended outages and labor-intensive recovery efforts.
+- **Preserved data**: Previously lost data, such as SAML configurations, is now retained, ensuring a smoother transition back to normal operations.
+- **Improved user experience**: Faster recovery times translate to improved user experience and customer satisfaction, as applications are back up and running swiftly.
+++
+### Public Preview - New provisioning connectors in the Azure AD Application Gallery - September 2023
+
+**Type:** New feature
+**Service category:** App Provisioning
+**Product capability:** 3rd Party Integration
+
+
+We've added the following new applications in our App gallery with Provisioning support. You can now automate creating, updating, and deleting of user accounts for these newly integrated apps:
+
+- [Datadog](../saas-apps/datadog-provisioning-tutorial.md)
+- [Litmos](../saas-apps/litmos-provisioning-tutorial.md)
+- [Postman](../saas-apps/postman-provisioning-tutorial.md)
+- [Recnice](../saas-apps/recnice-provisioning-tutorial.md)
+
+For more information about how to better secure your organization by using automated user account provisioning, see: [Automate user provisioning to SaaS applications with Azure AD](../app-provisioning/user-provisioning.md).
+++
+### General Availability - Web Sign-In for Windows
+
+**Type:** Changed feature
+**Service category:** Authentications (Logins)
+**Product capability:** User Authentication
+
+We're thrilled to announce that as part of the Windows 11 September moment, we're releasing a new Web Sign-In experience that will expand the number of supported scenarios and greatly improve security, reliability, performance, and overall end-to-end experience for our users.
+
+Web Sign-In (WSI) is a credential provider on the Windows lock/sign-in screen for Microsoft Entra joined (AADJ) devices. It provides a web experience used for authentication and returns an auth token back to the operating system so the user can unlock or sign in to the machine.
+
+Web Sign-In was initially intended to be used for a wide range of auth credential scenarios; however, it was only previously released for limited scenarios such as: [Simplified EDU Web Sign-In](/education/windows/federated-sign-in?tabs=intune) and recovery flows via [Temporary Access Password (TAP)](../authentication/howto-authentication-temporary-access-pass.md).
+
+The underlying provider for Web Sign-In has been rewritten from the ground up with security and improved performance in mind. This release moves the Web Sign-In infrastructure from the Cloud Host Experience (CHX) WebApp to a newly written Login Web Host (LWH) for the September moment. This release provides better security and reliability to support the previous EDU and TAP experiences, plus new workflows that enable using various authentication methods to unlock or sign in to the desktop.
+++
+### General Availability - Support for Microsoft admin portals in Conditional Access
+
+**Type:** New feature
+**Service category:** Conditional Access
+**Product capability:** Identity Security & Protection
+
+When a Conditional Access policy targets the Microsoft Admin Portals cloud app, the policy is enforced for tokens issued to application IDs of the following Microsoft administrative portals:
+
+- Azure portal
+- Exchange admin center
+- Microsoft 365 admin center
+- Microsoft 365 Defender portal
+- Microsoft Entra admin center
+- Microsoft Intune admin center
+- Microsoft Purview compliance portal
+
+For more information, see: [Microsoft Admin Portals (preview)](../conditional-access/concept-conditional-access-cloud-apps.md#microsoft-admin-portals-preview).
## August 2023

### General Availability - Tenant Restrictions V2
For more information, see: [Require an app protection policy on Windows devices
In July 2023, we added the following new applications in our App gallery with Federation support:
-[Gainsight SAML](../saas-apps/gainsight-saml-tutorial.md), [Dataddo](https://www.dataddo.com/), [Puzzel](https://www.puzzel.com/), [Worthix App](../saas-apps/worthix-app-tutorial.md), [iOps360 IdConnect](https://iops360.com/iops360-id-connect-azuread-single-sign-on/), [Airbase](../saas-apps/airbase-tutorial.md), [Couchbase Capella - SSO](../saas-apps/couchbase-capella-sso-tutorial.md), [SSO for Jama Connect®](../saas-apps/sso-for-jama-connect-tutorial.md), [mediment (メディメント)](https://mediment.jp/), [Netskope Cloud Exchange Administration Console](../saas-apps/netskope-cloud-exchange-administration-console-tutorial.md), [Uber](../saas-apps/uber-tutorial.md), [Plenda](https://app.plenda.nl/), [Deem Mobile](../saas-apps/deem-mobile-tutorial.md), [40SEAS](https://www.40seas.com/), [Vivantio](https://www.vivantio.com/), [AppTweak](https://www.apptweak.com/), [ioTORQ EMIS](https://www.iotorq.com/), [Vbrick Rev Cloud](../saas-apps/vbrick-rev-cloud-tutorial.md), [OptiTurn](../saas-apps/optiturn-tutorial.md), [Application Experience with Mist](https://www.mist.com/), [クラウド勤怠管理システムKING OF TIME](../saas-apps/cloud-attendance-management-system-king-of-time-tutorial.md), [Connect1](../saas-apps/connect1-tutorial.md), [DB Education Portal for Schools](../saas-apps/db-education-portal-for-schools-tutorial.md), [SURFconext](../saas-apps/surfconext-tutorial.md), [Chengliye Smart SMS Platform](../saas-apps/chengliye-smart-sms-platform-tutorial.md), [CivicEye SSO](../saas-apps/civic-eye-sso-tutorial.md), [Colloquial](../saas-apps/colloquial-tutorial.md), [BigPanda](../saas-apps/bigpanda-tutorial.md), [Foreman](https://foreman.mn/)
+[Gainsight SAML](../saas-apps/gainsight-saml-tutorial.md), [Dataddo](https://www.dataddo.com/), [Puzzel](https://www.puzzel.com/), [Worthix App](../saas-apps/worthix-app-tutorial.md), [iOps360 IdConnect](https://iops360.com/iops360-id-connect-azuread-single-sign-on/), [Airbase](../saas-apps/airbase-tutorial.md), [Couchbase Capella - SSO](../saas-apps/couchbase-capella-sso-tutorial.md), [SSO for Jama Connect®](../saas-apps/sso-for-jama-connect-tutorial.md), [mediment (メディメント)](https://mediment.jp/), [Netskope Cloud Exchange Administration Console](../saas-apps/netskope-cloud-exchange-administration-console-tutorial.md), [Uber](../saas-apps/uber-tutorial.md), [Plenda](https://app.plenda.nl/), [Deem Mobile](../saas-apps/deem-mobile-tutorial.md), [40SEAS](https://www.40seas.com/), [Vivantio](https://www.vivantio.com/), [AppTweak](https://www.apptweak.com/), [Vbrick Rev Cloud](../saas-apps/vbrick-rev-cloud-tutorial.md), [OptiTurn](../saas-apps/optiturn-tutorial.md), [Application Experience with Mist](https://www.mist.com/), [クラウド勤怠管理システムKING OF TIME](../saas-apps/cloud-attendance-management-system-king-of-time-tutorial.md), [Connect1](../saas-apps/connect1-tutorial.md), [DB Education Portal for Schools](../saas-apps/db-education-portal-for-schools-tutorial.md), [SURFconext](../saas-apps/surfconext-tutorial.md), [Chengliye Smart SMS Platform](../saas-apps/chengliye-smart-sms-platform-tutorial.md), [CivicEye SSO](../saas-apps/civic-eye-sso-tutorial.md), [Colloquial](../saas-apps/colloquial-tutorial.md), [BigPanda](../saas-apps/bigpanda-tutorial.md), [Foreman](https://foreman.mn/)
You can also find the documentation for all the applications here: https://aka.ms/AppsTutorial.
active-directory Entitlement Management Access Package Assignments https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-access-package-assignments.md
To use entitlement management and assign users to access packages, you must have
## View assignments programmatically

### View assignments with Microsoft Graph
-You can also retrieve assignments in an access package using Microsoft Graph. A user in an appropriate role with an application that has the delegated `EntitlementManagement.Read.All` or `EntitlementManagement.ReadWrite.All` permission can call the API to [list accessPackageAssignments](/graph/api/entitlementmanagement-list-accesspackageassignments?view=graph-rest-beta&preserve-view=true). While an identity governance administrator can retrieve access packages from multiple catalogs, if user or application service principal is assigned only to catalog-specific delegated administrative roles, the request must supply a filter to indicate a specific access package, such as: `$filter=accessPackage/id eq 'a914b616-e04e-476b-aa37-91038f0b165b'`. An application that has the application permission `EntitlementManagement.Read.All` or `EntitlementManagement.ReadWrite.All` permission can also use this API to retrieve assignments across all catalogs.
+You can also retrieve assignments in an access package using Microsoft Graph. A user in an appropriate role with an application that has the delegated `EntitlementManagement.Read.All` or `EntitlementManagement.ReadWrite.All` permission can call the API to [list accessPackageAssignments](/graph/api/entitlementmanagement-list-accesspackageassignments?view=graph-rest-beta&preserve-view=true). An application that has the application permission `EntitlementManagement.Read.All` or `EntitlementManagement.ReadWrite.All` permission can also use this API to retrieve assignments across all catalogs.
+
+Microsoft Graph returns the results in pages and includes a reference to the next page of results in the `@odata.nextLink` property of each response while more pages remain. To read all results, continue to call Microsoft Graph with the `@odata.nextLink` value from each response until the `@odata.nextLink` property is no longer returned, as described in [paging Microsoft Graph data in your app](/graph/paging).
+
+While an identity governance administrator can retrieve access packages from multiple catalogs, if the user or application service principal is assigned only to catalog-specific delegated administrative roles, the request must supply a filter to indicate a specific access package, such as: `$filter=accessPackage/id eq 'a914b616-e04e-476b-aa37-91038f0b165b'`.
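
For illustration, here's a minimal sketch of following `@odata.nextLink` when calling the beta API directly. The access package ID is the example value from the filter above, and the request URI assumes the beta entitlement management endpoint linked earlier; adjust both for your environment.

```powershell
Connect-MgGraph -Scopes "EntitlementManagement.Read.All"

# Example access package ID from the filter above; replace with your own.
$uri = "https://graph.microsoft.com/beta/identityGovernance/entitlementManagement/accessPackageAssignments" +
    "?`$filter=accessPackage/id eq 'a914b616-e04e-476b-aa37-91038f0b165b'"

$allAssignments = @()
do {
    # Each response contains a page of results and, if more pages remain, an @odata.nextLink.
    $response = Invoke-MgGraphRequest -Method GET -Uri $uri
    $allAssignments += $response.value
    $uri = $response.'@odata.nextLink'
} while ($uri)

"Retrieved $($allAssignments.Count) assignments"
```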
### View assignments with PowerShell
-You can perform this query in PowerShell with the `Get-MgEntitlementManagementAssignment` cmdlet from the [Microsoft Graph PowerShell cmdlets for Identity Governance](https://www.powershellgallery.com/packages/Microsoft.Graph.Identity.Governance/) module version 2.1.x or later module version. This script illustrates using the Microsoft Graph PowerShell cmdlets module version 2.4.0. This cmdlet takes as a parameter the access package ID, which is included in the response from the `Get-MgEntitlementManagementAccessPackage` cmdlet.
+You can also retrieve assignments to an access package in PowerShell with the `Get-MgEntitlementManagementAssignment` cmdlet from the [Microsoft Graph PowerShell cmdlets for Identity Governance](https://www.powershellgallery.com/packages/Microsoft.Graph.Identity.Governance/) module, version 2.1.x or later. This script illustrates using module version 2.4.0 to retrieve all assignments to a particular access package. The cmdlet takes the access package ID as a parameter, which is included in the response from the `Get-MgEntitlementManagementAccessPackage` cmdlet. Be sure to include the `-All` parameter when retrieving assignments so that all pages of results are returned.
```powershell Connect-MgGraph -Scopes "EntitlementManagement.Read.All" $accesspackage = Get-MgEntitlementManagementAccessPackage -Filter "displayName eq 'Marketing Campaign'"
+if ($null -eq $accesspackage) { throw "no access package"}
$assignments = @(Get-MgEntitlementManagementAssignment -AccessPackageId $accesspackage.Id -ExpandProperty target -All -ErrorAction Stop) $assignments | ft Id,state,{$_.Target.id},{$_.Target.displayName} ```
+The preceding query returns expired and delivering assignments as well as delivered assignments. To exclude expired or delivering assignments, use a filter that includes both the access package ID and the state of the assignments. This script illustrates using a filter to retrieve only the assignments in the `Delivered` state for a particular access package. The script then generates a CSV file, `assignments.csv`, with one row per assignment.
+
+```powershell
+Connect-MgGraph -Scopes "EntitlementManagement.Read.All"
+$accesspackage = Get-MgEntitlementManagementAccessPackage -Filter "displayName eq 'Marketing Campaign'"
+if ($null -eq $accesspackage) { throw "no access package"}
+$accesspackageId = $accesspackage.Id
+$filter = "accessPackage/id eq '" + $accesspackageId + "' and state eq 'Delivered'"
+$assignments = @(Get-MgEntitlementManagementAssignment -Filter $filter -ExpandProperty target -All -ErrorAction Stop)
+$sp = $assignments | select-object -Property Id,{$_.Target.id},{$_.Target.ObjectId},{$_.Target.DisplayName},{$_.Target.PrincipalName}
+$sp | Export-Csv -Encoding UTF8 -NoTypeInformation -Path ".\assignments.csv"
+```
++ ## Directly assign a user In some cases, you might want to directly assign specific users to an access package so that users don't have to go through the process of requesting the access package. To directly assign users, the access package must have a policy that allows administrator direct assignments.
You can assign a user to an access package in PowerShell with the `New-MgEntitle
```powershell Connect-MgGraph -Scopes "EntitlementManagement.ReadWrite.All"
-$accesspackage = Get-MgEntitlementManagementAccessPackage -Filter "displayname eq 'Marketing Campaign'" -ExpandProperty assignmentpolicies
+$accesspackage = Get-MgEntitlementManagementAccessPackage -Filter "displayname eq 'Marketing Campaign'" -ExpandProperty "assignmentpolicies"
+if ($null -eq $accesspackage) { throw "no access package"}
$policy = $accesspackage.AssignmentPolicies[0] $userid = "cdbdf152-82ce-479c-b5b8-df90f561d5c7" $params = @{
Connect-MgGraph -Scopes "EntitlementManagement.ReadWrite.All,Directory.Read.All"
$members = @(Get-MgGroupMember -GroupId "a34abd69-6bf8-4abd-ab6b-78218b77dc15" -All) $accesspackage = Get-MgEntitlementManagementAccessPackage -Filter "displayname eq 'Marketing Campaign'" -ExpandProperty "assignmentPolicies"
+if ($null -eq $accesspackage) { throw "no access package"}
$policy = $accesspackage.AssignmentPolicies[0] $req = New-MgBetaEntitlementManagementAccessPackageAssignment -AccessPackageId $accesspackage.Id -AssignmentPolicyId $policy.Id -RequiredGroupMember $members ```
If you wish to add an assignment for a user who is not yet in your directory, yo
```powershell Connect-MgGraph -Scopes "EntitlementManagement.ReadWrite.All" $accesspackage = Get-MgEntitlementManagementAccessPackage -Filter "displayname eq 'Marketing Campaign'" -ExpandProperty "assignmentPolicies"
+if ($null -eq $accesspackage) { throw "no access package"}
$policy = $accesspackage.AssignmentPolicies[0] $req = New-MgBetaEntitlementManagementAccessPackageAssignmentRequest -AccessPackageId $accesspackage.Id -AssignmentPolicyId $policy.Id -TargetEmail "sample@example.com" ```
active-directory Entitlement Management Access Package Requests https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-access-package-requests.md
If you have a set of users whose requests are in the "Partially Delivered" or "F
### View requests with Microsoft Graph You can also retrieve requests for an access package using Microsoft Graph. A user in an appropriate role with an application that has the delegated `EntitlementManagement.Read.All` or `EntitlementManagement.ReadWrite.All` permission can call the API to [list accessPackageAssignmentRequests](/graph/api/entitlementmanagement-list-accesspackageassignmentrequests?view=graph-rest-beta&preserve-view=true). While an identity governance administrator can retrieve access package requests from multiple catalogs, if the user or application service principal is assigned only to catalog-specific delegated administrative roles, the request must supply a filter to indicate a specific access package, such as: `$expand=accessPackage&$filter=accessPackage/id eq '9bbe5f7d-f1e7-4eb1-a586-38cdf6f8b1ea'`. An application that has the `EntitlementManagement.Read.All` or `EntitlementManagement.ReadWrite.All` application permission can also use this API to retrieve requests across all catalogs.
+Microsoft Graph returns the results in pages and includes a reference to the next page of results in the `@odata.nextLink` property of each response while more pages remain. To read all results, continue to call Microsoft Graph with the `@odata.nextLink` value from each response until the `@odata.nextLink` property is no longer returned, as described in [paging Microsoft Graph data in your app](/graph/paging).
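
A minimal sketch of calling the beta requests API with that filter and following the paging links; the access package ID is the example value above, and the endpoint path is an assumption based on the linked API reference.

```powershell
Connect-MgGraph -Scopes "EntitlementManagement.Read.All"

# Example access package ID from the filter above; replace with your own.
$uri = "https://graph.microsoft.com/beta/identityGovernance/entitlementManagement/accessPackageAssignmentRequests" +
    "?`$expand=accessPackage&`$filter=accessPackage/id eq '9bbe5f7d-f1e7-4eb1-a586-38cdf6f8b1ea'"

$requests = @()
do {
    $response = Invoke-MgGraphRequest -Method GET -Uri $uri
    $requests += $response.value
    $uri = $response.'@odata.nextLink'   # Follow paging until no more pages are returned.
} while ($uri)

"Retrieved $($requests.Count) requests"
```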
+ ## Remove request (Preview) You can also remove a completed request that is no longer needed. To remove a request:
active-directory Entitlement Management Catalog Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-catalog-create.md
You can also add a resource to a catalog in PowerShell with the `New-MgEntitleme
Connect-MgGraph -Scopes "EntitlementManagement.ReadWrite.All,Group.ReadWrite.All" $g = Get-MgGroup -Filter "displayName eq 'Marketing'"
+if ($null -eq $g) {throw "no group" }
$catalog = Get-MgEntitlementManagementCatalog -Filter "displayName eq 'Marketing'"
+if ($null -eq $catalog) { throw "no catalog" }
$params = @{ requestType = "adminAdd" resource = @{
active-directory How To Lifecycle Workflow Sync Attributes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/how-to-lifecycle-workflow-sync-attributes.md
The following table shows the scheduling (trigger) relevant attributes and the m
> [!NOTE] > Manually setting the employeeLeaveDateTime for cloud-only users requires special permissions. For more information, see: [Configure the employeeLeaveDateTime property for a user](/graph/tutorial-lifecycle-workflows-set-employeeleavedatetime)
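
As a sketch, the scheduling attributes for a cloud-only user can also be written directly with the Microsoft Graph PowerShell SDK. The user object ID and date below are placeholders, and writing `employeeLeaveDateTime` additionally requires the permissions described in the tutorial linked in the note above.

```powershell
# Minimal sketch using the Microsoft Graph PowerShell SDK (assumes module v2.x).
Connect-MgGraph -Scopes "User.ReadWrite.All"

# Placeholder object ID of a cloud-only user; replace with a real ID.
$userId = "11111111-2222-3333-4444-555555555555"

# employeeHireDate is a date-time value in Microsoft Graph; pass a UTC timestamp.
Update-MgUser -UserId $userId -EmployeeHireDate ([datetime]"2024-03-01T08:00:00Z")
```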
-This document explains how to set up synchronization from on-premises Microsoft Entra Connect cloud sync and Microsoft Entra Connect for the required attributes.
+This document explains how to set up synchronization of the required attributes from on-premises Active Directory by using Microsoft Entra Connect cloud sync or Microsoft Entra Connect.
>[!NOTE]
-> There's no corresponding EmployeeHireDate or EmployeeLeaveDateTime attribute in Active Directory. If you're importing from on-premises AD, you'll need to identify an attribute in AD that can be used. This attribute must be a string.
+> There's no corresponding EmployeeHireDate or EmployeeLeaveDateTime attribute in Active Directory. If you're synchronizing from on-premises AD, you'll need to identify an attribute in AD that can be used. This attribute must be a string.
## Understanding EmployeeHireDate and EmployeeLeaveDateTime formatting
To update this mapping, you'd do the following:
1. Add your source attribute(s) created as Type String, and select the checkbox for **Required**. :::image type="content" source="media/how-to-lifecycle-workflow-sync-attributes/edit-attribute-list.png" alt-text="Screenshot of source api list."::: > [!NOTE]
- > The number, and name, of source attributes added will depend on which attributes you are syncing.
+ > The number and names of the source attributes added depend on which attributes you're syncing from Active Directory.
1. Select Save. 1. From there you must map the HRM attributes to the added Active Directory attributes. To do this, Add New Mapping using an Expression.
active-directory Admin Consent Workflow Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/admin-consent-workflow-overview.md
As an admin, the following options exist for you to determine how users consent
- Disable user consent. For example, a high school may want to turn off user consent so that the school IT administration has full control over all the applications that are used in their tenant. - Allow users to consent to the required permissions. It's NOT recommended to keep user consent open if you have sensitive data in your tenant. - If you still want to retain admin-only consent for certain permissions but want to assist your end-users in onboarding their application, you can use the admin consent workflow to evaluate and respond to admin consent requests. This way, you can have a queue of all the requests for admin consent for your tenant and can track and respond to them directly through the Microsoft Entra admin center.
-To learn how to configure the admin consent workflow, see [configure-admin-consent-workflow.md](configure-admin-consent-workflow.md).
+To learn how to configure the admin consent workflow, see [Configure the admin consent workflow](configure-admin-consent-workflow.md).
## How the admin consent workflow works
active-directory Tutorial Vm Managed Identities Cosmos https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/tutorial-vm-managed-identities-cosmos.md
Create a Linux virtual machine with a user assigned managed identity specified.
```powershell New-AzVm ` -Name "<Linux VM name>" `
- -image CentOS
+ -image CentOS85Gen2
-ResourceGroupName "<Your resource group>" ` -Location "East US" ` -VirtualNetworkName "myVnet" `
active-directory Concept Pim For Groups https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/concept-pim-for-groups.md
Microsoft Entra role-assignable group feature is not part of Microsoft Entra Pri
## Relationship between role-assignable groups and PIM for Groups
-Groups can be role-assignable or non-role-assignable. The group can be enabled in PIM for Groups or not enabled in PIM for Groups. These are independent properties of the group. Any Microsoft Entra security group and any Microsoft 365 group (except dynamic groups and groups synchronized from on-premises environment) can be enabled in PIM for Groups. The group doesn't have to be role-assignable group to be enabled in PIM for Groups.
+Groups in Microsoft Entra ID can be classified as either role-assignable or non-role-assignable. In addition, any group can be enabled or not enabled for use with Microsoft Entra Privileged Identity Management (PIM) for Groups. These are independent properties of the group. Any Microsoft Entra security group and any Microsoft 365 group (except dynamic groups and groups synchronized from an on-premises environment) can be enabled in PIM for Groups. The group doesn't have to be a role-assignable group to be enabled in PIM for Groups.
If you want to assign a Microsoft Entra role to a group, it has to be role-assignable. Even if you don't intend to assign a Microsoft Entra role to the group but the group provides access to sensitive resources, it is still recommended to consider creating the group as role-assignable. This is because of the extra protections role-assignable groups have. See ["What are Microsoft Entra role-assignable groups?"](#what-are-entra-id-role-assignable-groups) in the section above.
active-directory Concept Diagnostic Settings Logs Options https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/concept-diagnostic-settings-logs-options.md
- Title: Logs available for streaming to endpoints from Microsoft Entra ID
+ Title: Logs available for streaming from Microsoft Entra ID
description: Learn about the Microsoft Entra logs available for streaming to an endpoint for storage, analysis, or monitoring.
Previously updated : 08/09/2023 Last updated : 09/28/2023 -+
+# Customer Intent: As an IT admin, I want to know what logs are available for streaming to an endpoint from Microsoft Entra ID so that I can choose the best option for my organization.
-# Learn about the identity logs you can stream to an endpoint
+# What are the identity logs you can stream to an endpoint?
-Using Diagnostic settings in Microsoft Entra ID, you can route activity logs to several endpoints for long term retention and data insights. You select the logs you want to route, then select the endpoint.
+Using Microsoft Entra diagnostic settings, you can route activity logs to several endpoints for long-term retention and data insights. You select the logs you want to route, then select the endpoint.
-This article describes the logs that you can route to an endpoint from Microsoft Entra Diagnostic settings.
+This article describes the logs that you can route to an endpoint with Microsoft Entra diagnostic settings.
-## Prerequisites
+## Log streaming requirements and options
-Setting up an endpoint, such as an event hub or storage account, may require different roles and licenses. To create or edit a new Diagnostic setting, you need a user who's a **Security Administrator** or **Global Administrator** for the Microsoft Entra tenant.
+Setting up an endpoint, such as an event hub or storage account, may require different roles and licenses. To create or edit a new diagnostic setting, you need a user who's a **Security Administrator** or **Global Administrator** for the Microsoft Entra tenant.
-To help decide which log routing option is best for you, see [How to access activity logs](howto-access-activity-logs.md). The overall process and requirements for each endpoint type are covered in the following articles.
+To help decide which log routing option is best for you, see [How to access activity logs](howto-access-activity-logs.md). The overall process and requirements for each endpoint type are covered in the following articles:
- [Send logs to a Log Analytics workspace to integrate with Azure Monitor logs](howto-integrate-activity-logs-with-azure-monitor-logs.md) - [Archive logs to a storage account](howto-archive-logs-to-storage-account.md)
To help decide which log routing option is best for you, see [How to access acti
## Activity log options
-The following logs can be sent to an endpoint. Some logs may be in public preview but still visible in the portal.
+The following logs can be routed to an endpoint for storage, analysis, or monitoring.
### Audit logs
The `EnrichedOffice365AuditLogs` logs are associated with the enriched logs you
### Microsoft Graph activity logs
-The `MicrosoftGraphActivityLogs` logs are associated with a feature that is still in private preview. The logs are visible in Microsoft Entra ID, but selecting these options won't add new logs to your workspace unless your organization was included in the private preview.
+The `MicrosoftGraphActivityLogs` provide administrators with full visibility into all HTTP requests that access your tenant's resources through the Microsoft Graph API. You can use these logs to identify activities that a compromised user account conducted in your tenant, or to investigate problematic or unexpected behaviors in client applications, such as extreme call volumes. Route these logs to the same Log Analytics workspace as `SignInLogs` to cross-reference the details of token requests with the corresponding sign-in events.
+
+The feature is currently in public preview. For more information, see [Access Microsoft Graph activity logs (preview)](/graph/microsoft-graph-activity-logs-overview).
### Network access traffic logs
active-directory Concept Sign Ins https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/concept-sign-ins.md
Previously updated : 08/31/2023 Last updated : 09/26/2023 -+ # What are Microsoft Entra sign-in logs?
In addition to the default fields, the interactive sign-in log also shows:
**Non-interactive sign-ins on the interactive sign-in logs**
-Previously, some non-interactive sign-ins from Microsoft Exchange clients were included in the interactive user sign-in log for better visibility. This increased visibility was necessary before the non-interactive user sign-in logs were introduced in November 2020. However, it's important to note that some non-interactive sign-ins, such as those using FIDO2 keys, may still be marked as interactive due to the way the system was set up before the separate non-interactive logs were introduced. These sign-ins may display interactive details like client credential type and browser information, even though they are technically non-interactive sign-ins.
+Previously, some non-interactive sign-ins from Microsoft Exchange clients were included in the interactive user sign-in log for better visibility. This increased visibility was necessary before the non-interactive user sign-in logs were introduced in November 2020. However, it's important to note that some non-interactive sign-ins, such as those using FIDO2 keys, may still be marked as interactive due to the way the system was set up before the separate non-interactive logs were introduced. These sign-ins may display interactive details like client credential type and browser information, even though they're technically non-interactive sign-ins.
**Passthrough sign-ins**
-Microsoft Entra ID issues tokens for authentication and authorization. In some situations, a user who is signed in to the Contoso tenant may try to access resources in the Fabrikam tenant, where they don't have access. A no-authorization token, called a passthrough token, is issued to the Fabrikam tenant. The passthrough token doesn't allow the user to access any resources.
+Microsoft Entra ID issues tokens for authentication and authorization. In some situations, a user who is signed in to the Contoso tenant may try to access resources in the Fabrikam tenant, where they don't have access. A no-authorization token, called a passthrough token, is issued to the Fabrikam tenant. The passthrough token doesn't allow the user to access any resources.
When reviewing the logs for this situation, the sign-in logs for the home tenant (in this scenario, Contoso) don't show a sign-in attempt because the token wasn't evaluated against the home tenant's policies. The sign-in token was only used to display the appropriate failure message. You won't see a sign-in attempt in the logs for the home tenant.
+**First-party, app-only service principal sign-ins**
+
+The service principal sign-in logs don't include first-party, app-only sign-in activity. This type of activity happens when first-party apps get tokens for an internal Microsoft job where there's no direction or context from a user. We exclude these logs so you're not paying for logs related to internal Microsoft tokens within your tenant.
+
+You may identify Microsoft Graph events that don't correlate to a service principal sign-in if you're routing `MicrosoftGraphActivityLogs` with `SignInLogs` to the same Log Analytics workspace. This integration allows you to cross-reference the token recorded in the Microsoft Graph activity with the sign-in. The `UniqueTokenIdentifier` in the Microsoft Graph activity logs would be missing from the service principal sign-in logs.
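
A sketch of such a cross-reference from PowerShell, assuming both `MicrosoftGraphActivityLogs` and `SigninLogs` are routed to the same Log Analytics workspace. The workspace ID is a placeholder, and the correlation columns used here (`SignInActivityId` on the Graph activity side and `UniqueTokenIdentifier` on the sign-in side) are assumptions to verify against the schema in your workspace.

```powershell
# Requires the Az.OperationalInsights module and an authenticated session (Connect-AzAccount).
$workspaceId = "00000000-0000-0000-0000-000000000000"   # Placeholder Log Analytics workspace ID

# Sketch of a KQL query listing Graph API requests that have no matching sign-in event.
$query = @"
MicrosoftGraphActivityLogs
| join kind=leftanti (SigninLogs | project UniqueTokenIdentifier)
    on `$left.SignInActivityId == `$right.UniqueTokenIdentifier
| project TimeGenerated, RequestUri, ResponseStatusCode, AppId
| take 50
"@

$result = Invoke-AzOperationalInsightsQuery -WorkspaceId $workspaceId -Query $query
$result.Results | Format-Table
```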
+ ### Non-interactive user sign-ins
-Non-interactive sign-ins are done *on behalf of a* user. These sign-ins were performed by a client app or OS components on behalf of a user and don't require the user to provide an authentication factor. Instead, Microsoft Entra ID recognizes when the user's token needs to be refreshed and does so behind the scenes, without interrupting the user's session. In general, the user perceives these sign-ins as happening in the background.
+Non-interactive sign-ins are done *on behalf of* a user. These delegated sign-ins were performed by a client app or OS components on behalf of a user and don't require the user to provide an authentication factor. Instead, Microsoft Entra ID recognizes when the user's token needs to be refreshed and does so behind the scenes, without interrupting the user's session. In general, the user perceives these sign-ins as happening in the background.
![Screenshot of the non-interactive user sign-ins log.](media/concept-sign-ins/sign-in-logs-user-noninteractive.png)
To make it easier to digest the data, non-interactive sign-in events are grouped
:::image type="content" source="media/concept-sign-ins/aggregate-sign-in.png" alt-text="Screenshot of an aggregate sign-in expanded to show all rows." lightbox="media/concept-sign-ins/aggregate-sign-in-expanded.png":::
-When Microsoft Entra ID logs multiple sign-ins that are identical other than time and date, those sign-ins are from the same entity and are aggregated into a single row. A row with multiple identical sign-ins (except for date and time issued) has a value greater than 1 in the *# sign-ins* column. These aggregated sign-ins may also appear to have the same time stamps. The **Time aggregate** filter can set to 1 hour, 6 hours, or 24 hours. You can expand the row to see all the different sign-ins and their different time stamps.
+When Microsoft Entra ID logs multiple sign-ins that are identical other than time and date, those sign-ins are from the same entity and are aggregated into a single row. A row with multiple identical sign-ins (except for date and time issued) has a value greater than one in the *# sign-ins* column. These aggregated sign-ins may also appear to have the same time stamps. The **Time aggregate** filter can be set to 1 hour, 6 hours, or 24 hours. You can expand the row to see all the different sign-ins and their different time stamps.
Sign-ins are aggregated in the non-interactive users when the following data matches:
active-directory Howto Manage Inactive User Accounts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/howto-manage-inactive-user-accounts.md
Previously updated : 08/24/2023 Last updated : 09/28/2023 -+ # How To: Manage inactive user accounts
The following details relate to the `lastSignInDateTime` property.
- The last attempted sign-in of a user took place before April 2020. - The affected user account was never used for a sign-in attempt. -- The last sign-in date is associated with the user object. The value is retained until the next sign-in of the user.
+- The last sign-in date is associated with the user object. The value is retained until the next sign-in of the user. It may take up to 24 hours to update.
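
A sketch of retrieving this property for many users with the Microsoft Graph PowerShell SDK; the cutoff date is a placeholder, and the filter syntax on `signInActivity/lastSignInDateTime` is an assumption to verify against the Microsoft Graph users API reference.

```powershell
# Reading signInActivity requires the AuditLog.Read.All permission.
Connect-MgGraph -Scopes "User.Read.All","AuditLog.Read.All"

# Placeholder cutoff: users with no sign-in since this date are treated as inactive.
$cutoff = "2023-01-01T00:00:00Z"

$inactiveUsers = Get-MgUser -All -Property displayName, userPrincipalName, signInActivity `
    -Filter "signInActivity/lastSignInDateTime le $cutoff"

$inactiveUsers | Select-Object DisplayName, UserPrincipalName,
    @{ Name = 'LastSignIn'; Expression = { $_.SignInActivity.LastSignInDateTime } }
```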
## How to investigate a single user
If you need to view the latest sign-in activity for a user, you can view the use
![Screenshot of the user overview page with the sign-in activity tile highlighted.](media/howto-manage-inactive-user-accounts/last-sign-activity-tile.png)
-The last sign-in date and time shown on this tile may take up to 6 hours to update, which means the date and time may not be current. If you need to see the activity in near real time, select the **See all sign-ins** link on the **Sign-ins** tile to view all sign-in activity for that user.
+The last sign-in date and time shown on this tile may take up to 24 hours to update, which means the date and time may not be current. If you need to see the activity in near real time, select the **See all sign-ins** link on the **Sign-ins** tile to view all sign-in activity for that user.
## Next steps
active-directory Permissions Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/permissions-reference.md
This article lists the Microsoft Entra built-in roles you can assign to allow ma
> | [Teams Communications Support Specialist](#teams-communications-support-specialist) | Can troubleshoot communications issues within Teams using basic tools. | fcf91098-03e3-41a9-b5ba-6f0ec8188a12 | > | [Teams Devices Administrator](#teams-devices-administrator) | Can perform management related tasks on Teams certified devices. | 3d762c5a-1b6c-493f-843e-55a3b42923d4 | > | [Tenant Creator](#tenant-creator) | Create new Microsoft Entra or Azure AD B2C tenants. | 112ca1a2-15ad-4102-995e-45b0bc479a6a |
-> | [Usage Summary Reports Reader](#usage-summary-reports-reader) | Can see only tenant level aggregates in Microsoft 365 Usage Analytics and Productivity Score. | 75934031-6c7e-415a-99d7-48dbd49e875e |
+> | [Usage Summary Reports Reader](#usage-summary-reports-reader) | Read Usage reports and Adoption Score, but can't access user details. | 75934031-6c7e-415a-99d7-48dbd49e875e |
> | [User Administrator](#user-administrator) | Can manage all aspects of users and groups, including resetting passwords for limited admins.<br/>[![Privileged label icon.](./medi) | fe930be7-5e62-47db-91af-98c3a49a38b1 | > | [Virtual Visits Administrator](#virtual-visits-administrator) | Manage and share Virtual Visits information and metrics from admin centers or the Virtual Visits app. | e300d9e7-4a2b-4295-9eff-f1c78b36cc98 | > | [Viva Goals Administrator](#viva-goals-administrator) | Manage and configure all aspects of Microsoft Viva Goals. | 92b086b3-e367-4ef2-b869-1de128fb986e |
Assign the Tenant Creator role to users who need to do the following tasks:
## Usage Summary Reports Reader
-Users with this role can access tenant level aggregated data and associated insights in Microsoft 365 admin center for Usage and Productivity Score but cannot access any user level details or insights. In Microsoft 365 admin center for the two reports, we differentiate between tenant level aggregated data and user level details. This role gives an extra layer of protection on individual user identifiable data, which was requested by both customers and legal teams.
+Assign the Usage Summary Reports Reader role to users who need to do the following tasks in the Microsoft 365 admin center:
+
+- View the Usage reports and Adoption Score
+- Read organizational insights, but not personally identifiable information (PII) of users
+
+This role only allows users to view organizational-level data with the following exceptions:
+
+- Member users can view user management data and settings.
+- Guest users assigned this role can't view user management data and settings.
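
A sketch of assigning this role with the Microsoft Graph PowerShell SDK; the principal ID below is a placeholder, and the role definition ID is the template ID listed for this role in the table above.

```powershell
Connect-MgGraph -Scopes "RoleManagement.ReadWrite.Directory"

$params = @{
    RoleDefinitionId = "75934031-6c7e-415a-99d7-48dbd49e875e"   # Usage Summary Reports Reader
    PrincipalId      = "11111111-2222-3333-4444-555555555555"   # Placeholder user object ID
    DirectoryScopeId = "/"                                      # Tenant-wide scope
}
New-MgRoleManagementDirectoryRoleAssignment @params
```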
> [!div class="mx-tableFixed"] > | Actions | Description |
advisor Advisor Reference Reliability Recommendations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/advisor/advisor-reference-reliability-recommendations.md
Azure Advisor helps you ensure and improve the continuity of your business-criti
1. On the **Advisor** dashboard, select the **Reliability** tab.
-## FarmBeats
+## FarmBeats / Azure Data Manager for Agriculture (ADMA)
### Upgrade to the latest FarmBeats API version
We have identified calls to a FarmBeats API version that is scheduled for deprec
Learn more about [Azure FarmBeats - FarmBeatsApiVersion (Upgrade to the latest FarmBeats API version)](https://aka.ms/FarmBeatsPaaSAzureAdvisorFAQ).
-## API Management
+### Upgrade to the latest ADMA Java SDK version
-### Hostname certificate rotation failed
+We have identified calls to an Azure Data Manager for Agriculture (ADMA) Java SDK version that is scheduled for deprecation. We recommend switching to the latest SDK version to ensure uninterrupted access to ADMA, latest features, and performance improvements.
-API Management service failed to refresh hostname certificate from Key Vault. Ensure that certificate exists in Key Vault and API Management service identity is granted secret read access. Otherwise, API Management service will not be able to retrieve certificate updates from Key Vault, which may lead to the service using stale certificate and runtime API traffic being blocked as a result.
+Learn more about [Azure FarmBeats - FarmBeatsJavaSdkVersion (Upgrade to the latest ADMA Java SDK version)](https://aka.ms/FarmBeatsPaaSAzureAdvisorFAQ).
+
+### Upgrade to the latest ADMA DotNet SDK version
+
+We have identified calls to an ADMA DotNet SDK version that is scheduled for deprecation. We recommend switching to the latest SDK version to ensure uninterrupted access to ADMA, latest features, and performance improvements.
+
+Learn more about [Azure FarmBeats - FarmBeatsDotNetSdkVersion (Upgrade to the latest ADMA DotNet SDK version)](https://aka.ms/FarmBeatsPaaSAzureAdvisorFAQ).
+
+### Upgrade to the latest ADMA JavaScript SDK version
+
+We have identified calls to an ADMA JavaScript SDK version that is scheduled for deprecation. We recommend switching to the latest SDK version to ensure uninterrupted access to ADMA, latest features, and performance improvements.
+
+Learn more about [Azure FarmBeats - FarmBeatsJavaScriptSdkVersion (Upgrade to the latest ADMA JavaScript SDK version)](https://aka.ms/FarmBeatsPaaSAzureAdvisorFAQ).
+
+### Upgrade to the latest ADMA Python SDK version
+
+We have identified calls to an ADMA Python SDK version that is scheduled for deprecation. We recommend switching to the latest SDK version to ensure uninterrupted access to ADMA, latest features, and performance improvements.
+
+Learn more about [Azure FarmBeats - FarmBeatsPythonSdkVersion (Upgrade to the latest ADMA Python SDK version)](https://aka.ms/FarmBeatsPaaSAzureAdvisorFAQ).
+
+## API Management
-Learn more about [Api Management - HostnameCertRotationFail (Hostname certificate rotation failed)](https://aka.ms/apimdocs/customdomain).
### SSL/TLS renegotiation blocked
-SSL/TLS renegotiation attempt blocked. Renegotiation happens when a client certificate is requested over an already established connection. When it is blocked, reading 'context.Request.Certificate' in policy expressions will return 'null'. To support client certificate authentication scenarios, enable 'Negotiate client certificate' on listed hostnames. For browser-based clients, enabling this option might result in a certificate prompt being presented to the client.
+SSL/TLS renegotiation attempt blocked. Renegotiation happens when a client certificate is requested over an already established connection. When it is blocked, reading 'context.Request.Certificate' in policy expressions returns 'null'. To support client certificate authentication scenarios, enable 'Negotiate client certificate' on listed hostnames. For browser-based clients, enabling this option might result in a certificate prompt being presented to the client.
+
+Learn more about [Api Management - TlsRenegotiationBlocked (SSL/TLS renegotiation blocked)](/azure/api-management/api-management-howto-mutual-certificates-for-clients).
-Learn more about [Api Management - TlsRenegotiationBlocked (SSL/TLS renegotiation blocked)](../api-management/api-management-howto-mutual-certificates-for-clients.md).
+### Hostname certificate rotation failed
+
+API Management service failed to refresh hostname certificate from Key Vault. Ensure that the certificate exists in Key Vault and that the API Management service identity is granted secret read access. Otherwise, the API Management service cannot retrieve certificate updates from Key Vault, which may lead to the service using a stale certificate and runtime API traffic being blocked as a result.
+
+Learn more about [Api Management - HostnameCertRotationFail (Hostname certificate rotation failed)](https://aka.ms/apimdocs/customdomain).
## App
Learn more about [Virtual machine (classic) - EnableBackup (Enable Backups on yo
### Upgrade the standard disks attached to your premium-capable VM to premium disks
-We have identified that you are using standard disks with your premium-capable Virtual Machines and we recommend you consider upgrading the standard disks to premium disks. For any Single Instance Virtual Machine using premium storage for all Operating System Disks and Data Disks, we guarantee you will have Virtual Machine Connectivity of at least 99.9%. Consider these factors when making your upgrade decision. The first is that upgrading requires a VM reboot and this process takes 3-5 minutes to complete. The second is if the VMs in the list are mission-critical production VMs, evaluate the improved availability against the cost of premium disks.
+We have identified that you are using standard disks with your premium-capable Virtual Machines and we recommend you consider upgrading the standard disks to premium disks. For any Single Instance Virtual Machine using premium storage for all Operating System Disks and Data Disks, we guarantee Virtual Machine Connectivity of at least 99.9%. Consider these factors when making your upgrade decision. The first is that upgrading requires a VM reboot and this process takes 3-5 minutes to complete. The second is if the VMs in the list are mission-critical production VMs, evaluate the improved availability against the cost of premium disks.
Learn more about [Virtual machine - MigrateStandardStorageAccountToPremium (Upgrade the standard disks attached to your premium-capable VM to premium disks)](https://aka.ms/aa_storagestandardtopremium_learnmore).
Learn more about [Virtual machine - CheckPointPlatformServicingKnownIssueA (Chec
### Access to mandatory URLs missing for your Azure Virtual Desktop environment
-In order for a session host to deploy and register to Azure Virtual Desktop properly, you need to add a set of URLs to allowed list in case your virtual machine runs in restricted environment. After visiting the "Learn More" link, you will be able to see the minimum list of URLs you need to unblock to have a successful deployment and functional session host. For specific URL(s) missing from allowed list, you may also search Application event log for event 3702.
+In order for a session host to deploy and register to Azure Virtual Desktop properly, you need to add a set of URLs to the allowed list in case your virtual machine runs in a restricted environment. After visiting the "Learn More" link, you see the minimum list of URLs you need to unblock to have a successful deployment and functional session host. For specific URL(s) missing from the allowed list, you may also search the Application event log for event 3702.
Learn more about [Virtual machine - SessionHostNeedsAssistanceForUrlCheck (Access to mandatory URLs missing for your Azure Virtual Desktop environment)](../virtual-desktop/safe-url-list.md).
Learn more about [Azure Cosmos DB account - CosmosDBMongoSelfServeUpgrade (Upgra
Based on their names and configuration, we have detected the Azure Cosmos DB accounts below as being potentially used for production workloads. These accounts currently run in a single Azure region. You can increase their availability by configuring them to span at least two Azure regions. > [!NOTE]
-> Additional regions will incur extra costs.
+> Additional regions incur extra costs.
Learn more about [Azure Cosmos DB account - CosmosDBSingleRegionProdAccounts (Add a second region to your production workloads on Azure Cosmos DB)](../cosmos-db/high-availability.md).
Learn more about [Azure Cosmos DB account - CosmosDBMaxGlobalLSNReachedV4 (Upgra
### Upgrade your Azure Fluid Relay client library
-You have recently invoked the Azure Fluid Relay service with an old client library. Your Azure Fluid Relay client library should now be upgraded to the latest version to ensure your application remains operational. Upgrading will provide the most up-to-date functionality, as well as enhancements in performance and stability. For more information on the latest version to use and how to upgrade, please refer to the article.
+You have recently invoked the Azure Fluid Relay service with an old client library. Your Azure Fluid Relay client library should now be upgraded to the latest version to ensure your application remains operational. Upgrading provides the most up-to-date functionality, as well as enhancements in performance and stability. For more information on the latest version to use and how to upgrade, please refer to the article.
Learn more about [FluidRelay Server - UpgradeClientLibrary (Upgrade your Azure Fluid Relay client library)](https://github.com/microsoft/FluidFramework).
Learn more about [FluidRelay Server - UpgradeClientLibrary (Upgrade your Azure F
### Deprecation of Kafka 1.1 in HDInsight 4.0 Kafka cluster
-Starting July 1, 2020, customers will not be able to create new Kafka clusters with Kafka 1.1 on HDInsight 4.0. Existing clusters will run as is without support from Microsoft. Consider moving to Kafka 2.1 on HDInsight 4.0 by June 30 2020 to avoid potential system/support interruption.
+Starting July 1, 2020, you can't create new Kafka clusters with Kafka 1.1 on HDInsight 4.0. Existing clusters run as is without support from Microsoft. Consider moving to Kafka 2.1 on HDInsight 4.0 by June 30, 2020 to avoid potential system/support interruption.
Learn more about [HDInsight cluster - KafkaVersionRetirement (Deprecation of Kafka 1.1 in HDInsight 4.0 Kafka cluster)](https://aka.ms/hdiretirekafka). ### Deprecation of Older Spark Versions in HDInsight Spark cluster
-Starting July 1, 2020, customers will not be able to create new Spark clusters with Spark 2.1 and 2.2 on HDInsight 3.6, and Spark 2.3 on HDInsight 4.0. Existing clusters will run as is without support from Microsoft.
+Starting July 1, 2020, you can't create new Spark clusters with Spark 2.1 and 2.2 on HDInsight 3.6, and Spark 2.3 on HDInsight 4.0. Existing clusters run as is without support from Microsoft.
Learn more about [HDInsight cluster - SparkVersionRetirement (Deprecation of Older Spark Versions in HDInsight Spark cluster)](https://aka.ms/hdiretirespark). ### Enable critical updates to be applied to your HDInsight clusters
-HDInsight service is applying an important certificate related update to your cluster. However, one or more policies in your subscription are preventing HDInsight service from creating or modifying network resources (Load balancer, Network Interface and Public IP address) associated with your clusters and applying this update. Take actions to allow HDInsight service to create or modify network resources (Load balancer, Network interface and Public IP address) associated with your clusters before Jan 13, 2021 05:00 PM UTC. The HDInsight team will be performing updates between Jan 13, 2021 05:00 PM UTC and Jan 16, 2021 05:00 PM UTC. Failure to apply this update may result in your clusters becoming unhealthy and unusable.
+HDInsight service is applying an important certificate related update to your cluster. However, one or more policies in your subscription are preventing HDInsight service from creating or modifying network resources (Load balancer, Network Interface and Public IP address) associated with your clusters and applying this update. Take actions to allow HDInsight service to create or modify network resources (Load balancer, Network interface and Public IP address) associated with your clusters before Jan 13, 2021 05:00 PM UTC. The HDInsight team is performing updates between Jan 13, 2021 05:00 PM UTC and Jan 16, 2021 05:00 PM UTC. Failure to apply this update may result in your clusters becoming unhealthy and unusable.
Learn more about [HDInsight cluster - GCSCertRotation (Enable critical updates to be applied to your HDInsight clusters)](../hdinsight/hdinsight-hadoop-provision-linux-clusters.md).
Learn more about [HDInsight cluster - GCSCertRotationR3DropRecreate (Drop and re
### Apply critical updates to your HDInsight clusters
-The HDInsight service has attempted to apply a critical certificate update on all your running clusters. However, one or more policies in your subscription are preventing HDInsight service from creating or modifying network resources (Load balancer, Network Interface and Public IP address) associated with your clusters and applying this update. Remove or update your policy assignment to allow HDInsight service to create or modify network resources (Load balancer, Network interface and Public IP address) associated with your clusters before Jan 21, 2021 05:00 PM UTC. The HDInsight team will be performing updates between Jan 21, 2021 05:00 PM UTC and Jan 23, 2021 05:00 PM UTC. To verify the policy update, you can try to create network resources (Load balancer, Network interface and Public IP address) in the same resource group and Subnet where your cluster is in. Failure to apply this update may result in your clusters becoming unhealthy and unusable. You can also drop and recreate your cluster before Jan 25th, 2021 to prevent the cluster from becoming unhealthy and unusable. The HDInsight service will send another notification if we failed to apply the update to your clusters.
+The HDInsight service has attempted to apply a critical certificate update on all your running clusters. However, one or more policies in your subscription are preventing HDInsight service from creating or modifying network resources (Load balancer, Network Interface and Public IP address) associated with your clusters and applying this update. Remove or update your policy assignment to allow HDInsight service to create or modify network resources (Load balancer, Network interface and Public IP address) associated with your clusters before Jan 21, 2021 05:00 PM UTC. The HDInsight team is performing updates between Jan 21, 2021 05:00 PM UTC and Jan 23, 2021 05:00 PM UTC. To verify the policy update, you can try to create network resources (Load balancer, Network interface and Public IP address) in the same resource group and Subnet where your cluster is in. Failure to apply this update may result in your clusters becoming unhealthy and unusable. You can also drop and recreate your cluster before Jan 25th, 2021 to prevent the cluster from becoming unhealthy and unusable. The HDInsight service sends another notification if we failed to apply the update to your clusters.
Learn more about [HDInsight cluster - GCSCertRotationR3PlanPatch (Apply critical updates to your HDInsight clusters)](../hdinsight/hdinsight-hadoop-provision-linux-clusters.md). ### Action required: Migrate your A8-A11 HDInsight cluster before 1 March 2021
-You're receiving this notice because you have one or more active A8, A9, A10 or A11 HDInsight cluster. The A8-A11 virtual machines (VMs) will be retired in all regions on 1 March 2021. After that date, all clusters using A8-A11 will be deallocated. Migrate your affected clusters to another HDInsight supported VM (https://azure.microsoft.com/pricing/details/hdinsight/) before that date. For more details, see 'Learn More' link or contact us at askhdinsight@microsoft.com
+You're receiving this notice because you have one or more active A8, A9, A10 or A11 HDInsight clusters. The A8-A11 virtual machines (VMs) are retired in all regions on 1 March 2021. After that date, all clusters using A8-A11 are deallocated. Migrate your affected clusters to another HDInsight supported VM (https://azure.microsoft.com/pricing/details/hdinsight/) before that date. For more details, see the 'Learn More' link or contact us at askhdinsight@microsoft.com
Learn more about [HDInsight cluster - VM Deprecation (Action required: Migrate your A8-A11 HDInsight cluster before 1 March 2021)](https://azure.microsoft.com/updates/a8-a11-azure-virtual-machine-sizes-will-be-retired-on-march-1-2021/).
Learn more about [Media Service - AccountQuotaLimit (Increase Media Services quo
### Upgrade your SKU or add more instances to ensure fault tolerance
-Deploying two or more medium or large sized instances will ensure business continuity during outages caused by planned or unplanned maintenance.
+Deploying two or more medium or large sized instances ensures business continuity during outages caused by planned or unplanned maintenance.
Learn more about [Application gateway - AppGateway (Upgrade your SKU or add more instances to ensure fault tolerance)](https://aka.ms/aa_gatewayrec_learnmore).
Learn more about [Traffic Manager profile - GeneralProfile (Add at least one mor
### Add an endpoint configured to "All (World)"
-For geographic routing, traffic is routed to endpoints based on defined regions. When a region fails, there is no pre-defined failover. Having an endpoint where the Regional Grouping is configured to "All (World)" for geographic profiles will avoid traffic black holing and guarantee service remains available.
+For geographic routing, traffic is routed to endpoints based on defined regions. When a region fails, there is no pre-defined failover. Having an endpoint where the Regional Grouping is configured to "All (World)" for geographic profiles avoids traffic black holing and guarantees that the service remains available.
Learn more about [Traffic Manager profile - GeographicProfile (Add an endpoint configured to \""All (World)\"")](https://aka.ms/Rf7vc5). ### Add or move one endpoint to another Azure region
-All endpoints associated to this proximity profile are in the same region. Users from other regions may experience long latency when attempting to connect. Adding or moving an endpoint to another region will improve overall performance for proximity routing and provide better availability in case all endpoints in one region fail.
+All endpoints associated with this proximity profile are in the same region. Users from other regions may experience long latency when attempting to connect. Adding or moving an endpoint to another region improves overall performance for proximity routing and provides better availability in case all endpoints in one region fail.
Learn more about [Traffic Manager profile - ProximityProfile (Add or move one endpoint to another Azure region)](https://aka.ms/Ldkkdb).
Learn more about [ExpressRoute circuit - ExpressRouteGatewayE2EMonitoring (Imple
### Avoid hostname override to ensure site integrity
-Try to avoid overriding the hostname when configuring Application Gateway. Having a different domain on the frontend of Application Gateway than the one which is used to access the backend can potentially lead to cookies or redirect urls being broken. Note that this might not be the case in all situations and that certain categories of backends (like REST API's) in general are less sensitive to this. Make sure the backend is able to deal with this or update the Application Gateway configuration so the hostname does not need to be overwritten towards the backend. When used with App Service, attach a custom domain name to the Web App and avoid use of the `*.azurewebsites.net` host name towards the backend.
+Try to avoid overriding the hostname when configuring Application Gateway. Having a different domain on the frontend of Application Gateway than the one which is used to access the backend can potentially lead to cookies or redirect urls being broken. Note that this might not be the case in all situations and that certain categories of backends (like REST APIs) in general are less sensitive to this. Make sure the backend is able to deal with this or update the Application Gateway configuration so the hostname does not need to be overwritten towards the backend. When used with App Service, attach a custom domain name to the Web App and avoid use of the `*.azurewebsites.net` host name towards the backend.
Learn more about [Application gateway - AppGatewayHostOverride (Avoid hostname override to ensure site integrity)](https://aka.ms/appgw-advisor-usecustomdomain).
You appear to have ExpressRoute circuits peered in at least two different locati
Learn more about [ExpressRoute circuit - UseGlobalReachForDR (Use ExpressRoute Global Reach to improve your design for disaster recovery)](../expressroute/about-upgrade-circuit-bandwidth.md).
-### Azure WAF RuleSet CRS 3.1/3.2 has been updated with log4j2 vulnerability rule
+### Azure WAF RuleSet CRS 3.1/3.2 has been updated with Log4j 2 vulnerability rule
-In response to log4j2 vulnerability (CVE-2021-44228), Azure Web Application Firewall (WAF) RuleSet CRS 3.1/3.2 has been updated on your Application Gateway to help provide additional protection from this vulnerability. The rules are available under Rule 944240 and no action is needed to enable this.
+In response to Log4j 2 vulnerability (CVE-2021-44228), Azure Web Application Firewall (WAF) RuleSet CRS 3.1/3.2 has been updated on your Application Gateway to help provide additional protection from this vulnerability. The rules are available under Rule 944240 and no action is needed to enable this.
Learn more about [Application gateway - AppGwLog4JCVEPatchNotification (Azure WAF RuleSet CRS 3.1/3.2 has been updated with log4j2 vulnerability rule)](https://aka.ms/log4jcve).
-### Additional protection to mitigate Log4j2 vulnerability (CVE-2021-44228)
+### Additional protection to mitigate Log4j 2 vulnerability (CVE-2021-44228)
-To mitigate the impact of Log4j2 vulnerability, we recommend these steps:
+To mitigate the impact of the Log4j 2 vulnerability, we recommend these steps:
-1) Upgrade Log4j2 to version 2.15.0 on your backend servers. If upgrade isn't possible, follow the system property guidance link below.
+1) Upgrade Log4j 2 to version 2.15.0 on your backend servers. If an upgrade isn't possible, follow the system property guidance link below.
2) Take advantage of WAF Core rule sets (CRS) by upgrading to WAF SKU Learn more about [Application gateway - AppGwLog4JCVEGenericNotification (Additional protection to mitigate Log4j2 vulnerability (CVE-2021-44228))](https://aka.ms/log4jcve).
Learn more about [Virtual network - natGateway (Use NAT gateway for outbound con
### Enable Active-Active gateways for redundancy
-In active-active configuration, both instances of the VPN gateway will establish S2S VPN tunnels to your on-premises VPN device. When a planned maintenance or unplanned event happens to one gateway instance, traffic will be switched over to the other active IPsec tunnel automatically.
+In an active-active configuration, both instances of the VPN gateway establish S2S VPN tunnels to your on-premises VPN device. When planned maintenance or an unplanned event happens to one gateway instance, traffic is switched over to the other active IPsec tunnel automatically.
Learn more about [Virtual network gateway - VNetGatewayActiveActive (Enable Active-Active gateways for redundancy)](https://aka.ms/aa_vpnha_learnmore).
Learn more about [Recovery Services vault - Enable CRR (Enable Cross Region Rest
### You are close to exceeding storage quota of 2GB. Create a Standard search service.
-You are close to exceeding storage quota of 2GB. Create a Standard search service. Indexing operations will stop working when storage quota is exceeded.
+You are close to exceeding storage quota of 2GB. Create a Standard search service. Indexing operations stop working when storage quota is exceeded.
Learn more about [Search service - BasicServiceStorageQuota90percent (You are close to exceeding storage quota of 2GB. Create a Standard search service.)](https://aka.ms/azs/search-limits-quotas-capacity). ### You are close to exceeding storage quota of 50MB. Create a Basic or Standard search service.
-You are close to exceeding storage quota of 50MB. Create a Basic or Standard search service. Indexing operations will stop working when storage quota is exceeded.
+You are close to exceeding storage quota of 50MB. Create a Basic or Standard search service. Indexing operations stop working when storage quota is exceeded.
Learn more about [Search service - FreeServiceStorageQuota90percent (You are close to exceeding storage quota of 50MB. Create a Basic or Standard search service.)](https://aka.ms/azs/search-limits-quotas-capacity). ### You are close to exceeding your available storage quota. Add additional partitions if you need more storage.
-You are close to exceeding your available storage quota. Add additional partitions if you need more storage. After exceeding storage quota, you can still query, but indexing operations will no longer work.
+You are close to exceeding your available storage quota. Add additional partitions if you need more storage. After exceeding storage quota, you can still query, but indexing operations no longer work.
Learn more about [Search service - StandardServiceStorageQuota90percent (You are close to exceeding your available storage quota. Add additional partitions if you need more storage.)](https://aka.ms/azs/search-limits-quotas-capacity).
ai-services Cognitive Services And Machine Learning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/cognitive-services-and-machine-learning.md
Last updated 10/28/2021
Azure AI services provides machine learning capabilities to solve general problems such as analyzing text for emotional sentiment or analyzing images to recognize objects or faces. You don't need special machine learning or data science knowledge to use these services.
-[Azure AI services](./what-are-ai-services.md) is a group of services, each supporting different, generalized prediction capabilities. The services are divided into different categories to help you find the right service.
-
-|Service category|Purpose|
-|--|--|
-|[Decision](https://azure.microsoft.com/services/cognitive-services/directory/decision/)|Build apps that surface recommendations for informed and efficient decision-making.|
-|[Language](https://azure.microsoft.com/services/cognitive-services/directory/lang/)|Allow your apps to process natural language with pre-built scripts, evaluate sentiment and learn how to recognize what users want.|
-|[Search](https://azure.microsoft.com/services/cognitive-services/directory/search/)|Add Bing Search APIs to your apps and harness the ability to comb billions of webpages, images, videos, and news with a single API call.|
-|[Speech](https://azure.microsoft.com/services/cognitive-services/directory/speech/)|Convert speech into text and text into natural-sounding speech. Translate from one language to another and enable speaker verification and recognition.|
-|[Vision](https://azure.microsoft.com/services/cognitive-services/directory/vision/)|Recognize, identify, caption, index, and moderate your pictures, videos, and digital ink content.|
+[Azure AI services](./what-are-ai-services.md) is a group of services, each supporting different, generalized prediction capabilities.
Use Azure AI services when you:
The following data categorizes each service by which kind of data it allows or r
The services are used in any application that can make REST API(s) or SDK calls. Examples of applications include web sites, bots, virtual or mixed reality, desktop and mobile applications.
-## How is Azure Cognitive Search related to Azure AI services?
-
-[Azure Cognitive Search](../search/search-what-is-azure-search.md) is a separate cloud search service that optionally uses Azure AI services to add image and natural language processing to indexing workloads. Azure AI services is exposed in Azure Cognitive Search through [built-in skills](../search/cognitive-search-predefined-skills.md) that wrap individual APIs. You can use a free resource for walkthroughs, but plan on creating and attaching a [billable resource](../search/cognitive-search-attach-cognitive-services.md) for larger volumes.
- ## How can you use Azure AI services? Each service provides information about your data. You can combine services together to chain solutions such as converting speech (audio) to text, translating the text into many languages, then using the translated languages to get answers from a knowledge base. While Azure AI services can be used to create intelligent solutions on their own, they can also be combined with traditional machine learning projects to supplement models or accelerate the development process.
ai-services Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/concepts/models.md
Azure OpenAI Service is powered by a diverse set of models with different capabi
GPT-4 can solve difficult problems with greater accuracy than any of OpenAI's previous models. Like GPT-3.5 Turbo, GPT-4 is optimized for chat and works well for traditional completions tasks. Use the Chat Completions API to use GPT-4. To learn more about how to interact with GPT-4 and the Chat Completions API, check out our [in-depth how-to](../how-to/chatgpt.md).
-To request access to GPT-4, Azure OpenAI customers can [apply by filling out this form](https://aka.ms/oai/get-gpt4)
- - `gpt-4` - `gpt-4-32k`
You can also use the Whisper model via Azure AI Speech [batch transcription](../
### GPT-4 models
+GPT-4 and GPT-4-32k are now available to all Azure OpenAI Service customers. Availability varies by region. If you don't see GPT-4 in your region, please check back later.
+ These models can only be used with the Chat Completion API; a sample request follows the table below.

| Model ID | Base model Regions | Fine-Tuning Regions | Max Request (tokens) | Training Data (up to) |
| -- | -- | -- | -- | -- |
-| `gpt-4` <sup>1,</sup><sup>2</sup> (0314) | | N/A | 8,192 | September 2021 |
-| `gpt-4-32k` <sup>1,</sup><sup>2</sup> (0314) | | N/A | 32,768 | September 2021 |
-| `gpt-4` <sup>1</sup><sup>3</sup> (0613) | Australia East, Canada East, East US, East US 2, France Central, Japan East, Sweden Central, Switzerland North, UK South | N/A | 8,192 | September 2021 |
-| `gpt-4-32k` <sup>1</sup><sup>3</sup> (0613) | Australia East, Canada East, East US, East US 2, France Central, Japan East, Sweden Central, Switzerland North, UK South | N/A | 32,768 | September 2021 |
+| `gpt-4` <sup>2</sup> (0314) | East US<sup>1</sup>, France Central<sup>1</sup> | N/A | 8,192 | September 2021 |
+| `gpt-4-32k` <sup>2</sup> (0314) | East US<sup>1</sup>, France Central<sup>1</sup> | N/A | 32,768 | September 2021 |
+| `gpt-4` (0613) | Australia East<sup>1</sup>, Canada East, East US<sup>1</sup>, East US 2<sup>1</sup>, France Central<sup>1</sup>, Japan East<sup>1</sup>, Sweden Central, Switzerland North, UK South<sup>1</sup> | N/A | 8,192 | September 2021 |
+| `gpt-4-32k` (0613) | Australia East<sup>1</sup>, Canada East, East US<sup>1</sup>, East US 2<sup>1</sup>, France Central<sup>1</sup>, Japan East<sup>1</sup>, Sweden Central, Switzerland North, UK South<sup>1</sup> | N/A | 32,768 | September 2021 |
-<sup>1</sup> The model is [only available by request](https://aka.ms/oai/get-gpt4).<br>
+<sup>1</sup> Due to high demand, availability is limited in the region.<br>
<sup>2</sup> Version `0314` of gpt-4 and gpt-4-32k will be retired no earlier than July 5, 2024. See [model updates](#model-updates) for model upgrade behavior.<br>
-<sup>3</sup> We are rolling out availability of new regions to customers gradually to ensure a smooth experience. In East US and France Central, customers with existing deployments of GPT-4 can create additional deployments of GPT-4 version 0613. For customers new to GPT-4 on Azure OpenAI, please use one of the other available regions.
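
As an illustrative, hedged sketch of calling one of these models through the Chat Completions REST API: the resource name, deployment name, and key below are placeholders for values from your own Azure OpenAI resource, and the example assumes you already have a GPT-4 deployment.

```bash
# Placeholder values: replace YOUR_RESOURCE_NAME, YOUR_GPT4_DEPLOYMENT, and YOUR_API_KEY.
curl "https://YOUR_RESOURCE_NAME.openai.azure.com/openai/deployments/YOUR_GPT4_DEPLOYMENT/chat/completions?api-version=2023-05-15" \
  -H "Content-Type: application/json" \
  -H "api-key: YOUR_API_KEY" \
  -d '{"messages":[{"role":"user","content":"Summarize what the Chat Completions API does in one sentence."}]}'
```
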
### GPT-3.5 models
ai-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/overview.md
Prompt construction can be difficult. In practice, the prompt acts to configure
The service provides users access to several different models. Each model provides a different capability and price point.
-GPT-4 models are the latest available models. Due to high demand access to this model series is currently only available by request. To request access, existing Azure OpenAI customers can [apply by filling out this form](https://aka.ms/oai/get-gpt4)
- The DALL-E models, currently in preview, generate images from text prompts that the user provides. The Whisper models, currently in preview, can be used to transcribe and translate speech to text.
ai-services Embeddings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/tutorials/embeddings.md
In this tutorial, you learn how to:
## Prerequisites * An Azure subscription - [Create one for free](https://azure.microsoft.com/free/cognitive-services?azure-portal=true)
-* Access granted to Azure OpenAI in the desired Azure subscription
+* Access granted to Azure OpenAI in the desired Azure subscription.
Currently, access to this service is granted only by application. You can apply for access to Azure OpenAI by completing the form at <a href="https://aka.ms/oai/access" target="_blank">https://aka.ms/oai/access</a>. Open an issue on this repo to contact us if you have an issue. * <a href="https://www.python.org/" target="_blank">Python 3.7.1 or later version</a> * The following Python libraries: openai, num2words, matplotlib, plotly, scipy, scikit-learn, pandas, tiktoken.
ai-services Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/whats-new.md
keywords:
## September 2023
+### GPT-4
+GPT-4 and GPT-4-32k are now available to all Azure OpenAI Service customers. Customers no longer need to request access to use GPT-4 and GPT-4-32k. Availability may be limited by region. Check the [models page](concepts/models.md) for the latest information on model availability in each region.
+ ### GPT-3.5 Turbo Instruct Azure OpenAI Service now supports the GPT-3.5 Turbo Instruct model. This model has performance comparable to `text-davinci-003` and is available to use with the Completions API. Check the [models page](concepts/models.md) for the latest information on model availability in each region.
Azure OpenAI Service now supports speech to text APIs powered by OpenAI's Whispe
- General availability support for: - Chat Completion API version `2023-05-15`. - GPT-35-Turbo models.
- - GPT-4 model series. Due to high demand access to this model series is currently only available by request. To request access, existing Azure OpenAI customers can [apply by filling out this form](https://aka.ms/oai/get-gpt4)
+ - GPT-4 model series.
If you are currently using the `2023-03-15-preview` API, we recommend migrating to the GA `2023-05-15` API. If you are currently using API version `2022-12-01` this API remains GA, but does not include the latest Chat Completion capabilities.
ai-services Batch Transcription Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/batch-transcription-create.md
To use a Whisper model for batch transcription, you also need to set the `model`
Whisper models via batch transcription are supported in the East US, Southeast Asia, and West Europe regions. ::: zone pivot="rest-api"
-You can make a [Models_ListBaseModels](https://github.com/Azure/azure-rest-api-specs/tree/main/specification/cognitiveservices/data-plane/Speech/SpeechToText/preview/v3.2-preview.1) request to get available base models for all locales.
+You can make a [Models_ListBaseModels](https://westus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-2-preview1/operations/Models_ListBaseModels) request to get available base models for all locales.
Make an HTTP GET request as shown in the following example for the `eastus` region. Replace `YourSubscriptionKey` with your Speech resource key. Replace `eastus` if you're using a different region.
Make an HTTP GET request as shown in the following example for the `eastus` regi
curl -v -X GET "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.2-preview.1/models/base" -H "Ocp-Apim-Subscription-Key: YourSubscriptionKey" ```
+By default, only the 100 oldest base models are returned, so you can use the `skip` and `top` query parameters to page through the results. For example, the following request returns the next 100 base models after the first 100.
+
+```azurecli-interactive
+curl -v -X GET "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.2-preview.1/models/base?skip=100&top=100" -H "Ocp-Apim-Subscription-Key: YourSubscriptionKey"
+```
+ ::: zone-end ::: zone pivot="speech-cli"
ai-services Migrate V3 1 To V3 2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/migrate-v3-1-to-v3-2.md
Added token count and token error properties to the `EvaluationProperties` prope
- `tokenInsertionCount2`: The number of recognized tokens by model2 that are insertions. - `tokenSubstitutionCount2`: The number of recognized words by model2 that are substitutions.
-### Model copy
-
-Added the new `"/operations/models/copy/{id}"` operation. Used for copy models scenario.
-
-Added the new `"/models/{id}:copy"` operation. Schema in the new copy operation: `"$ref": "#/definitions/ModelCopyAuthorization"` Deprecated the `"/models/{id}:copyto"` operation. Schema in the deprecated copy operation: `"$ref": "#/definitions/ModelCopy"`
-
-Added the new `"/models:authorizecopy"` operation returns `"$ref": "#/definitions/ModelCopyAuthorization"`. This returned entity can be used in the new `"/models/{id}:copy"` operation.
-
-New entity definitions related to model copy authorization:
-- `ModelCopyAuthorization`-- `ModelCopyAuthorizationDefinition`: The Azure Resource ID of the source speech resource.-
-```json
-"ModelCopyAuthorization": {
- "title": "ModelCopyAuthorization",
- "required": [
- "expirationDateTime",
- "id",
- "sourceResourceId",
- "targetResourceEndpoint",
- "targetResourceId",
- "targetResourceRegion"
- ],
- "type": "object",
- "properties": {
- "targetResourceRegion": {
- "description": "The region (aka location) of the target speech resource (e.g., westus2).",
- "minLength": 1,
- "type": "string"
- },
- "targetResourceId": {
- "description": "The Azure Resource ID of the target speech resource.",
- "minLength": 1,
- "type": "string"
- },
- "targetResourceEndpoint": {
- "description": "The endpoint (base url) of the target resource (with custom domain name when it is used).",
- "minLength": 1,
- "type": "string"
- },
- "sourceResourceId": {
- "description": "The Azure Resource ID of the source speech resource.",
- "minLength": 1,
- "type": "string"
- },
- "expirationDateTime": {
- "format": "date-time",
- "description": "The expiration date of this copy authorization.",
- "type": "string"
- },
- "id": {
- "description": "The ID of this copy authorization.",
- "minLength": 1,
- "type": "string"
- }
- }
-},
-```
-
-```json
-"ModelCopyAuthorizationDefinition": {
- "title": "ModelCopyAuthorizationDefinition",
- "required": [
- "sourceResourceId"
- ],
- "type": "object",
- "properties": {
- "sourceResourceId": {
- "description": "The Azure Resource ID of the source speech resource.",
- "minLength": 1,
- "type": "string"
- }
- }
-},
-```
-
-### CustomModelLinks copy properties
-
-New `copy` property
-copyTo URI: The location to the obsolete model copy action. See operation \"Models_CopyTo\" for more details.
-copy URI: The location to the model copy action. See operation \"Models_Copy\" for more details.
-
-```json
-"CustomModelLinks": {
- "title": "CustomModelLinks",
- "type": "object",
- "properties": {
- "copyTo": {
- "format": "uri",
- "description": "The location to the obsolete model copy action. See operation \"Models_CopyTo\" for more details.",
- "type": "string",
- "readOnly": true
- },
- "copy": {
- "format": "uri",
- "description": "The location to the model copy action. See operation \"Models_Copy\" for more details.",
- "type": "string",
- "readOnly": true
- },
- "files": {
- "format": "uri",
- "description": "The location to get all files of this entity. See operation \"Models_ListFiles\" for more details.",
- "type": "string",
- "readOnly": true
- },
- "manifest": {
- "format": "uri",
- "description": "The location to get a manifest for this model to be used in the on-prem container. See operation \"Models_GetCustomModelManifest\" for more details.",
- "type": "string",
- "readOnly": true
- }
- },
- "readOnly": true
-},
-```
- ## Operation IDs You must update the base path in your code from `/speechtotext/v3.1` to `/speechtotext/v3.2-preview.1`. For example, to get base models in the `eastus` region, use `https://eastus.api.cognitive.microsoft.com/speechtotext/v3.2-preview.1/models/base` instead of `https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/models/base`.
ai-services Releasenotes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/releasenotes.md
Azure AI Speech is updated on an ongoing basis. To stay up-to-date with recent d
## Recent highlights
+* Azure AI Speech now supports OpenAI's Whisper model via the batch transcription API. To learn more, check out the [Create a batch transcription](./batch-transcription-create.md#using-whisper-models) guide.
* [Speech to text REST API version 3.2](./migrate-v3-1-to-v3-2.md) is available in public preview. * Speech SDK 1.32.1 was released in September 2023. * [Real-time diarization](./get-started-stt-diarization.md) is in public preview.
-* Some Speech Studio [scenarios](speech-studio-overview.md#speech-studio-scenarios) are available to try without an Azure subscription.
-* Text to speech [Batch synthesis API](./batch-synthesis.md) is available in public preview.
## Release notes
ai-services Document Translation Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/translator/document-translation/quickstarts/document-translation-sdk.md
Title: "Document Translation C#/.NET or Python client library"
-description: Use the Translator C#/.NET or Python client library (SDK) for cloud-based batch document translation service and process
+description: Use the Document Translator C#/.NET or Python client library (SDK) for cloud-based batch document translation service and process
-- Previously updated : 07/18/2023++ Last updated : 09/28/2023 zone_pivot_groups: programming-languages-document-sdk
aks Supported Kubernetes Versions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/supported-kubernetes-versions.md
For the past release history, see [Kubernetes history](https://github.com/kubern
| 1.24 | Apr 2022 | May 2022 | Jul 2022 | Jul 2023 | Until 1.28 GA | | 1.25 | Aug 2022 | Oct 2022 | Dec 2022 | Dec 2023 | Until 1.29 GA | | 1.26 | Dec 2022 | Feb 2023 | Apr 2023 | Mar 2024 | Until 1.30 GA |
-| 1.27* | Apr 2023 | Jun 2023 | Jul 2023 | Jul 2024, LTS until Jul 2026 | Until 1.31 GA |
+| 1.27* | Apr 2023 | Jun 2023 | Jul 2023 | Jul 2024, LTS until Jul 2025 | Until 1.31 GA |
| 1.28 | Aug 2023 | Sep 2023 | Oct 2023 || Until 1.32 GA| *\* Indicates the version is designated for Long Term Support*
aks Vertical Pod Autoscaler Api Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/vertical-pod-autoscaler-api-reference.md
+
+ Title: Vertical Pod Autoscaler API reference in Azure Kubernetes Service (AKS)
+description: Learn about the Vertical Pod Autoscaler API reference for Azure Kubernetes Service (AKS).
++ Last updated : 09/26/2023++
+# Vertical Pod Autoscaler API reference
+
+This article provides the API reference for the Vertical Pod Autoscaler feature of Azure Kubernetes Service.
+
+This reference is based on version 0.13.0 of the AKS implementation of VPA.
+
+## VerticalPodAutoscaler
+
+|Name |Object |Description |
+|--|--|--|
+|metadata |ObjectMeta | Standard [object metadata][object-metadata-ref].|
+|spec |VerticalPodAutoscalerSpec |The desired behavior of the Vertical Pod Autoscaler.|
+|status |VerticalPodAutoscalerStatus |The most recently observed status of the Vertical Pod Autoscaler. |
+
+## VerticalPodAutoscalerSpec
+
+|Name |Object |Description |
+|--|--|--|
+|targetRef |CrossVersionObjectReference | Reference to the controller managing the set of pods for the autoscaler to control. For example, a Deployment or a StatefulSet. You can point a Vertical Pod Autoscaler at any controller that has a [Scale][scale-ref] subresource. Typically, the Vertical Pod Autoscaler retrieves the pod set from the controller's ScaleStatus. |
+|updatePolicy |PodUpdatePolicy |Specifies whether recommended updates are applied when a pod is started and whether recommended updates are applied during the life of a pod. |
+|resourcePolicy |PodResourcePolicy |Specifies policies for how CPU and memory requests are adjusted for individual containers. The resource policy can be used to set constraints on the recommendations for individual containers. If not specified, the autoscaler computes recommended resources for all containers in the pod, without additional constraints.|
+|recommenders |VerticalPodAutoscalerRecommenderSelector |Recommender is responsible for generating recommendations for the VPA object. Leave empty to use the default recommender. Otherwise, the list can contain exactly one entry for a user-provided alternative recommender. |
+
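+As a minimal sketch of how these spec fields fit together, the following manifest targets a hypothetical deployment named `my-app`; the names and resource values are placeholders, and the sketch assumes VPA is already enabled on the cluster.
+
+```bash
+# "my-app" and the resource values are placeholders.
+kubectl apply -f - <<EOF
+apiVersion: autoscaling.k8s.io/v1
+kind: VerticalPodAutoscaler
+metadata:
+  name: my-app-vpa
+spec:
+  targetRef:               # the controller whose pods the autoscaler manages
+    apiVersion: apps/v1
+    kind: Deployment
+    name: my-app
+  updatePolicy:            # see PodUpdatePolicy
+    updateMode: "Auto"
+  resourcePolicy:          # see PodResourcePolicy and ContainerResourcePolicy
+    containerPolicies:
+    - containerName: "*"
+      minAllowed:
+        cpu: 100m
+        memory: 128Mi
+      maxAllowed:
+        cpu: 1
+        memory: 1Gi
+      controlledResources: ["cpu", "memory"]
+EOF
+```
+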
+## VerticalPodAutoscalerList
+
+|Name |Object |Description |
+|--|--|--|
+|metadata |ObjectMeta |Standard [object metadata][object-metadata-ref]. |
+|items |VerticalPodAutoscaler (array) |A list of Vertical Pod Autoscaler objects. |
+
+## PodUpdatePolicy
+
+|Name |Object |Description |
+|--|--|--|
+|updateMode |string |A string that specifies whether recommended updates are applied when a pod is started and whether recommended updates are applied during the life of a pod. Possible values are `Off`, `Initial`, `Recreate`, and `Auto`. The default is `Auto` if you don't specify a value. |
+|minReplicas |int32 |A value representing the minimal number of replicas which need to be alive for Updater to attempt pod eviction (pending other checks like Pod Disruption Budget). Only positive values are allowed. Defaults to global `--min-replicas` flag, which is set to `2`. |
+
+## PodResourcePolicy
+
+|Name |Object |Description |
+|--|--|--|
+|containerPolicies |ContainerResourcePolicy |An array of resource policies for individual containers. There can be at most one entry for every named container, and optionally a single wildcard entry with `containerName = '*'`, which handles all containers that do not have individual policies. |
+
+## ContainerResourcePolicy
+
+|Name |Object |Description |
+|--|--|--|
+|containerName |string |A string that specifies the name of the container that the policy applies to. If not specified, the policy serves as the default policy. |
+|mode |ContainerScalingMode |Specifies whether recommended updates are applied to the container when it is started and whether recommended updates are applied during the life of the container. Possible values are `Off` and `Auto`. The default is `Auto` if you don't specify a value. |
+|minAllowed |ResourceList |Specifies the minimum CPU request and memory request allowed for the container. By default, there is no minimum applied. |
+|maxAllowed |ResourceList |Specifies the maximum CPU request and memory request allowed for the container. By default, there is no maximum applied. |
+|controlledResources |[]ResourceName |Specifies the type of recommendations that are computed (and possibly applied) by the Vertical Pod Autoscaler. If empty, the default of [ResourceCPU, ResourceMemory] is used. |
+
+## VerticalPodAutoscalerRecommenderSelector
+
+|Name |Object |Description |
+|--|--|--|
+|name |string |A string that specifies the name of the recommender responsible for generating recommendations for this object. |
+
+## VerticalPodAutoscalerStatus
+
+|Name |Object |Description |
+|--|--|--|
+|recommendation |RecommendedPodResources |The most recently recommended CPU and memory requests. |
+|conditions |VerticalPodAutoscalerCondition | An array that describes the current state of the Vertical Pod Autoscaler. |
+
+## RecommendedPodResources
+
+|Name |Object |Description |
+|--|--|--|
+|containerRecommendations |RecommendedContainerResources |An array of resource recommendations for individual containers. |
+
+## RecommendedContainerResources
+
+|Name |Object |Description |
+|--|--|--|
+|containerName |string| A string that specifies the name of the container that the recommendation applies to. |
+|target |ResourceList |The recommended CPU request and memory request for the container. |
+|lowerBound |ResourceList |The minimum recommended CPU request and memory request for the container. This amount is not guaranteed to be sufficient for the application to be stable. Running with smaller CPU and memory requests is likely to have a significant impact on performance or availability. |
+|upperBound |ResourceList |The maximum recommended CPU request and memory request for the container. CPU and memory requests higher than these values are likely to be wasted. |
+|uncappedTarget |ResourceList |The most recent resource recommendation computed by the autoscaler, based on actual resource usage, not taking into account the **Container Resource Policy**. If actual resource usage causes the target to violate the **Container Resource Policy**, this might be different from the bounded recommendation. This field does not affect actual resource assignment. It is used only as a status indication. |
+
+## VerticalPodAutoscalerCondition
+
+|Name |Object |Description |
+|--|--|--|
+|type |VerticalPodAutoscalerConditionType |The type of condition being described. Possible values are `RecommendationProvided`, `LowConfidence`, `NoPodsMatched`, and `FetchingHistory`. |
+|status |ConditionStatus |The status of the condition. Possible values are `True`, `False`, and `Unknown`. |
+|lastTransitionTime |Time |The last time the condition made a transition from one status to another. |
+|reason |string |The reason for the last transition from one status to another. |
+|message |string |A human-readable string that gives details about the last transition from one status to another. |
+
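+To inspect these status fields on a live object, you can query the resource directly; `my-app-vpa` is a placeholder name.
+
+```bash
+# Show the full object, including status.recommendation and status.conditions.
+kubectl get vpa my-app-vpa --output yaml
+
+# Print only the per-container recommendations.
+kubectl get vpa my-app-vpa --output jsonpath='{.status.recommendation.containerRecommendations}'
+```
+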
+## Next steps
+
+See [Vertical Pod Autoscaler][vertical-pod-autoscaler] to understand how to improve cluster resource utilization and free up CPU and memory for other pods.
+
+<!-- EXTERNAL LINKS -->
+[object-metadata-ref]: https://github.com/kubernetes/community/blob/master/contributors/devel/sig-architecture/api-conventions.md#metadata
+[scale-ref]: https://v1-25.docs.kubernetes.io/docs/reference/generated/kubernetes-api/v1.25/#scalespec-v1-autoscaling
+
+<!-- INTERNAL LINKS -->
+[vertical-pod-autoscaler]: vertical-pod-autoscaler.md
aks Vertical Pod Autoscaler https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/vertical-pod-autoscaler.md
Title: Vertical Pod Autoscaling (preview) in Azure Kubernetes Service (AKS)
+ Title: Vertical Pod Autoscaling in Azure Kubernetes Service (AKS)
description: Learn how to vertically autoscale your pod on an Azure Kubernetes Service (AKS) cluster. - Previously updated : 03/17/2023+ Last updated : 09/28/2023
-# Vertical Pod Autoscaling (preview) in Azure Kubernetes Service (AKS)
+# Vertical Pod Autoscaling in Azure Kubernetes Service (AKS)
-This article provides an overview of Vertical Pod Autoscaler (VPA) (preview) in Azure Kubernetes Service (AKS), which is based on the open source [Kubernetes](https://github.com/kubernetes/autoscaler/tree/master/vertical-pod-autoscaler) version. When configured, it automatically sets resource requests and limits on containers per workload based on past usage. VPA makes certain pods are scheduled onto nodes that have the required CPU and memory resources.
+This article provides an overview of Vertical Pod Autoscaler (VPA) in Azure Kubernetes Service (AKS), which is based on the open source [Kubernetes](https://github.com/kubernetes/autoscaler/tree/master/vertical-pod-autoscaler) version. When configured, it automatically sets resource requests and limits on containers per workload based on past usage. VPA frees up CPU and memory for other pods and helps make effective use of your AKS cluster.
+
+Vertical Pod autoscaling provides recommendations for resource usage over time. To manage sudden increases in resource usage, use the [Horizontal Pod Autoscaler][horizontal-pod-autoscaling], which scales the number of pod replicas as needed.
## Benefits
Vertical Pod Autoscaler provides the following benefits:
* It analyzes and adjusts processor and memory resources to *right size* your applications. VPA isn't only responsible for scaling up, but also for scaling down based on their resource use over time.
-* A Pod is evicted if it needs to change its resource requests if its scaling mode is set to *auto* or *recreate*.
+* A pod is evicted if it needs to change its resource requests if its scaling mode is set to *auto* or *recreate*.
* Set CPU and memory constraints for individual containers by specifying a resource policy
Vertical Pod Autoscaler provides the following benefits:
## Limitations
-* Vertical Pod autoscaling supports a maximum of 500 `VerticalPodAutoscaler` objects per cluster.
-* With this preview release, you can't change the `controlledValue` and `updateMode` of `managedCluster` object.
+* Vertical Pod autoscaling supports a maximum of 1,000 pods associated with `VerticalPodAutoscaler` objects per cluster.
+
+* VPA might recommend more resources than are available in the cluster, which prevents the pod from being assigned to a node and running, because the node doesn't have sufficient resources. You can overcome this limitation by setting the *LimitRange* to the maximum available resources per namespace, which ensures pods don't ask for more resources than specified (a sketch of a LimitRange follows this list). Additionally, you can set maximum allowed resource recommendations per pod in a `VerticalPodAutoscaler` object. Be aware that VPA can't fully overcome an insufficient node resource issue: the limit range is fixed, but node resource usage changes dynamically.
+
+* We don't recommend using Vertical Pod Autoscaler with [Horizontal Pod Autoscaler][horizontal-pod-autoscaler-overview], which scales based on the same CPU and memory usage metrics.
+
+* VPA Recommender only stores up to eight days of historical data.
+
+* VPA does not support JVM-based workloads due to limited visibility into actual memory usage of the workload.
+
+* Running your own implementation of VPA alongside this managed implementation of VPA isn't recommended or supported. However, having an extra or customized recommender is supported.
+
+* AKS Windows containers are not supported.
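+
+As a minimal sketch of the *LimitRange* workaround mentioned above, the following manifest caps container resources in a hypothetical namespace named `my-namespace`; all names and values are placeholders to tune against the resources actually available per node.
+
+```bash
+# "my-namespace" and the values are placeholders.
+kubectl apply -f - <<EOF
+apiVersion: v1
+kind: LimitRange
+metadata:
+  name: cap-container-resources
+  namespace: my-namespace
+spec:
+  limits:
+  - type: Container
+    max:                    # maximum limits (and therefore requests) for any single container
+      cpu: "2"
+      memory: 2Gi
+    defaultRequest:         # applied when a container doesn't set its own requests
+      cpu: 100m
+      memory: 128Mi
+EOF
+```
+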
## Before you begin * AKS cluster is running Kubernetes version 1.24 or higher.
-* The Azure CLI version 2.0.64 or later installed and configured. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][install-azure-cli].
+* The Azure CLI version 2.52.0 or later installed and configured. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][install-azure-cli].
* `kubectl` should be connected to the cluster you want to install VPA.
-## API Object
+## VPA overview
-The Vertical Pod Autoscaler is an API resource in the Kubernetes autoscaling API group. The version supported in this preview release is 0.11 can be found in the [Kubernetes autoscaler repo][github-autoscaler-repo-v011].
+### API object
-## Install the aks-preview Azure CLI extension
+The Vertical Pod Autoscaler is an API resource in the Kubernetes autoscaling API group. The version supported is 0.11 and higher, and can be found in the [Kubernetes autoscaler repo][github-autoscaler-repo-v011].
+The VPA object consists of three components:
-To install the aks-preview extension, run the following command:
+- **Recommender** - monitors the current and past resource consumption and, based on it, provides recommended values for the containers' CPU and memory requests and limits. The **Recommender** monitors the metric history, Out of Memory (OOM) events, and the VPA deployment spec, and suggests fair requests. The limits are raised and lowered based on the resource requests and limits configuration that you provide.
-```azurecli-interactive
-az extension add --name aks-preview
-```
+- **Updater** - it checks which of the managed pods have correct resources set and, if not, kills them so that they can be recreated by their controllers with the updated requests.
-Run the following command to update to the latest version of the extension released:
+- **VPA Admission controller** - it sets the correct resource requests on new pods (either created or recreated by their controller due to the Updater's activity).
-```azurecli-interactive
-az extension update --name aks-preview
-```
+### VPA admission controller
-## Register the 'AKS-VPAPreview' feature flag
+VPA admission controller is a binary that registers itself as a Mutating Admission Webhook. When each pod is created, the admission controller gets a request from the API server, evaluates whether there's a matching VPA configuration, and uses the current recommendation to set resource requests in the pod.
-Register the `AKS-VPAPreview` feature flag by using the [az feature register][az-feature-register] command, as shown in the following example:
+A standalone job called `overlay-vpa-cert-webhook-check` runs outside of the VPA admission controller. The `overlay-vpa-cert-webhook-check` job creates and renews the certificates, and registers the VPA admission controller as a `MutatingWebhookConfiguration`.
-```azurecli-interactive
-az feature register --namespace "Microsoft.ContainerService" --name "AKS-VPAPreview"
-```
+For high availability, AKS supports two admission controller replicas.
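+
+To confirm the webhook registration on a cluster where VPA is enabled, you can list the mutating webhook configurations; the exact object name may vary.
+
+```bash
+kubectl get mutatingwebhookconfigurations | grep -i vpa
+```
+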
-It takes a few minutes for the status to show *Registered*. Verify the registration status by using the [az feature show][az-feature-show] command:
+### VPA object operation modes
-```azurecli-interactive
-az feature show --namespace "Microsoft.ContainerService" --name "AKS-VPAPreview"
-```
+A Vertical Pod Autoscaler resource is inserted for each controller whose resource requirements you want automatically computed. This is most commonly a *deployment*. There are four modes in which VPAs operate:
-When the status reflects *Registered*, refresh the registration of the *Microsoft.ContainerService* resource provider by using the [az provider register][az-provider-register] command:
+* `Auto` - VPA assigns resource requests during pod creation and updates existing pods using the preferred update mechanism. Currently, `Auto` is equivalent to `Recreate` and is also the default mode. Once restart-free ("in-place") updates of pod requests are available, they may be used as the preferred update mechanism by `Auto` mode. When using `Recreate` mode, VPA evicts a pod if it needs to change its resource requests, which may cause all pods to be restarted at once and lead to application inconsistencies. You can limit restarts and maintain consistency in this situation by using a [PodDisruptionBudget][pod-disruption-budget].
+* `Recreate` - VPA assigns resource requests during pod creation and also updates existing pods by evicting them when the requested resources differ significantly from the new recommendation (respecting the Pod Disruption Budget, if defined). Use this mode rarely, only if you need to ensure that the pods are restarted whenever the resource request changes. Otherwise, the `Auto` mode is preferred, which may take advantage of restart-free updates once they're available.
+* `Initial` - VPA only assigns resource requests during pod creation and never changes afterwards.
+* `Off` - VPA doesn't automatically change the resource requirements of the pods. The recommendations are calculated and can be inspected in the VPA object.
-```azurecli-interactive
-az provider register --namespace Microsoft.ContainerService
-```
+## Deployment pattern during application development
+
+If you're unfamiliar with VPA, a recommended deployment pattern is to perform the following steps during application development: identify the application's unique resource utilization characteristics, test VPA to verify that it's functioning properly, and test it alongside other Kubernetes components to optimize resource utilization of the cluster.
+
+1. Set `updateMode = off` in your production cluster and run VPA in recommendation mode so you can test and gain familiarity with VPA. Running with `updateMode = off` avoids introducing a misconfiguration that might cause an outage. A sketch of this recommendation-only configuration follows this list.
+
+2. Establish observability first by collecting actual resource utilization telemetry over a given period of time. This helps you understand the behavior and signs of symptoms or issues from container and pod resources influenced by the workloads running on them.
+
+3. Get familiar with the monitoring data to understand the performance characteristics. Based on this insight, set the desired requests and limits accordingly, and then apply them in the next deployment or upgrade.
+
+4. Set `updateMode` value to `Auto`, `Recreate`, or `Initial` depending on your requirements.
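+
+As a sketch of step 1, the following VPA runs in recommendation-only mode against a hypothetical deployment named `my-app`; no pods are evicted while you review the recommendations. The deployment name is a placeholder.
+
+```bash
+# "my-app" is a placeholder deployment name; updateMode "Off" only computes recommendations.
+kubectl apply -f - <<EOF
+apiVersion: autoscaling.k8s.io/v1
+kind: VerticalPodAutoscaler
+metadata:
+  name: my-app-vpa
+spec:
+  targetRef:
+    apiVersion: apps/v1
+    kind: Deployment
+    name: my-app
+  updatePolicy:
+    updateMode: "Off"
+EOF
+
+# Review the computed recommendations before changing updateMode to Initial, Recreate, or Auto.
+kubectl describe vpa my-app-vpa
+```
+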
## Deploy, upgrade, or disable VPA on a cluster
vpa-updater-56f9bfc96f-jgq2g 1/1 Running 0 41m
## Test your Vertical Pod Autoscaler installation
-The following steps create a deployment with two pods, each running a single container that requests 100 millicores and tries to utilize slightly above 500 millicores. Also created is a VPA config pointing at the deployment. The VPA observes the behavior of the pods, and after about five minutes, they're updated with a higher CPU request.
+The following steps create a deployment with two pods, each running a single container that requests 100 millicores and tries to utilize slightly above 500 millicores. Also a VPA config is created, pointing at the deployment. The VPA observes the behavior of the pods, and after about five minutes, they're updated with a higher CPU request.
1. Create a file named `hamster.yaml` and copy in the following manifest of the Vertical Pod Autoscaler example from the [kubernetes/autoscaler][kubernetes-autoscaler-github-repo] GitHub repository.
The following steps create a deployment with two pods, each running a single con
Environment: <none> ```
-## Set Pod Autoscaler requests automatically
+## Set Pod Autoscaler requests
-Vertical Pod autoscaling uses the `VerticalPodAutoscaler` object to automatically set resource requests on Pods when the updateMode is set to **Auto** or **Recreate**.
+Vertical Pod autoscaling uses the `VerticalPodAutoscaler` object to automatically set resource requests on pods when the `updateMode` is set to **Auto**. You can set a different value depending on your requirements and testing. In this example, updateMode is set to `Recreate`.
1. Enable VPA for your cluster by running the following command. Replace cluster name `myAKSCluster` with the name of your AKS cluster and replace `myResourceGroup` with the name of the resource group the cluster is hosted in.
Vertical Pod autoscaling uses the `VerticalPodAutoscaler` object to automaticall
args: ["-c", "while true; do timeout 0.5s yes >/dev/null; sleep 0.5s; done"] ```
- This manifest describes a deployment that has two Pods. Each Pod has one container that requests 100 milliCPU and 50 MiB of memory.
+ This manifest describes a deployment that has two pods. Each pod has one container that requests 100 milliCPU and 50 MiB of memory.
3. Create the pod with the [kubectl create][kubectl-create] command, as shown in the following example:
Vertical Pod autoscaling uses the `VerticalPodAutoscaler` object to automaticall
kind: Deployment name: vpa-auto-deployment updatePolicy:
- updateMode: "Auto"
+ updateMode: "Recreate"
```
- The `targetRef.name` value specifies that any Pod that is controlled by a deployment named `vpa-auto-deployment` belongs to this `VerticalPodAutoscaler`. The `updateMode` value of `Auto` means that the Vertical Pod Autoscaler controller can delete a Pod, adjust the CPU and memory requests, and then start a new Pod.
+ The `targetRef.name` value specifies that any pod that's controlled by a deployment named `vpa-auto-deployment` belongs to `VerticalPodAutoscaler`. The `updateMode` value of `Recreate` means that the Vertical Pod Autoscaler controller can delete a pod, adjust the CPU and memory requests, and then create a new pod.
6. Apply the manifest to the cluster using the [kubectl apply][kubectl-apply] command:
Vertical Pod autoscaling uses the `VerticalPodAutoscaler` object to automaticall
kubectl create -f azure-vpa-auto.yaml ```
-7. Wait a few minutes, and view the running Pods again by running the following [kubectl get][kubectl-get] command:
+7. Wait a few minutes, and view the running pods again by running the following [kubectl get][kubectl-get] command:
```bash kubectl get pods
Vertical Pod autoscaling uses the `VerticalPodAutoscaler` object to automaticall
vpa-auto-deployment-54465fb978-vbj68 1/1 Running 0 109s ```
-8. Get detailed information about one of your running Pods by using the [Kubectl get][kubectl-get] command. Replace `podName` with the name of one of your Pods that you retrieved in the previous step.
+8. Get detailed information about one of your running pods by using the [Kubectl get][kubectl-get] command. Replace `podName` with the name of one of your pods that you retrieved in the previous step.
```bash kubectl get pod podName --output yaml
Vertical Pod autoscaling uses the `VerticalPodAutoscaler` object to automaticall
The results show the `target` attribute specifies that for the container to run optimally, it doesn't need to change the CPU or the memory target. Your results may vary where the target CPU and memory recommendation are higher.
- The Vertical Pod Autoscaler uses the `lowerBound` and `upperBound` attributes to decide whether to delete a Pod and replace it with a new Pod. If a Pod has requests less than the lower bound or greater than the upper bound, the Vertical Pod Autoscaler deletes the Pod and replaces it with a Pod that meets the target attribute.
+ The Vertical Pod Autoscaler uses the `lowerBound` and `upperBound` attributes to decide whether to delete a pod and replace it with a new pod. If a pod has requests less than the lower bound or greater than the upper bound, the Vertical Pod Autoscaler deletes the pod and replaces it with a pod that meets the target attribute.
+
+## Extra Recommender for Vertical Pod Autoscaler
+
+In the VPA, one of the core components is the Recommender, which provides recommendations for resource usage based on real-time resource consumption. AKS deploys a Recommender when a cluster enables VPA. You can deploy a customized recommender or an extra recommender with the same image as the default one. The benefit of having a customized recommender is that you can customize your recommendation logic. With an extra recommender, you can partition VPA objects across multiple recommenders if there are many of them.
+
+The following example is an extra recommender that you apply to your existing AKS cluster. You then configure the VPA object to use the extra recommender.
+
+1. Create a file named `extra-recommender.yaml` and copy in the following manifest:
+
+ ```yml
+ apiVersion: apps/v1
+ kind: Deployment
+ metadata:
+ name: extra-recommender
+ namespace: kube-system
+ spec:
+ replicas: 1
+ selector:
+ matchLabels:
+ app: extra-recommender
+ template:
+ metadata:
+ labels:
+ app: extra-recommender
+ spec:
+ serviceAccountName: vpa-recommender
+ securityContext:
+ runAsNonRoot: true
+ runAsUser: 65534 # nobody
+ containers:
+ - name: recommender
+ image: registry.k8s.io/autoscaling/vpa-recommender:0.13.0
+ imagePullPolicy: Always
+ args:
+ - --recommender-name=extra-recommender
+ resources:
+ limits:
+ cpu: 200m
+ memory: 1000Mi
+ requests:
+ cpu: 50m
+ memory: 500Mi
+ ports:
+ - name: prometheus
+ containerPort: 8942
+ ```
+
+2. Deploy the `extra-recommender.yaml` Vertical Pod Autoscaler example using the [kubectl apply][kubectl-apply] command and specify the name of your YAML manifest.
+
+ ```bash
+ kubectl apply -f extra-recommender.yaml
+ ```
+
+ After a few minutes, the extra recommender pod is running in the `kube-system` namespace.
+
+3. Create a file named `hamster-extra-recommender.yaml` and copy in the following manifest:
+
+ ```yml
+ apiVersion: "autoscaling.k8s.io/v1"
+ kind: VerticalPodAutoscaler
+ metadata:
+ name: hamster-vpa
+ spec:
+ recommenders:
+ - name: 'extra-recommender'
+ targetRef:
+ apiVersion: "apps/v1"
+ kind: Deployment
+ name: hamster
+ updatePolicy:
+ updateMode: "Auto"
+ resourcePolicy:
+ containerPolicies:
+ - containerName: '*'
+ minAllowed:
+ cpu: 100m
+ memory: 50Mi
+ maxAllowed:
+ cpu: 1
+ memory: 500Mi
+ controlledResources: ["cpu", "memory"]
+
+ ---
+ apiVersion: apps/v1
+ kind: Deployment
+ metadata:
+ name: hamster
+ spec:
+ selector:
+ matchLabels:
+ app: hamster
+ replicas: 2
+ template:
+ metadata:
+ labels:
+ app: hamster
+ spec:
+ securityContext:
+ runAsNonRoot: true
+ runAsUser: 65534 # nobody
+ containers:
+ - name: hamster
+ image: k8s.gcr.io/ubuntu-slim:0.1
+ resources:
+ requests:
+ cpu: 100m
+ memory: 50Mi
+ command: ["/bin/sh"]
+ args:
+ - "-c"
+ - "while true; do timeout 0.5s yes >; sleep 0.5s; done"
+ ```
+
+ If `memory` is not specified in `controlledResources`, the Recommender doesn't respond to OOM events. In this case, you're only setting CPU in `controlledResources`. `controlledValues` allows you to choose whether to update the container's resource requests using the `RequestsOnly` option, or both resource requests and limits using the `RequestsAndLimits` option. The default value is `RequestsAndLimits`. If you use the `RequestsAndLimits` option, **requests** are computed based on actual usage, and **limits** are calculated based on the current pod's request and limit ratio.
+
+ For example, if you start with a pod that requests 2 CPUs and limits to 4 CPUs, VPA always sets the limit to be twice as much as requests. The same principle applies to memory. When you use the `RequestsAndLimits` mode, it can serve as a blueprint for your initial application resource requests and limits.
+
+ You can simplify the VPA object by using `Auto` mode and computing recommendations for both CPU and memory.
+
+4. Deploy the `hamster-extra-recommender.yaml` example using the [kubectl apply][kubectl-apply] command and specify the name of your YAML manifest.
+
+ ```bash
+ kubectl apply -f hamster-extra-recommender.yaml
+ ```
+
+5. Wait for the vpa-updater to launch a new hamster pod, which should take a few minutes. You can monitor the pods using the [kubectl get][kubectl-get] command.
+
+ ```bash
+ kubectl get --watch pods -l app=hamster
+ ```
+
+6. When a new hamster pod is started, describe the pod running the [kubectl describe][kubectl-describe] command and view the updated CPU and memory reservations.
+
+ ```bash
+ kubectl describe pod hamster-<exampleID>
+ ```
+
+ The example output is a snippet of the information describing the pod:
+
+ ```output
+ State: Running
+ Started: Wed, 28 Sep 2022 15:09:51 -0400
+ Ready: True
+ Restart Count: 0
+ Requests:
+ cpu: 587m
+ memory: 262144k
+ Environment: <none>
+ ```
+
+7. To view updated recommendations from VPA, run the [kubectl describe][kubectl-describe] command to describe the hamster-vpa resource information.
+
+ ```bash
+ kubectl describe vpa/hamster-vpa
+ ```
+
+ The example output is a snippet of the information about the resource utilization:
+
+ ```output
+ State: Running
+ Started: Wed, 28 Sep 2022 15:09:51 -0400
+ Ready: True
+ Restart Count: 0
+ Requests:
+ cpu: 587m
+ memory: 262144k
+ Environment: <none>
+ Spec:
+ recommenders:
+ Name: extra-recommender
+ ```
+
+## Troubleshooting
+
+To diagnose problems with a VPA installation, perform the following steps.
+
+1. Check if all system components are running using the following command:
+
+ ```bash
+ kubectl --namespace=kube-system get pods|grep vpa
+ ```
+
+ The output should list three pods: recommender, updater, and admission-controller, all with a status of `Running`.
+
+2. Confirm if the system components log any errors. For each of the pods returned by the previous command, run the following command:
+
+ ```bash
+ kubectl --namespace=kube-system logs [pod name] | grep -e '^E[0-9]\{4\}'
+ ```
+
+3. Confirm that the custom resource definition was created by running the following command:
+
+ ```bash
+ kubectl get customresourcedefinition | grep verticalpodautoscalers
+ ```
## Next steps
-This article showed you how to automatically scale resource utilization, such as CPU and memory, of cluster nodes to match application requirements. You can also use the horizontal pod autoscaler to automatically adjust the number of pods that run your application. For steps on using the horizontal pod autoscaler, see [Scale applications in AKS][scale-applications-in-aks].
+This article showed you how to automatically scale resource utilization, such as CPU and memory, of cluster nodes to match application requirements.
+
+* You can also use the horizontal pod autoscaler to automatically adjust the number of pods that run your application. For steps on using the horizontal pod autoscaler, see [Scale applications in AKS][scale-applications-in-aks].
+
+* See the Vertical Pod Autoscaler [API reference] to learn more about the definitions for related VPA objects.
<!-- EXTERNAL LINKS --> [kubernetes-autoscaler-github-repo]: https://github.com/kubernetes/autoscaler/blob/master/vertical-pod-autoscaler/examples/hamster.yaml
This article showed you how to automatically scale resource utilization, such as
[kubectl-get]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#get [kubectl-describe]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#describe [github-autoscaler-repo-v011]: https://github.com/kubernetes/autoscaler/blob/vpa-release-0.11/vertical-pod-autoscaler/pkg/apis/autoscaling.k8s.io/v1/types.go
+[pod-disruption-budget]: https://kubernetes.io/docs/concepts/workloads/pods/disruptions/
<!-- INTERNAL LINKS --> [get-started-with-aks]: /azure/architecture/reference-architectures/containers/aks-start-here
This article showed you how to automatically scale resource utilization, such as
[az-provider-register]: /cli/azure/provider#az-provider-register [az-feature-register]: /cli/azure/feature#az-feature-register [az-feature-show]: /cli/azure/feature#az-feature-show
+[horizontal-pod-autoscaler-overview]: concepts-scale.md#horizontal-pod-autoscaler
aks Windows Aks Partner Solutions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/windows-aks-partner-solutions.md
Storage enables standardized and seamless storage interactions, ensuring high ap
![Logo of NetApp.](./media/windows-aks-partner-solutions/netapp.png)
-Astra provides dynamic storage provisioning for stateful workloads on Azure Kubernetes Service (AKS). It also provides data protection using snapshots and clones. Provision SMB volumes through the Kubernetes control plane, making storage seamless and on-demand for all your Windows AKS workloads.
+[Astra](https://www.netapp.com/cloud-services/astra/) provides dynamic storage provisioning for stateful workloads on Azure Kubernetes Service (AKS). It also provides data protection using snapshots and clones. Provision SMB volumes through the Kubernetes control plane, making storage seamless and on-demand for all your Windows AKS workloads.
Follow the steps provided in [this blog](https://techcommunity.microsoft.com/t5/azure-architecture-blog/azure-netapp-files-smb-volumes-for-azure-kubernetes-services/ba-p/3052900) post to dynamically provision SMB volumes for Windows AKS workloads.
app-service App Gateway With Service Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/networking/app-gateway-with-service-endpoints.md
na Previously updated : 08/04/2021 Last updated : 09/29/2023 ms.devlang: azurecli # Application Gateway integration
-There are three variations of App Service that require slightly different configuration of the integration with Azure Application Gateway. The variations include regular App Service - also known as multi-tenant, Internal Load Balancer (ILB) App Service Environment and External App Service Environment. This article will walk through how to configure it with App Service (multi-tenant) using service endpoint to secure traffic. The article will also discuss considerations around using private endpoint and integrating with ILB, and External App Service Environment. Finally the article has considerations on scm/kudu site.
+There are three variations of App Service that require slightly different configuration of the integration with Azure Application Gateway. The variations include regular App Service (also known as multitenant), Internal Load Balancer (ILB) App Service Environment, and External App Service Environment. This article walks through how to configure App Service (multitenant) using service endpoints to secure traffic. The article also discusses considerations around using private endpoints and integrating with ILB and External App Service Environments. Finally, the article covers considerations for the scm/Kudu site.
-## Integration with App Service (multi-tenant)
-App Service (multi-tenant) has a public internet facing endpoint. Using [service endpoints](../../virtual-network/virtual-network-service-endpoints-overview.md) you can allow traffic only from a specific subnet within an Azure Virtual Network and block everything else. In the following scenario, we'll use this functionality to ensure that an App Service instance can only receive traffic from a specific Application Gateway instance.
+## Integration with App Service (multitenant)
+App Service (multitenant) has a public internet facing endpoint. Using [service endpoints](../../virtual-network/virtual-network-service-endpoints-overview.md) you can allow traffic only from a specific subnet within an Azure Virtual Network and block everything else. In the following scenario, we use this functionality to ensure that an App Service instance can only receive traffic from a specific Application Gateway instance.
:::image type="content" source="./media/app-gateway-with-service-endpoints/service-endpoints-appgw.png" alt-text="Diagram shows the Internet flowing to an Application Gateway in an Azure Virtual Network and flowing from there through a firewall icon to instances of apps in App Service.":::
-There are two parts to this configuration besides creating the App Service and the Application Gateway. The first part is enabling service endpoints in the subnet of the Virtual Network where the Application Gateway is deployed. Service endpoints will ensure all network traffic leaving the subnet towards the App Service will be tagged with the specific subnet ID. The second part is to set an access restriction of the specific web app to ensure that only traffic tagged with this specific subnet ID is allowed. You can configure it using different tools depending on preference.
+There are two parts to this configuration besides creating the App Service and the Application Gateway. The first part is enabling service endpoints in the subnet of the Virtual Network where the Application Gateway is deployed. Service endpoints ensure all network traffic leaving the subnet towards the App Service is tagged with the specific subnet ID. The second part is to set an access restriction of the specific web app to ensure that only traffic tagged with this specific subnet ID is allowed. You can configure it using different tools depending on preference.
## Using Azure portal
-With Azure portal, you follow four steps to provision and configure the setup. If you have existing resources, you can skip the first steps.
+With Azure portal, you follow four steps to create and configure the setup. If you have existing resources, you can skip the first steps.
1. Create an App Service using one of the Quickstarts in the App Service documentation, for example [.NET Core Quickstart](../quickstart-dotnetcore.md) 2. Create an Application Gateway using the [portal Quickstart](../../application-gateway/quick-create-portal.md), but skip the Add backend targets section. 3. Configure [App Service as a backend in Application Gateway](../../application-gateway/configure-web-app.md), but skip the Restrict access section. 4. Finally create the [access restriction using service endpoints](../../app-service/app-service-ip-restrictions.md#set-a-service-endpoint-based-rule).
-You can now access the App Service through Application Gateway, but if you try to access the App Service directly, you should receive a 403 HTTP error indicating that the web site is stopped.
+You can now access the App Service through Application Gateway. If you try to access the App Service directly, you should receive a 403 HTTP error indicating that the web site is stopped.
:::image type="content" source="./media/app-gateway-with-service-endpoints/website-403-forbidden.png" alt-text="Screenshot shows the text of an Error 403 - Forbidden."::: ## Using Azure Resource Manager template
-The [Resource Manager deployment template][template-app-gateway-app-service-complete] will provision a complete scenario. The scenario consists of an App Service instance locked down with service endpoints and access restriction to only receive traffic from Application Gateway. The template includes many Smart Defaults and unique postfixes added to the resource names for it to be simple. To override them, you'll have to clone the repo or download the template and edit it.
+The [Resource Manager deployment template][template-app-gateway-app-service-complete] creates a complete scenario. The scenario consists of an App Service instance locked down with service endpoints and access restriction to only receive traffic from Application Gateway. The template includes many smart defaults and unique postfixes added to the resource names to keep it simple. To override them, you have to clone the repo or download the template and edit it.
-To apply the template you can use the Deploy to Azure button found in the description of the template, or you can use appropriate PowerShell/CLI.
+To apply the template, you can use the Deploy to Azure button found in the description of the template, or you can use appropriate PowerShell/CLI.
## Using Azure CLI
-The [Azure CLI sample](../../app-service/scripts/cli-integrate-app-service-with-application-gateway.md) will provision an App Service locked down with service endpoints and access restriction to only receive traffic from Application Gateway. If you only need to isolate traffic to an existing App Service from an existing Application Gateway, the following command is sufficient.
+The [Azure CLI sample](../../app-service/scripts/cli-integrate-app-service-with-application-gateway.md) creates an App Service locked down with service endpoints and access restriction to only receive traffic from Application Gateway. If you only need to isolate traffic to an existing App Service from an existing Application Gateway, the following command is sufficient.
```azurecli-interactive az webapp config access-restriction add --resource-group myRG --name myWebApp --rule-name AppGwSubnet --priority 200 --subnet mySubNetName --vnet-name myVnetName ```
-In the default configuration, the command will ensure both setup of the service endpoint configuration in the subnet and the access restriction in the App Service.
+In the default configuration, the command ensures both setup of the service endpoint configuration in the subnet and the access restriction in the App Service.
## Considerations when using private endpoint
-As an alternative to service endpoint, you can use private endpoint to secure traffic between Application Gateway and App Service (multi-tenant). You will need to ensure that Application Gateway can DNS resolve the private IP of the App Service apps or alternatively that you use the private IP in the backend pool and override the host name in the http settings.
+As an alternative to service endpoints, you can use a private endpoint to secure traffic between Application Gateway and App Service (multitenant). You need to ensure that Application Gateway can DNS resolve the private IP of the App Service apps. Alternatively, you can use the private IP in the backend pool and override the host name in the HTTP settings.
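
If you take the private IP route, the following is a hedged Azure CLI sketch; the gateway, backend pool, and HTTP settings names are placeholders from your own deployment, and `10.0.1.4` stands in for the app's private endpoint IP.

```azurecli-interactive
# Placeholder names and IP address; adjust to your Application Gateway configuration.
az network application-gateway address-pool update --resource-group myRG --gateway-name myAppGw \
    --name myBackendPool --servers 10.0.1.4
az network application-gateway http-settings update --resource-group myRG --gateway-name myAppGw \
    --name myHttpSettings --host-name mywebapp.azurewebsites.net
```
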
:::image type="content" source="./media/app-gateway-with-service-endpoints/private-endpoint-appgw.png" alt-text="Diagram shows the traffic flowing to an Application Gateway in an Azure Virtual Network and flowing from there through a private endpoint to instances of apps in App Service.":::
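If you go with the private IP approach, you first need the IP address that the private endpoint was assigned. A minimal Azure CLI sketch, assuming a private endpoint named `myPrivateEndpoint` in the same resource group (the names are placeholders, not values from this article):

```azurecli-interactive
# Look up the private IP address assigned to the App Service private endpoint
az network private-endpoint show \
  --resource-group myRG \
  --name myPrivateEndpoint \
  --query "customDnsConfigs[0].ipAddresses[0]" \
  --output tsv
```

You can then use that address in the Application Gateway backend pool and set the host name override in the HTTP settings.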
-Application Gateway will cache the DNS lookup results, so if you use FQDNs and rely on DNS lookup to get the private IP address, then you may need to restart the Application Gateway if the DNS update or link to Azure private DNS zone was done after configuring the backend pool. To restart the Application Gateway, you must start and stop the instance. You can do this with Azure CLI:
+Application Gateway caches the DNS lookup results. If you use FQDNs and rely on DNS lookup to get the private IP address, you may need to restart the Application Gateway if the DNS update or the link to the Azure private DNS zone happened after you configured the backend pool. To restart the Application Gateway, you must stop and start the instance. You can do this with Azure CLI:
```azurecli-interactive az network application-gateway stop --resource-group myRG --name myAppGw
az network application-gateway start --resource-group myRG --name myAppGw
## Considerations for ILB ASE ILB App Service Environment isn't exposed to the internet and traffic between the instance and an Application Gateway is therefore already isolated to the Virtual Network. The following [how-to guide](../environment/integrate-with-application-gateway.md) configures an ILB App Service Environment and integrates it with an Application Gateway using Azure portal.
-If you want to ensure that only traffic from the Application Gateway subnet is reaching the App Service Environment, you can configure a Network security group (NSG) which affect all web apps in the App Service Environment. For the NSG, you are able to specify the subnet IP range and optionally the ports (80/443). Make sure you don't override the [required NSG rules](../environment/network-info.md#network-security-groups) for App Service Environment to function correctly.
+If you want to ensure that only traffic from the Application Gateway subnet reaches the App Service Environment, you can configure a network security group (NSG) that affects all web apps in the App Service Environment. In the NSG, you can specify the subnet IP range and optionally the ports (80/443). Make sure you don't override the [required NSG rules](../environment/network-info.md#network-security-groups) that App Service Environment needs to function correctly.
-To isolate traffic to an individual web app you'll need to use ip-based access restrictions as service endpoints will not work for ASE. The IP address should be the private IP of the Application Gateway instance.
+To isolate traffic to an individual web app, use IP-based access restrictions, because service endpoints don't work with App Service Environment. The IP address should be the private IP address of the Application Gateway instance.
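As a sketch of what such a rule could look like, reusing the `az webapp config access-restriction add` command shown earlier (the resource names and the private IP address are placeholders):

```azurecli-interactive
# Allow only the Application Gateway private frontend IP to reach this web app
az webapp config access-restriction add \
  --resource-group myRG \
  --name myWebApp \
  --rule-name AppGwPrivateIp \
  --priority 200 \
  --ip-address 10.0.1.10/32
```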
## Considerations for External ASE
-External App Service Environment has a public facing load balancer like multi-tenant App Service. Service endpoints don't work for App Service Environment, and that's why you'll have to use ip-based access restrictions using the public IP of the Application Gateway instance. To create an External App Service Environment using the Azure portal, you can follow this [Quickstart](../environment/create-external-ase.md)
+An External App Service Environment has a public-facing load balancer like multitenant App Service. Because service endpoints don't work for App Service Environment, you have to use IP-based access restrictions with the public IP address of the Application Gateway instance. To create an External App Service Environment in the Azure portal, follow this [quickstart](../environment/create-external-ase.md).
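A hedged Azure CLI sketch of this restriction, assuming the Application Gateway frontend uses a public IP resource named `myAGPublicIPAddress` (all names are placeholders):

```azurecli-interactive
# Get the public IP address of the Application Gateway frontend
APPGW_IP=$(az network public-ip show \
  --resource-group myResourceGroupAG \
  --name myAGPublicIPAddress \
  --query ipAddress \
  --output tsv)

# Allow only that address to reach the web app
az webapp config access-restriction add \
  --resource-group myRG \
  --name myWebApp \
  --rule-name AppGwPublicIp \
  --priority 200 \
  --ip-address "$APPGW_IP/32"
```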
[template-app-gateway-app-service-complete]: https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.web/web-app-with-app-gateway-v2/ "Azure Resource Manager template for complete scenario"
If you want to use the same access restrictions as the main site, you can inheri
az webapp config access-restriction set --resource-group myRG --name myWebApp --use-same-restrictions-for-scm-site ```
-If you want to set individual access restrictions for the scm site, you can add access restrictions using the --scm-site flag like shown below.
+If you want to set individual access restrictions for the SCM site, you can add them by using the `--scm-site` flag, as shown here.
```azurecli-interactive az webapp config access-restriction add --resource-group myRG --name myWebApp --scm-site --rule-name KudoAccess --priority 200 --ip-address 208.130.0.0/16 ```
+## Considerations when using default domain
+Configuring Application Gateway to override the host name and use the default domain of App Service (typically `azurewebsites.net`) is the easiest way to configure the integration, and it doesn't require configuring a custom domain and certificate in App Service. [This article](/azure/architecture/best-practices/host-name-preservation) discusses the general considerations when overriding the original host name. In App Service, there are two scenarios where you need to pay attention to this configuration.
+
+### Authentication
+When you're using [the authentication feature](../overview-authentication-authorization.md) in App Service (also known as Easy Auth), your app typically redirects to the sign-in page. Because App Service doesn't know the original host name of the request, the redirect is done on the default domain name and usually results in an error. To work around the default redirect, you can configure authentication to inspect a forwarded header and adapt the redirect domain to the original domain. Application Gateway uses a header called `X-Original-Host`.
+By using [file-based configuration](../configure-authentication-file-based.md) for authentication, you can configure App Service to adapt to the original host name. Add this setting to your configuration file:
+
+```json
+{
+ ...
+ "httpSettings": {
+ "forwardProxy": {
+ "convention": "Custom",
+ "customHostHeaderName": "X-Original-Host"
+ }
+ }
+ ...
+}
+```
+
+### ARR affinity
+In multi-instance deployments, [ARR affinity](../configure-common.md?tabs=portal#configure-general-settings) ensures that client requests are routed to the same instance for the life of the session. ARR affinity doesn't work with host name overrides. For session affinity to work, you have to configure an identical custom domain and certificate in App Service and in Application Gateway, and not override the host name.
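If your app doesn't depend on session affinity, another option is to turn ARR affinity off instead. A minimal Azure CLI sketch (the app and resource group names are placeholders):

```azurecli-interactive
# Disable ARR affinity (client affinity) for the web app
az webapp update \
  --resource-group myRG \
  --name myWebApp \
  --client-affinity-enabled false
```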
+ ## Next steps For more information on the App Service Environment, see [App Service Environment documentation](../environment/index.yml). To further secure your web app, information about Web Application Firewall on Application Gateway can be found in the [Azure Web Application Firewall documentation](../../web-application-firewall/ag/ag-overview.md).+
+For more information, see the tutorial on [deploying a secure, resilient site with a custom domain](https://azure.github.io/AppService/2021/03/26/Secure-resilient-site-with-custom-domain) on App Service using either Azure Front Door or Application Gateway.
app-service Private Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/networking/private-endpoint.md
description: Connect privately to an App Service apps using Azure private endpoi
ms.assetid: 2dceac28-1ba6-4904-a15d-9e91d5ee162c Previously updated : 02/09/2023 Last updated : 09/29/2023
For pricing details, see [Azure Private Link pricing](https://azure.microsoft.co
* FTP access is provided through the inbound public IP address. Private endpoint doesn't support FTP access to the app. * IP-Based SSL isn't supported with private endpoints. * Apps that you configure with private endpoints cannot use [service endpoint-based access restriction rules](../overview-access-restrictions.md#access-restriction-rules-based-on-service-endpoints).
+* Private endpoint names must follow the rules defined for resources of type `Microsoft.Network/privateEndpoints`. For details, see the [naming rules](../../azure-resource-manager/management/resource-name-rules.md#microsoftnetwork).
We're improving Azure Private Link feature and private endpoint regularly, check [this article](../../private-link/private-endpoint-overview.md#limitations) for up-to-date information about limitations.
app-service Troubleshoot Diagnostic Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/troubleshoot-diagnostic-logs.md
In your application code, you use the usual logging facilities to send log messa
``` By default, ASP.NET Core uses the [Microsoft.Extensions.Logging.AzureAppServices](https://www.nuget.org/packages/Microsoft.Extensions.Logging.AzureAppServices) logging provider. For more information, see [ASP.NET Core logging in Azure](/aspnet/core/fundamentals/logging/). For information about WebJobs SDK logging, see [Get started with the Azure WebJobs SDK](./webjobs-sdk-get-started.md#enable-console-logging)-- Python applications can use the [OpenCensus package](../azure-monitor/app/opencensus-python.md) to send logs to the application diagnostics log.
+- Python applications can use the [OpenCensus package](/previous-versions/azure/azure-monitor/app/opencensus-python) to send logs to the application diagnostics log.
## Stream logs
application-gateway Quickstart Deploy Application Gateway For Containers Alb Controller https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/for-containers/quickstart-deploy-application-gateway-for-containers-alb-controller.md
You need to complete the following tasks prior to deploying Application Gateway
helm upgrade alb-controller oci://mcr.microsoft.com/application-lb/charts/alb-controller \ --namespace <helm-resource-namespace> \ --version 0.5.024542 \
+ --set albController.namespace=<alb-controller-namespace> \
--set albController.podIdentity.clientID=$(az identity show -g $RESOURCE_GROUP -n azure-alb-identity --query clientId -o tsv) ```
application-gateway Quick Create Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/quick-create-cli.md
for i in `seq 1 2`; do
--resource-group myResourceGroupAG \ --name myVM$i \ --nics myNic$i \
- --image UbuntuLTS \
+ --image Ubuntu2204 \
--admin-username azureuser \ --generate-ssh-keys \ --custom-data cloud-init.txt
application-gateway Redirect Http To Https Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/redirect-http-to-https-cli.md
In this example, you create a Virtual Machine Scale Set named *myvmss* that prov
az vmss create \ --name myvmss \ --resource-group myResourceGroupAG \
- --image UbuntuLTS \
+ --image Ubuntu2204 \
--admin-username azureuser \ --admin-password Azure123456! \ --instance-count 2 \
application-gateway Redirect Internal Site Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/redirect-internal-site-cli.md
In this example, you create a virtual machine scale set that supports the backen
az vmss create \ --name myvmss \ --resource-group myResourceGroupAG \
- --image UbuntuLTS \
+ --image Ubuntu2204 \
--admin-username azureuser \ --admin-password Azure123456! \ --instance-count 2 \
application-gateway Tutorial Manage Web Traffic Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/tutorial-manage-web-traffic-cli.md
In this example, you create a Virtual Machine Scale Set that provides servers fo
az vmss create \ --name myvmss \ --resource-group myResourceGroupAG \
- --image UbuntuLTS \
+ --image Ubuntu2204 \
--admin-username azureuser \ --admin-password Azure123456! \ --instance-count 2 \
application-gateway Tutorial Multiple Sites Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/tutorial-multiple-sites-cli.md
for i in `seq 1 2`; do
az vmss create \ --name myvmss$i \ --resource-group myResourceGroupAG \
- --image UbuntuLTS \
+ --image Ubuntu2204 \
--admin-username azureuser \ --admin-password Azure123456! \ --instance-count 2 \
application-gateway Tutorial Ssl Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/tutorial-ssl-cli.md
In this example, you create a Virtual Machine Scale Set that provides servers fo
az vmss create \ --name myvmss \ --resource-group myResourceGroupAG \
- --image UbuntuLTS \
+ --image Ubuntu2204 \
--admin-username azureuser \ --admin-password Azure123456! \ --instance-count 2 \
application-gateway Tutorial Url Redirect Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/tutorial-url-redirect-cli.md
for i in `seq 1 3`; do
az vmss create \ --name myvmss$i \ --resource-group myResourceGroupAG \
- --image UbuntuLTS \
+ --image Ubuntu2204 \
--admin-username <azure-user> \ --admin-password <password> \ --instance-count 2 \
application-gateway Tutorial Url Route Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/tutorial-url-route-cli.md
for i in `seq 1 3`; do
az vmss create \ --name myvmss$i \ --resource-group myResourceGroupAG \
- --image UbuntuLTS \
+ --image Ubuntu2204 \
--admin-username azureuser \ --admin-password Azure123456! \ --instance-count 2 \
azure-arc Extensions Release https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/extensions-release.md
Title: "Available extensions for Azure Arc-enabled Kubernetes clusters" Previously updated : 09/26/2023 Last updated : 09/29/2023 description: "See which extensions are currently available for Azure Arc-enabled Kubernetes clusters and view release notes."
The currently supported versions of the `microsoft.flux` extension are described
### 1.7.7 (September 2023)
-> [!NOTE]
-> We have started to roll out this release across regions. We'll remove this note once version 1.7.6 is available to all supported regions.
- Flux version: [Release v2.0.1](https://github.com/fluxcd/flux2/releases/tag/v2.0.1) - source-controller: v1.0.1
azure-arc Network Requirements https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/network-requirements.md
Title: Azure Arc-enabled Kubernetes network requirements description: Learn about the networking requirements to connect Kubernetes clusters to Azure Arc. Previously updated : 08/15/2023 Last updated : 09/28/2023
azure-functions Functions Bindings Service Bus Output https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-service-bus-output.md
This article supports both programming models.
# [Isolated worker model](#tab/isolated-process)
-The following example shows a [C# function](dotnet-isolated-process-guide.md) that receives a Service Bus queue message, logs the message, and sends a message to different Service Bus queue:
+This code defines and initializes the `ILogger`:
+This example shows a [C# function](dotnet-isolated-process-guide.md) that receives a message and writes it to a second queue:
+ # [In-process model](#tab/in-process)
azure-functions Functions Bindings Service Bus Trigger https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-service-bus-trigger.md
This article supports both programming models.
# [Isolated worker model](#tab/isolated-process)
-The following example shows a [C# function](dotnet-isolated-process-guide.md) that receives a Service Bus queue message, logs the message, and sends a message to different Service Bus queue:
+This code defines and initializes the `ILogger`:
+This example shows a [C# function](dotnet-isolated-process-guide.md) that receives a single Service Bus queue message and writes it to the logs:
++
+This example shows a [C# function](dotnet-isolated-process-guide.md) that receives multiple Service Bus queue messages in a single batch and writes each to the logs:
+ # [In-process model](#tab/in-process)
azure-functions Storage Considerations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/storage-considerations.md
Storage accounts created as part of the function app create flow in the Azure po
+ When creating your function app in the portal, you're only allowed to choose an existing storage account in the same region as the function app you're creating. This is a performance optimization and not a strict limitation. To learn more, see [Storage account location](#storage-account-location). ++ When creating your function app on a plan with [availability zone support](../reliability/reliability-functions.md#availability-zone-support) enabled, only [zone-redundant storage accounts](../storage/common/storage-redundancy.md#zone-redundant-storage) are supported.+ ## Storage account guidance Every function app requires a storage account to operate. When that account is deleted, your function app won't run. To troubleshoot storage-related issues, see [How to troubleshoot storage-related issues](functions-recover-storage-account.md). The following other considerations apply to the Storage account used by function apps.
azure-maps Migrate From Bing Maps Web App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/migrate-from-bing-maps-web-app.md
Web apps that use Bing Maps often use the Bing Maps V8 JavaScript SDK. The Azure
> * Show traffic data > * Add a ground overlay
-If migrating an existing web application, check to see if it's using an open-source map control library such as Cesium, Leaflet, and OpenLayers. If it's and you would prefer to continue to use that library, you can connect it to the Azure Maps tile services ([road tiles] \| [satellite tiles]). The following links provide details on how to use Azure Maps in commonly used open-source map control libraries.
+If migrating an existing web application, check to see if it's using an open-source map control library such as Cesium, Leaflet, or OpenLayers. If it is, and you prefer to continue using that library, you can connect your application to the Azure Maps [Render] services ([road tiles] | [satellite tiles]). The following links provide details on how to use Azure Maps in commonly used open-source map control libraries.
* [Cesium] - A 3D map control for the web. <!--[Cesium code samples] \|--> [Cesium plugin] * [Leaflet] – Lightweight 2D map control for the web. [Leaflet code samples] \| [Leaflet plugin]
Azure Maps more [open-source modules for the web SDK] that extend its capabiliti
The following are some of the key differences between the Bing Maps and Azure Maps Web SDKs to be aware of:
-* In addition to providing a hosted endpoint for accessing the Azure Maps Web SDK, an npm package is available for embedding the Web SDK into apps if preferred. For more information, see [Use the Azure Maps map control] in the Web SDK documentation. This package also includes TypeScript definitions.
+* In addition to providing a hosted endpoint for accessing the Azure Maps Web SDK, an npm package is available for embedding the Web SDK into apps if preferred. For more information, see [Use the Azure Maps map control]. This package also includes TypeScript definitions.
* Bing Maps provides two hosted branches of their SDK; Release and Experimental. The Experimental branch may receive multiple updates a day when new development is taking place. Azure Maps only hosts a release branch, however experimental features are created as custom modules in the open-source Azure Maps code samples project. Bing Maps used to have a frozen branch as well that was updated less frequently, thus reducing the risk of breaking changes due to a release. In Azure Maps, you can use the npm module and point to any previous minor version release. > [!TIP]
Loading a map in both SDKs follows the same set of steps;
**Key differences**
-* Bing maps require an account key specified in the script reference of the API or as a map option. Authentication credentials for Azure Maps are specified as options of the map class as either [Shared Key authentication] or [Azure Active Directory].
+* Bing Maps requires an account key specified in the script reference of the API or as a map option. Authentication credentials for Azure Maps are specified as options of the map class as either [Shared Key authentication] or [Azure AD].
* Bing Maps takes in a callback function in the script reference of the API that is used to call an initialization function to load the map. With Azure Maps, the onload event of the page should be used. * When using an ID to reference the `div` element that the map is rendered in, Bing Maps uses an HTML selector (`#myMap`), whereas Azure Maps only uses the ID value (`myMap`). * Coordinates in Azure Maps are defined as Position objects that can be specified as a simple number array in the format `[longitude, latitude]`.
Microsoft.Maps.loadModule('Microsoft.Maps.Traffic', function () {
**After: Azure Maps**
-Azure Maps provides several different options for displaying traffic. Traffic incidents, such as road closures and accidents can be displayed as icons on the map. Traffic flow, color coded roads, can be overlaid on the map and the colors can be modified to be based relative to the posted speed limit, relative to the normal expected delay, or absolute delay. Incident data in Azure Maps is updated every minute and flow data every 2 minutes.
+Azure Maps provides several different options for displaying traffic. Traffic incidents, such as road closures and accidents, can be displayed as icons on the map. Traffic flow (color-coded roads) can be overlaid on the map, and the colors can be set relative to the posted speed limit, relative to the normal expected delay, or as absolute delay. Incident data in Azure Maps is updated every minute and flow data every 2 minutes.
```javascript map.setTraffic({
Learn more about migrating from Bing Maps to Azure Maps.
[atlas.io.read function]: /javascript/api/azure-maps-spatial-io/atlas.io#read-stringarraybufferblob--spatialdatareadoptions- [atlas.layer.ImageLayer.getCoordinatesFromEdges]: /javascript/api/azure-maps-control/atlas.layer.imagelayer#getcoordinatesfromedges-number--number--number--number--number- [atlas.Shape]: /javascript/api/azure-maps-control/atlas.shape
-[Azure Active Directory]: azure-maps-authentication.md#azure-ad-authentication
+[Azure AD]: azure-maps-authentication.md#azure-ad-authentication
[Azure Maps account]: quick-demo-map-app.md#create-an-azure-maps-account [Azure Maps Glossary]: glossary.md [Azure Maps React Component]: https://github.com/WiredSolutions/react-azure-maps
Learn more about migrating from Bing Maps to Azure Maps.
[Popup with Media Content]: https://samples.azuremaps.com/?sample=popup-with-media-content [Popups on Shapes]: https://samples.azuremaps.com/?sample=popups-on-shapes [Pushpin clustering]: #pushpin-clustering
+[Render]: /rest/api/maps/render-v2
[Reusing Popup with Multiple Pins]: https://samples.azuremaps.com/?sample=reusing-popup-with-multiple-pins [road tiles]: /rest/api/maps/render-v2/get-map-tile
-[satellite tiles]: /rest/api/maps/render/getmapimagerytile
+[satellite tiles]: /rest/api/maps/render-v2/get-map-static-image
[Setting the map view]: #setting-the-map-view [Shared Key authentication]: azure-maps-authentication.md#shared-key-authentication [Show traffic data]: #show-traffic-data
azure-maps Migrate From Bing Maps Web Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/migrate-from-bing-maps-web-services.md
The following table provides the Azure Maps service APIs that provide similar fu
| Bing Maps service API | Azure Maps service API | ||-| | Autosuggest | [Search] |
-| Directions (including truck) | [Route directions] |
-| Distance Matrix | [Route Matrix] |
-| Imagery ΓÇô Static Map | [Render] |
-| Isochrones | [Route Range] |
-| Local Insights | [Search] + [Route Range] |
+| Directions (including truck) | [Get Route Directions] |
+| Distance Matrix | [Post Route Matrix] |
+| Imagery – Static Map | [Get Map Static Image] |
+| Isochrones | [Get Route Range] |
+| Local Insights | [Search] + [Get Route Range] |
| Local Search | [Search] | | Location Recognition (POIs) | [Search] | | Locations (forward/reverse geocoding) | [Search] |
-| Snap to Road | [POST Route directions] |
+| Snap to Road | [Post Route Directions] |
| Spatial Data Services (SDS) | [Search] + [Route] + other Azure Services |
-| Time Zone | [Time Zone] |
-| Traffic Incidents | [Traffic Incident Details] |
+| Time Zone | [Timezone] |
+| Traffic Incidents | [Get Traffic Incident Detail] |
The following service APIs aren't currently available in Azure Maps:
Azure Maps also has these REST web
* [Azure Maps Creator] ΓÇô Create a custom private digital twin of buildings and spaces. * [Spatial operations] ΓÇô Offload complex spatial calculations and operations, such as geofencing, to a service.
-* [Map Tiles] ΓÇô Access road and imagery tiles from Azure Maps as raster and vector tiles.
-* [Batch routing] ΓÇô Allows up to 1,000 route requests to be made in a single batch over a period of time. Routes are calculated in parallel on the server for faster processing.
+* [Get Map Tile] – Access road and imagery tiles from Azure Maps as raster and vector tiles.
+* [Post Route Directions Batch] – Allows up to 1,000 route requests to be made in a single batch over a period of time. Routes are calculated in parallel on the server for faster processing.
* [Traffic] Flow – Access real-time traffic flow data as both raster and vector tiles. * [Geolocation API] – Get the location of an IP address. * [Weather services] – Gain access to real-time and forecast weather data.
If you don't have an Azure subscription, create a [free account] before you begi
* A [subscription key] > [!NOTE]
-> For more information on authentication in Azure Maps, see [manage authentication in Azure Maps].
+> For more information on authentication in Azure Maps, see [Manage authentication in Azure Maps].
## Geocoding addresses
Geocoding is the process of converting an address (like `"1 Microsoft way, Redmo
Azure Maps provides several methods for geocoding addresses:
-* [Free-form address geocoding]: Specify a single address string (like `"1 Microsoft way, Redmond, WA"`) and process the request immediately. This service is recommended if you need to geocode individual addresses quickly.
-* [Structured address geocoding]: Specify the parts of a single address, such as the street name, city, country/region, and postal code and process the request immediately. This service is recommended if you need to geocode individual addresses quickly and the data is already parsed into its individual address parts.
-* [Batch address geocoding]: Create a request containing up to 10,000 addresses and have them processed over a period of time. All the addresses are geocoded in parallel on the server and when completed the full result set can be downloaded. This service is recommended for geocoding large data sets.
-* [Fuzzy search]: This API combines address geocoding with point of interest search. This API takes in a free-form string that can be an address, place, landmark, point of interest, or point of interest category and process the request immediately. This API is recommended for applications where users can search for addresses or points of interest from the same textbox.
-* [Fuzzy batch search]: Create a request containing up to 10,000 addresses, places, landmarks, or point of interests and have them processed over a period of time. All the data is processed in parallel on the server and when completed the full result set can be downloaded.
+* [Get Search Address]: Free-form address geocoding is used to specify a single address string (like `"1 Microsoft way, Redmond, WA"`) and processes the request immediately. This service is recommended if you need to geocode individual addresses quickly. An example request appears after this list.
+* [Get Search Address Structured]: Structured address geocoding is used to specify the parts of a single address, such as the street name, city, country/region, and postal code, and processes the request immediately. This service is recommended if you need to geocode individual addresses quickly and the data is already parsed into its individual address parts.
+* [Post Search Address Batch]: Use batch address geocoding to create a request containing up to 10,000 addresses and have them processed over a period of time. All the addresses are geocoded in parallel on the server, and when completed the full result set can be downloaded. This service is recommended for geocoding large data sets.
+* [Get Search Fuzzy]: The fuzzy search API combines address geocoding with point of interest search. This API takes in a free-form string that can be an address, place, landmark, point of interest, or point of interest category and processes the request immediately. This API is recommended for applications where users can search for addresses or points of interest from the same textbox.
+* [Post Search Fuzzy Batch]: Use the fuzzy batch search to create a request containing up to 10,000 addresses, places, landmarks, or points of interest and have them processed over a period of time. All the data is processed in parallel on the server, and when completed the full result set can be downloaded.
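For illustration, a free-form geocoding request with `curl` might look like the following sketch; `{subscription-key}` is a placeholder for your own key and the address is only an example:

```bash
# Free-form address geocoding (Get Search Address)
curl "https://atlas.microsoft.com/search/address/json?api-version=1.0&query=1%20Microsoft%20Way%2C%20Redmond%2C%20WA&subscription-key={subscription-key}"
```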
The following tables cross-reference the Bing Maps API parameters with the comparable API parameters in Azure Maps for structured and free-form address geocoding.
Reverse geocoding is the process of converting geographic coordinates (like long
Azure Maps provides several reverse geocoding methods:
-* [Address reverse geocoder]: Specify a single geographic coordinate to get its approximate address and process the request immediately.
-* [Cross street reverse geocoder]: Specify a single geographic coordinate to get nearby cross street information (for example, 1st & main) and process the request immediately.
-* [Batch address reverse geocoder]: Create a request containing up to 10,000 coordinates and have them processed over a period of time. All the data is processed in parallel on the server and when completed the full result set can be downloaded.
+* [Get Search Address Reverse]: Specify a single geographic coordinate to get its approximate address and process the request immediately.
+* [Get Search Address Reverse Cross Street]: Specify a single geographic coordinate to get nearby cross street information (for example, 1st & main) and process the request immediately.
+* [Post Search Address Reverse Batch]: Create a request containing up to 10,000 coordinates and have them processed over a period of time. All the data is processed in parallel on the server and when completed the full result set can be downloaded.
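As a sketch of the first option above, a reverse geocoding request passes the coordinate as `latitude,longitude` in the `query` parameter (the coordinate and key are placeholders):

```bash
# Reverse geocode a coordinate (Get Search Address Reverse)
curl "https://atlas.microsoft.com/search/address/reverse/json?api-version=1.0&query=47.6101,-122.2015&subscription-key={subscription-key}"
```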
The following table cross-references the Bing Maps API parameters with the comparable API parameters in Azure Maps.
The following table cross references the Bing Maps entity type values to the equ
Several of the Azure Maps search APIs support predictive mode, which can be used for autosuggest scenarios. The Azure Maps [fuzzy search] API is the most similar to the Bing Maps Autosuggest API. The following APIs also support predictive mode; add `&typeahead=true` to the query:
-* [Free-form address geocoding]: Specify a single address string (like `"1 Microsoft way, Redmond, WA"`) and process the request immediately. This service is recommended if you need to geocode individual addresses quickly.
-* [Fuzzy search]: This API combines address geocoding with point of interest search. This API takes in a free-form string that can be an address, place, landmark, point of interest, or point of interest category and process the request immediately. This API is recommended for applications where users can search for addresses or points of interest from the same textbox.
-* [POI search]: Search for points of interests by name. For example, `"starbucks"`.
-* [POI category search]: Search for points of interests by category. For example, "restaurant".
+* [Get Search Address]: Free-form address geocoding that specifies a single address string (like `"1 Microsoft way, Redmond, WA"`) and processes the request immediately. This service is recommended if you need to geocode individual addresses quickly.
+* [Get Search Fuzzy]: The fuzzy search API combines address geocoding with point of interest search. This API takes in a free-form string that can be an address, place, landmark, point of interest, or point of interest category and processes the request immediately. This API is recommended for applications where users can search for addresses or points of interest from the same textbox. A typeahead example appears after this list.
+* [Get Search POI]: The point of interest (POI) search is used to search for points of interest by name. For example, `"starbucks"`.
+* [Get Search POI Category]: The point of interest (POI) category search is used to search for points of interest by category. For example, "restaurant".
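For example, a hedged sketch of a typeahead request against the fuzzy search API (the partial query and key are placeholders):

```bash
# Fuzzy search in predictive (typeahead) mode
curl "https://atlas.microsoft.com/search/fuzzy/json?api-version=1.0&typeahead=true&query=micro&subscription-key={subscription-key}"
```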
## Calculate routes and directions
Azure Maps can be used to calculate routes and directions. Azure Maps has many o
The Azure Maps routing service provides the following APIs for calculating routes:
-* [Calculate route]: Calculate a route and have the request processed immediately. This API supports both `GET` and `POST` requests. `POST` requests are recommended when specifying a large number of waypoints or when using lots of the route options to ensure that the URL request doesnΓÇÖt become too long and cause issues.
-* [Batch route]: Create a request containing up to 1,000 route request and have them processed over a period of time. All the data is processed in parallel on the server and when completed the full result set can be downloaded.
+* [Get Route Directions]: Calculate a route and have the request processed immediately. This API supports both `GET` and `POST` requests. `POST` requests are recommended when specifying a large number of waypoints or when using many of the route options, to ensure that the URL request doesn't become too long and cause issues. An example `GET` request appears after this list.
+* [Post Route Directions Batch]: Create a request containing up to 1,000 route requests and have them processed over a period of time. All the data is processed in parallel on the server, and when completed the full result set can be downloaded.
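As an illustration, a simple `GET` route request passes the origin and destination as colon-separated `latitude,longitude` pairs (the coordinates and key are placeholders):

```bash
# Calculate a route between two points (Get Route Directions)
curl "https://atlas.microsoft.com/route/directions/json?api-version=1.0&query=47.6101,-122.2015:47.6062,-122.3321&subscription-key={subscription-key}"
```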
The following table cross-references the Bing Maps API parameters with the comparable API parameters in Azure Maps.
There are several ways to snap coordinates to roads in Azure Maps.
**Using the route direction API to snap coordinates**
-Azure Maps can snap coordinates to roads by using the [route directions] API. This service can be used to reconstruct a logical route between a set of coordinates and is comparable to the Bing Maps Snap to Road API.
+Azure Maps can snap coordinates to roads by using the [Get Route Directions] API. This service can be used to reconstruct a logical route between a set of coordinates and is comparable to the Bing Maps Snap to Road API.
There are two different ways to use the route directions API to snap coordinates to roads.
The Azure Maps vector tiles contain the raw road geometry data that can be used
## Retrieve a map image (Static Map)
-Azure Maps provides an API for rendering the static map images with data overlaid. The Azure Maps [Map image render] API is comparable to the static map API in Bing Maps.
+Azure Maps provides an API for rendering static map images with data overlaid. The Azure Maps [Get Map Static Image] API is comparable to the static map API in Bing Maps.
> [!NOTE] > Azure Maps requires the center, all pushpins and path locations to be coordinates in `longitude,latitude` format whereas Bing Maps uses the `latitude,longitude` format. Addresses will need to be geocoded first.
For more information, see [Render custom data on a raster map].
In addition to being able to generate a static map image, the Azure Maps render service also enables direct access to map tiles in raster (PNG) and vector format:
-* [Map tiles] ΓÇô Retrieve raster (PNG) and vector tiles for the base maps (roads, boundaries, background).
-* [Map imagery tile] ΓÇô Retrieve aerial and satellite imagery tiles.
+* [Get Map Tile] – Retrieve raster (PNG) and vector tiles for the base maps (roads, boundaries, background), as well as aerial and satellite imagery tiles.
### Pushpin URL parameter format comparison
For example, in Azure Maps, a blue line with 50% opacity and a thickness of four
Azure Maps provides an API for calculating the travel times and distances between a set of locations as a distance matrix. The Azure Maps distance matrix API is comparable to the distance matrix API in Bing Maps:
-* [Route matrix]: Asynchronously calculates travel times and distances for a set of origins and destinations. Up to 700 cells per request is supported (the number of origins multiplied by the number of destinations). With that constraint in mind, examples of possible matrix dimensions are: `700x1`, `50x10`, `10x10`, `28x25`, `10x70`.
+* [Post Route Matrix]: Asynchronously calculates travel times and distances for a set of origins and destinations. Up to 700 cells per request are supported (the number of origins multiplied by the number of destinations). With that constraint in mind, examples of possible matrix dimensions are: `700x1`, `50x10`, `10x10`, `28x25`, `10x70`.
> [!NOTE] > A request to the distance matrix API can only be made using a `POST` request with the origin and destination information in the body of the request. Additionally, Azure Maps requires all origins and destinations to be coordinates. Addresses will need to be geocoded first.
The following table cross-references the Bing Maps API parameters with the compa
Point of interest data can be searched in Bing Maps by using the following APIs:
-* **Local search**: Searches for points of interest that are nearby (radial search), by name, or by entity type (category). The Azure Maps [POI search] and [POI category search] APIs are most like this API.
+* **Local search**: Searches for points of interest that are nearby (radial search), by name, or by entity type (category). The Azure Maps [Get Search POI] and [Get Search POI Category] APIs are most like this API.
* **Location recognition**: Searches for points of interest that are within a certain distance of a location. The Azure Maps [nearby search] API is most like this API. * **Local insights**: Searches for points of interest that are within a specified maximum driving time or distance from a specific coordinate. This is achievable with Azure Maps by first calculating an isochrone and then passing it into the [Search within geometry] API. Azure Maps provides several search APIs for points of interest:
-* [POI search]: Search for points of interests by name. For example, `"starbucks"`.
-* [POI category search]: Search for points of interests by category. For example, "restaurant".
-* [Search within geometry]: Searches for points of interests that are within a certain distance of a location.
-* [Fuzzy search]: This API combines address geocoding with point of interest search. This API takes in a free-form string that can be an address, place, landmark, point of interest, or point of interest category and process the request immediately. This API is recommended for applications where users can search for addresses or points of interest from the same textbox.
-* [Search within geometry]: Search for points of interests that are within a specified geometry (polygon).
-* [Search along route]: Search for points of interests that are along a specified route path.
-* [Fuzzy batch search]: Create a request containing up to 10,000 addresses, places, landmarks, or point of interests and have them processed over a period of time. All the data is processed in parallel on the server and when completed the full result set can be downloaded.
+* [Get Search POI]: The point of interest (POI) search is used to search for points of interest by name. For example, `"starbucks"`. An example request appears after this list.
+* [Get Search POI Category]: The point of interest (POI) category search is used to search for points of interest by category. For example, "restaurant".
+* [Post Search Inside Geometry]: Searches for points of interest that are within a certain distance of a location or within a specified geometry (polygon).
+* [Get Search Fuzzy]: The fuzzy search API combines address geocoding with point of interest search. This API takes in a free-form string that can be an address, place, landmark, point of interest, or point of interest category and processes the request immediately. This API is recommended for applications where users can search for addresses or points of interest from the same textbox.
+* [Post Search Along Route]: Search for points of interest that are along a specified route path.
+* [Post Search Fuzzy Batch]: Use the fuzzy batch search to create a request containing up to 10,000 addresses, places, landmarks, or points of interest and have them processed over a period of time. All the data is processed in parallel on the server, and when completed the full result set can be downloaded.
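For illustration, a hedged sketch of a POI search by name around a coordinate (the coordinates, radius, and key are placeholders):

```bash
# Search for points of interest by name around a coordinate (Get Search POI)
curl "https://atlas.microsoft.com/search/poi/json?api-version=1.0&query=starbucks&lat=47.6062&lon=-122.3321&radius=5000&subscription-key={subscription-key}"
```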
For more information on searching in Azure Maps, see [Best practices for Azure Maps Search service].
Bing Maps provides traffic flow and incident data in its interactive map control
Traffic data is also integrated into the Azure Maps interactive map controls. Azure maps also provides the following traffic services APIs:
-* [Traffic flow segments]: Provides information about the speeds and travel times of the road fragment closest to the given coordinates.
-* [Traffic flow tiles]: Provides raster and vector tiles containing traffic flow data. These
+* [Get Traffic Flow Segment]: Provides information about the speeds and travel times of the road fragment closest to the given coordinates.
+* [Get Traffic Flow Tile]: Provides raster and vector tiles containing traffic flow data. These
can be used with the Azure Maps controls or in third-party map controls such as Leaflet. The vector tiles can also be used for advanced data analysis.
-* [Traffic incident details]: Provides traffic incident details that are within a bounding box, zoom level, and traffic model.
-* [Traffic incident tiles]: Provides raster and vector tiles containing traffic incident data.
-* [Traffic incident viewport]: Retrieves the legal and technical information for the viewport described in the request, such as the traffic model ID.
+* [Get Traffic Incident Detail]: Provides traffic incident details that are within a bounding box, zoom level, and traffic model.
+* [Get Traffic Incident Tile]: Provides raster and vector tiles containing traffic incident data.
+* [Get Traffic Incident Viewport]: Retrieves the legal and technical information for the viewport described in the request, such as the traffic model ID.
The following table cross-references the Bing Maps traffic API parameters with the comparable traffic incident details API parameters in Azure Maps.
The following table cross-references the Bing Maps traffic API parameters with t
Azure Maps provides an API for retrieving the time zone a coordinate is in. The Azure Maps time zone API is comparable to the time zone API in Bing Maps.
-* [Time zone by coordinate]: Specify a coordinate and get the details for the time zone it falls in.
+* [Get Timezone By Coordinates]: Specify a coordinate and get the details for the time zone it falls in.
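A minimal request sketch (the coordinate and key are placeholders):

```bash
# Get time zone details for a coordinate (Get Timezone By Coordinates)
curl "https://atlas.microsoft.com/timezone/byCoordinates/json?api-version=1.0&query=47.6062,-122.3321&subscription-key={subscription-key}"
```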
The following table cross-references the Bing Maps API parameters with the comparable API parameters in Azure Maps.
The following table cross-references the Bing Maps API parameters with the compa
In addition to this the Azure Maps platform also provides many other time zone APIs to help with conversions with time zone names and IDs:
-* [Time zone by ID]: Returns current, historical, and future time zone information for the specified IANA time zone ID.
-* [Time zone Enum IANA]: Returns a full list of IANA time zone IDs. Updates to the IANA service are reflected in the system within one day.
-* [Time zone Enum Windows]: Returns a full list of Windows Time Zone IDs.
-* [Time zone IANA version]: Returns the current IANA version number used by Azure Maps.
-* [Time zone Windows to IANA]: Returns a corresponding IANA ID, given a valid Windows Time Zone ID. Multiple IANA IDs may be returned for a single Windows ID.
+* [Get Timezone By ID]: Returns current, historical, and future time zone information for the specified IANA time zone ID.
+* [Get Timezone Enum IANA]: Returns a full list of IANA time zone IDs. Updates to the IANA service are reflected in the system within one day.
+* [Get Timezone Enum Windows]: Returns a full list of Windows Time Zone IDs.
+* [Get Timezone IANA Version]: Returns the current IANA version number used by Azure Maps.
+* [Get Timezone Windows To IANA]: Returns a corresponding IANA ID, given a valid Windows Time Zone ID. Multiple IANA IDs may be returned for a single Windows ID.
## Spatial Data Services (SDS)
Another option for geocoding a large number of addresses with Azure Maps is to make
> > Gen1 pricing tier is now deprecated and will be retired on 9/15/26. Gen2 pricing tier replaces Gen1 (both S0 and S1). If your Azure Maps account has Gen1 pricing tier selected, you can switch to Gen2 pricing tier before itΓÇÖs retired, otherwise it will automatically be updated. For more information on the Gen1 pricing tier retirement, see [Manage the pricing tier of your Azure Maps account].
-* [Free-form address geocoding]: Specify a single address string (like `"1 Microsoft way, Redmond, WA"`) and process the request immediately. This service is recommended if you need to geocode individual addresses quickly.
-* [Structured address geocoding]: Specify the parts of a single address, such as the street name, city, country/region, and postal code and process the request immediately. This service is recommended if you need to geocode individual addresses quickly and the data is already parsed into its individual address parts.
-* [Batch address geocoding]: Create a request containing up to 10,000 addresses and have them processed over a period of time. All the addresses are geocoded in parallel on the server and when completed the full result set can be downloaded. This service is recommended for geocoding large data sets.
-* [Fuzzy search]: This API combines address geocoding with point of interest search. This API takes in a free-form string that can be an address, place, landmark, point of interest, or point of interest category and process the request immediately. This API is recommended for applications where users can search for addresses or points of interest from the same textbox.
-* **[Fuzzy batch search]**: Create a request containing up to 10,000 addresses, places, landmarks, or point of interests and have them processed over a period of time. All the data is processed in parallel on the server and when completed the full result set can be downloaded.
+* [Get Search Address]: Free-form address geocoding is used to specify a single address string (like `"1 Microsoft way, Redmond, WA"`) and processes the request immediately. This service is recommended if you need to geocode individual addresses quickly.
+* [Get Search Address Structured]: Structured address geocoding is used to specify the parts of a single address, such as the street name, city, country/region, and postal code, and processes the request immediately. This service is recommended if you need to geocode individual addresses quickly and the data is already parsed into its individual address parts.
+* [Post Search Address Batch]: Use batch address geocoding to create a request containing up to 10,000 addresses and have them processed over a period of time. All the addresses are geocoded in parallel on the server, and when completed the full result set can be downloaded. This service is recommended for geocoding large data sets.
+* [Get Search Fuzzy]: The fuzzy search API combines address geocoding with point of interest search. This API takes in a free-form string that can be an address, place, landmark, point of interest, or point of interest category and processes the request immediately. This API is recommended for applications where users can search for addresses or points of interest from the same textbox.
+* [Post Search Fuzzy Batch]: Use the fuzzy batch search to create a request containing up to 10,000 addresses, places, landmarks, or points of interest and have them processed over a period of time. All the data is processed in parallel on the server, and when completed the full result set can be downloaded.
### Get administrative boundary data
To recap:
1. Pass a query for the boundary you want to receive into one of the following search APIs.
- * [Free-form address geocoding]
- * [Structured address geocoding]
- * [Batch address geocoding]
- * [Fuzzy search]
- * [Fuzzy batch search]
+ * [Get Search Address] (Free-form address geocoding)
+ * [Get Search Address Structured] (Structured address geocoding)
+ * [Post Search Address Batch] (Batch address geocoding)
+ * [Get Search Fuzzy] (Fuzzy search)
+ * [Post Search Fuzzy Batch] (Fuzzy batch search)
-1. If the desired result(s) has a geometry ID(s), pass it into the [Search Polygon API].
+1. If the desired results include a geometry ID, pass it into the [Get Search Polygon] API.
### Host and query spatial business data
No resources to be cleaned up.
Learn more about the Azure Maps REST services. > [!div class="nextstepaction"]
-> [Best practices for using the search service](how-to-use-best-practices-for-search.md)
+> [Best practices for Azure Maps Search service]
-[Address reverse geocoder]: /rest/api/maps/search/getsearchaddressreverse
[Authentication with Azure Maps]: azure-maps-authentication.md [Azure Cosmos DB geospatial capabilities overview]: ../cosmos-db/sql-query-geospatial-intro.md [Azure Maps account]: quick-demo-map-app.md#create-an-azure-maps-account
Learn more about the Azure Maps REST services.
[Azure SQL Spatial ΓÇô Query nearest neighbor]: /sql/relational-databases/spatial/query-spatial-data-for-nearest-neighbor [Azure SQL Spatial Data Types overview]: /sql/relational-databases/spatial/spatial-data-types-overview [Basic snap to road logic]: https://samples.azuremaps.com/?sample=basic-snap-to-road-logic
-[Batch address geocoding]: /rest/api/maps/search/postsearchaddressbatchpreview
-[Batch address reverse geocoder]: /rest/api/maps/search/postsearchaddressreversebatchpreview
-[Batch route]: /rest/api/maps/route/postroutedirectionsbatchpreview
-[Batch routing]: /rest/api/maps/route/postroutedirectionsbatchpreview
[Best practices for Azure Maps Route service]: how-to-use-best-practices-for-routing.md [Best practices for Azure Maps Search service]: how-to-use-best-practices-for-search.md
-[Calculate route]: /rest/api/maps/route/getroutedirections
-[Cross street reverse geocoder]: /rest/api/maps/search/getsearchaddressreversecrossstreet
[free account]: https://azure.microsoft.com/free/
-[Free-form address geocoding]: /rest/api/maps/search/getsearchaddress
-[Fuzzy batch search]: /rest/api/maps/search/postsearchfuzzybatchpreview
-[Fuzzy search]: /rest/api/maps/search/getsearchfuzzy
+[fuzzy search]: /rest/api/maps/search/get-search-fuzzy
[Geolocation API]: /rest/api/maps/geolocation/get-ip-to-location
+[Get Map Static Image]: /rest/api/maps/render-v2/get-map-static-image
+[Get Map Tile]: /rest/api/maps/render-v2/get-map-tile
+[Get Route Directions]: /rest/api/maps/route/get-route-directions
+[Get Route Range]: /rest/api/maps/route/get-route-range
+[Get Search Address Reverse Cross Street]: /rest/api/maps/search/get-search-address-reverse-cross-street
+[Get Search Address Reverse]: /rest/api/maps/search/get-search-address-reverse
+[Get Search Address Structured]: /rest/api/maps/search/get-search-address-structured
+[Get Search Address]: /rest/api/maps/search/get-search-address
+[Get Search Fuzzy]: /rest/api/maps/search/get-search-fuzzy
+[Get Search POI Category]: /rest/api/maps/search/get-search-poi-category
+[Get Search POI]: /rest/api/maps/search/get-search-poi
+[Get Search Polygon]: /rest/api/maps/search/get-search-polygon
+[Get Timezone By Coordinates]: /rest/api/maps/timezone/get-timezone-by-coordinates
+[Get Timezone By ID]: /rest/api/maps/timezone/get-timezone-by-id
+[Get Timezone Enum IANA]: /rest/api/maps/timezone/get-timezone-enum-iana
+[Get Timezone Enum Windows]: /rest/api/maps/timezone/get-timezone-enum-windows
+[Get Timezone IANA Version]: /rest/api/maps/timezone/get-timezone-iana-version
+[Get Timezone Windows To IANA]: /rest/api/maps/timezone/get-timezone-windows-to-iana
+[Get Traffic Flow Segment]: /rest/api/maps/traffic/get-traffic-flow-segment
+[Get Traffic Flow Tile]: /rest/api/maps/traffic/get-traffic-flow-tile
+[Get Traffic Incident Detail]: /rest/api/maps/traffic/get-traffic-incident-detail
+[Get Traffic Incident Tile]: /rest/api/maps/traffic/get-traffic-incident-tile
+[Get Traffic Incident Viewport]: /rest/api/maps/traffic/get-traffic-incident-viewport
[Localization support in Azure Maps]: supported-languages.md
-[manage authentication in Azure Maps]: how-to-manage-authentication.md
+[Manage authentication in Azure Maps]: how-to-manage-authentication.md
[Manage the pricing tier of your Azure Maps account]: how-to-manage-pricing-tier.md
-[Map image render]: /rest/api/maps/render/getmapimagerytile
-[Map imagery tile]: /rest/api/maps/render/getmapimagerytile
-[Map Tiles]: /rest/api/maps/render-v2/get-map-tile
[nearby search]: /rest/api/maps/search/getsearchnearby [NetTopologySuite]: https://github.com/NetTopologySuite/NetTopologySuite
-[POI category search]: /rest/api/maps/search/get-search-poi-category
-[POI search]: /rest/api/maps/search/get-search-poi
-[POST Route directions]: /rest/api/maps/route/postroutedirections
+[Post Route Directions Batch]: /rest/api/maps/route/post-route-directions-batch
+[Post Route Directions]: /rest/api/maps/route/post-route-directions
+[Post Route Matrix]: /rest/api/maps/route/post-route-matrix
+[Post Search Address Batch]: /rest/api/maps/search/post-search-address-batch
+[Post Search Address Reverse Batch]: /rest/api/maps/search/post-search-address-reverse-batch
+[Post Search Along Route]: /rest/api/maps/search/post-search-along-route
+[Post Search Fuzzy Batch]: /rest/api/maps/search/post-search-fuzzy-batch
+[Post Search Inside Geometry]: /rest/api/maps/search/post-search-inside-geometry
[quadtree tile pyramid math]: zoom-levels-and-tile-grid.md [Render custom data on a raster map]: how-to-render-custom-data.md
-[Render]: /rest/api/maps/render-v2/get-map-static-image
-[Route directions]: /rest/api/maps/route/getroutedirections
-[Route Matrix]: /rest/api/maps/route/postroutematrixpreview
-[Route Range]: /rest/api/maps/route/getrouterange
[Route]: /rest/api/maps/route
-[Search along route]: /rest/api/maps/search/postsearchalongroute
[Search for a location using Azure Maps Search services]: how-to-search-for-address.md
-[Search Polygon API]: /rest/api/maps/search/getsearchpolygon
-[Search within geometry]: /rest/api/maps/search/postsearchinsidegeometry
+[Search within geometry]: /rest/api/maps/search/post-search-inside-geometry
[Search]: /rest/api/maps/search [Snap points to logical route path]: https://samples.azuremaps.com/?sample=snap-points-to-logical-route-path [Spatial operations]: /rest/api/maps/spatial
-[Structured address geocoding]: /rest/api/maps/search/getsearchaddressstructured
[subscription key]: quick-demo-map-app.md#get-the-subscription-key-for-your-account [Supported map styles]: supported-map-styles.md
-[Time zone by coordinate]: /rest/api/maps/timezone/gettimezonebycoordinates
-[Time zone by ID]: /rest/api/maps/timezone/gettimezonebyid
-[Time zone Enum IANA]: /rest/api/maps/timezone/gettimezoneenumiana
-[Time zone Enum Windows]: /rest/api/maps/timezone/gettimezoneenumwindows
-[Time zone IANA version]: /rest/api/maps/timezone/gettimezoneianaversion
-[Time zone Windows to IANA]: /rest/api/maps/timezone/gettimezonewindowstoiana
-[Time Zone]: /rest/api/maps/timezone
-[Traffic flow segments]: /rest/api/maps/traffic/gettrafficflowsegment
-[Traffic flow tiles]: /rest/api/maps/traffic/gettrafficflowtile
-[Traffic incident details]: /rest/api/maps/traffic/gettrafficincidentdetail
-[Traffic incident tiles]: /rest/api/maps/traffic/gettrafficincidenttile
-[Traffic incident viewport]: /rest/api/maps/traffic/gettrafficincidentviewport
+[Timezone]: /rest/api/maps/timezone
[Traffic]: /rest/api/maps/traffic [turf js]: https://turfjs.org [Weather services]: /rest/api/maps/weather
azure-maps Migrate From Google Maps Web App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/migrate-from-google-maps-web-app.md
Title: 'Tutorial - Migrate a web app from Google Maps to Microsoft Azure Maps'
description: Tutorial on how to migrate a web app from Google Maps to Microsoft Azure Maps Previously updated : 12/07/2020 Last updated : 09/28/2023
Also:
> * Best practices to improve performance and user experience. > * Tips on how to make your application use more advanced features available in Azure Maps.
-If migrating an existing web application, check to see if it's using an open-source map control library. Examples of open-source map control library are: Cesium, Leaflet, and OpenLayers. You can still migrate your application, even if it uses an open-source map control library, and you don't want to use the Azure Maps Web SDK. In such case, connect your application to the Azure Maps tile services ([road tiles]
-\| [satellite tiles]). The following points detail on how to use Azure Maps in some commonly used open-source map control libraries.
+If migrating an existing web application, check to see if it's using an open-source map control library. Examples of open-source map control libraries are Cesium, Leaflet, and OpenLayers. You can still migrate your application even if it uses an open-source map control library and you don't want to use the Azure Maps Web SDK. In that case, connect your application to the Azure Maps [Render] services ([road tiles] | [satellite tiles]). The following points detail how to use Azure Maps in some commonly used open-source map control libraries.
* Cesium - A 3D map control for the web. [Cesium documentation]. * Leaflet – Lightweight 2D map control for the web. [Leaflet code sample] \| [Leaflet documentation].
If migrating an existing web application, check to see if it's using an open-sou
If developing using a JavaScript framework, one of the following open-source projects may be useful:
-* [ng-azure-maps] - Angular 10 wrapper around Azure maps.
+* [ng-azure-maps] - Angular 10 wrapper around Azure Maps.
* [AzureMapsControl.Components] - An Azure Maps Blazor component. * [Azure Maps React Component] - A React wrapper for the Azure Maps control. * [Vue Azure Maps] - An Azure Maps component for Vue applications.
If you don't have an Azure subscription, create a [free account] before you begi
* A [subscription key] > [!NOTE]
-> For more information on authentication in Azure Maps, see [manage authentication in Azure Maps].
+> For more information on authentication in Azure Maps, see [Manage authentication in Azure Maps].
## Key features support
The following are some key differences between the Google Maps and Azure Maps We
* You first need to create an instance of the Map class in Azure Maps. Wait for the map's `ready` or `load` event to fire before programmatically interacting with the map. This order ensures that all the map resources have been loaded and are ready to be accessed. * Both platforms use a similar tiling system for the base maps. The tiles in Google Maps are 256 pixels in dimension; however, the tiles in Azure Maps are 512 pixels in dimension. To get the same map view in Azure Maps as in Google Maps, subtract one from the Google Maps zoom level. * Coordinates in Google Maps are referred to as `latitude,longitude`, while Azure Maps uses `longitude,latitude`. The Azure Maps format is aligned with the standard `[x, y]`, which is followed by most GIS platforms.
-* Shapes in the Azure Maps Web SDK are based on the GeoJSON schema. Helper classes are exposed through the [*atlas.data* namespace]. There's also the [*atlas.Shape*] class. Use this class to wrap GeoJSON objects, to make it easy to update and maintain the data bindable way.
+* Shapes in the Azure Maps Web SDK are based on the GeoJSON schema. Helper classes are exposed through the [atlas.data] namespace. There's also the [atlas.Shape] class. Use this class to wrap GeoJSON objects so that they're easy to update and maintain in a data-bindable way.
* Coordinates in Azure Maps are defined as Position objects. A coordinate is specified as a number array in the format `[longitude,latitude]`. Or, it's specified using new atlas.data.Position(longitude, latitude). > [!TIP] > The Position class has a static helper method for importing coordinates that are in "latitude, longitude" format. The [atlas.data.Position.fromLatLng] method can often be replaced with the `new google.maps.LatLng` method in Google Maps code.
Both SDKs have the same steps to load a map:
**Some key differences**
-* Google maps requires an account key to be specified in the script reference of the API. Authentication credentials for Azure Maps are specified as options of the map class. This credential can be a subscription key or Azure Active Directory information.
+* Google Maps requires an account key to be specified in the script reference of the API. Authentication credentials for Azure Maps are specified as options of the map class. This credential can be a subscription key or Azure Active Directory information.
* Google Maps accepts a callback function in the script reference of the API, which is used to call an initialization function to load the map. With Azure Maps, the onload event of the page should be used. * When referencing the `div` element in which the map renders, the `Map` class in Azure Maps only requires the `id` value while Google Maps requires a `HTMLElement` object. * Coordinates in Azure Maps are defined as Position objects, which can be specified as a simple number array in the format `[longitude, latitude]`.
Both SDKs have the same steps to load a map:
* Azure Maps doesn't add any navigation controls to the map canvas. So, by default, a map doesn't have zoom buttons and map style buttons. But, there are control options for adding a map style picker, zoom buttons, compass or rotation control, and a pitch control. * An event handler is added in Azure Maps to monitor the `ready` event of the map instance. This event fires when the map has finished loading the WebGL context and all the needed resources. Add any code you want to run after the map completes loading, to this event handler.
-The basic examples below uses Google Maps to load a map centered over New York at coordinates. The longitude: -73.985, latitude: 40.747, and the map is at zoom level of 12.
+The following examples use Google Maps to load a map centered over New York at longitude -73.985 and latitude 40.747, at a zoom level of 12.
#### Before: Google Maps
Running this code in a browser displays a map that looks like the following imag
For more information on how to set up and use the Azure Maps map control in a web app, see [Use the Azure Maps map control]. > [!NOTE]
-> Unlike Google Maps, Azure Maps does not require an initial center and a zoom level to load the map. If this information is not provided when loading the map, Azure maps will try to determine city of the user. It will center and zoom the map there.
+> Unlike Google Maps, Azure Maps does not require an initial center and a zoom level to load the map. If this information is not provided when loading the map, Azure Maps will try to determine the city of the user and will center and zoom the map there.
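For reference, here's a minimal sketch of loading a map with the Azure Maps Web SDK. The `myMap` element id and the placeholder subscription key are assumptions for illustration:

```javascript
// Minimal sketch: load an Azure Maps map (assumes a <div id="myMap"> and the atlas library on the page).
var map = new atlas.Map('myMap', {
    center: [-73.985, 40.747],   // [longitude, latitude]
    zoom: 11,                    // One level lower than the equivalent Google Maps zoom of 12.
    authOptions: {
        authType: 'subscriptionKey',
        subscriptionKey: '<Your-Azure-Maps-Subscription-Key>'
    }
});

// Wait for the map resources to be ready before interacting with the map.
map.events.add('ready', function () {
    // Add controls, sources, and layers here.
});
```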
**More resources:**
map.events.add('click', marker, function () {
Google Maps supports loading and dynamically styling GeoJSON data via the `google.maps.Data` class. The functionality of this class aligns more with the data-driven styling of Azure Maps. But, there's a key difference. With Google Maps, you specify a callback function. The business logic for styling each feature is processed individually in the UI thread. But in Azure Maps, layers support specifying data-driven expressions as styling options. These expressions are processed at render time on a separate thread. The Azure Maps approach improves rendering performance. This advantage is noticed when larger data sets need to be rendered quickly.
-The following examples load a GeoJSON feed of all earthquakes over the last seven days from the USGS. Earthquakes data renders as scaled circles on the map. The color and scale of each circle is based on the magnitude of each earthquake, which is stored in the `"mag"` property of each feature in the data set. If the magnitude is greater than or equal to five, the circle is red. If it's greater or equal to three, but less than five, the circle is orange. If it's less than three, the circle is green. The radius of each circle will be the exponential of the magnitude multiplied by 0.1.
+The following examples load a GeoJSON feed of all earthquakes over the last seven days from the USGS. Earthquake data renders as scaled circles on the map. The color and scale of each circle is based on the magnitude of each earthquake, which is stored in the `"mag"` property of each feature in the data set. If the magnitude is greater than or equal to five, the circle is red. If it's greater than or equal to three, but less than five, the circle is orange. If it's less than three, the circle is green. The radius of each circle is the exponential of the magnitude multiplied by 0.1.
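As a rough sketch of the Azure Maps side of this scenario, the logic above maps to data-driven expressions on a bubble layer. The USGS feed URL and the expression details follow the description above and are illustrative, not the article's full example:

```javascript
// Minimal sketch: scaled, colored circles driven by the "mag" property of each earthquake feature.
map.events.add('ready', function () {
    var datasource = new atlas.source.DataSource();
    map.sources.add(datasource);
    datasource.importDataFromUrl('https://earthquake.usgs.gov/earthquakes/feed/v1.0/summary/all_week.geojson');

    map.layers.add(new atlas.layer.BubbleLayer(datasource, null, {
        // Red for magnitude >= 5, orange for >= 3, green otherwise.
        color: [
            'case',
            ['>=', ['get', 'mag'], 5], 'red',
            ['>=', ['get', 'mag'], 3], 'orange',
            'green'
        ],
        // Radius is the exponential of the magnitude multiplied by 0.1.
        radius: ['*', 0.1, ['^', 2.71828, ['get', 'mag']]]
    }));
});
```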
#### Before: Google Maps
GeoJSON is the base data type in Azure Maps. Import it into a data source using
### Marker clustering
-When visualizing many data points on the map, points may overlap each other. Overlapping makes the map looks cluttered, and the map becomes difficult to read and use. Clustering point data is the process of combining data points that are near each other and representing them on the map as a single clustered data point. As the user zooms into the map, the clusters break apart into their individual data points. Cluster data points to improve user experience and map performance.
+When visualizing many data points on the map, points may overlap each other. Overlapping makes the map look cluttered, and the map becomes difficult to read and use. Clustering point data is the process of combining data points that are near each other and representing them on the map as a single clustered data point. As the user zooms into the map, the clusters break apart into their individual data points. Cluster data points to improve user experience and map performance.
In the following examples, the code loads a GeoJSON feed of earthquake data from the past week and adds it to the map. Clusters are rendered as scaled and colored circles. The scale and color of the circles depends on the number of points they contain.
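A minimal sketch of the clustering setup described above follows; the cluster radius and step thresholds are illustrative values:

```javascript
// Minimal sketch: cluster point data and render clusters as scaled, colored bubbles.
var datasource = new atlas.source.DataSource(null, {
    cluster: true,        // Combine nearby points into clusters.
    clusterRadius: 45,    // Pixel radius used when grouping points (illustrative value).
    clusterMaxZoom: 15    // Stop clustering at high zoom levels so individual points appear.
});
map.sources.add(datasource);
datasource.importDataFromUrl('https://earthquake.usgs.gov/earthquakes/feed/v1.0/summary/all_week.geojson');

map.layers.add(new atlas.layer.BubbleLayer(datasource, null, {
    // Scale and color the circles by the number of points in each cluster.
    radius: ['step', ['get', 'point_count'], 15, 100, 25, 750, 35],
    color: ['step', ['get', 'point_count'], 'green', 100, 'orange', 750, 'red'],
    filter: ['has', 'point_count']   // Render only clustered points in this layer.
}));
```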
map.layers.add(new atlas.layer.TileLayer({
### Show traffic data
-Traffic data can be overlaid both Azure and Google maps.
+Traffic data can be overlaid on both Azure and Google Maps.
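In the Azure Maps Web SDK, a minimal sketch of turning on the traffic overlay looks like the following; the option values shown are illustrative:

```javascript
// Minimal sketch: overlay traffic flow and incidents on an existing map instance.
map.setTraffic({
    flow: 'relative',   // Color roads relative to free-flow speed ('absolute', 'relative-delay', or 'none' are other options).
    incidents: true     // Show incident icons such as accidents and construction.
});
```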
#### Before: Google Maps
If you select one of the traffic icons in Azure Maps, more information is displa
### Add a ground overlay
-Both Azure and Google maps support overlaying georeferenced images on the map. Georeferenced images move and scale as you pan and zoom the map. In Google Maps, georeferenced images are known as ground overlays while in Azure Maps they're referred to as image layers. They're great for building floor plans, overlaying old maps, or imagery from a drone.
+Both Azure and Google Maps support overlaying georeferenced images on the map. Georeferenced images move and scale as you pan and zoom the map. In Google Maps, georeferenced images are known as ground overlays while in Azure Maps they're referred to as image layers. They're great for building floor plans, overlaying old maps, or imagery from a drone.
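A minimal Azure Maps sketch of an image layer follows; the image URL and corner coordinates are placeholders:

```javascript
// Minimal sketch: overlay a georeferenced image using its four corner coordinates.
map.layers.add(new atlas.layer.ImageLayer({
    url: 'https://example.com/image-to-overlay.png',   // Placeholder image URL.
    coordinates: [
        [-74.22655, 40.773941],   // Top-left [longitude, latitude]
        [-74.12544, 40.773941],   // Top-right
        [-74.12544, 40.712216],   // Bottom-right
        [-74.22655, 40.712216]    // Bottom-left
    ]
}));
```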
#### Before: Google Maps
Use the `atlas.layer.ImageLayer` class to overlay georeferenced images. This cla
### Add KML data to the map
-Both Azure and Google maps can import and render KML, KMZ and GeoRSS data on the map. Azure Maps also supports GPX, GML, spatial CSV files, GeoJSON, Well Known Text (WKT), Web-Mapping Services (WMS), Web-Mapping Tile Services (WMTS), and Web Feature Services (WFS). Azure Maps reads the files locally into memory and in most cases can handle larger KML files.
+Both Azure and Google Maps can import and render KML, KMZ, and GeoRSS data on the map. Azure Maps also supports GPX, GML, spatial CSV files, GeoJSON, Well Known Text (WKT), Web Map Services (WMS), Web Map Tile Services (WMTS), and Web Feature Services (WFS). Azure Maps reads the files locally into memory and in most cases can handle larger KML files.
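As a minimal sketch, KML can be read with the Azure Maps spatial IO module (`atlas.io.read`) and rendered with a simple data layer. The KML URL here is a placeholder, and the spatial IO module must be referenced in addition to the core Web SDK:

```javascript
// Minimal sketch: load a KML file into a data source and render it (requires the azure-maps-spatial-io module).
map.events.add('ready', function () {
    var datasource = new atlas.source.DataSource();
    map.sources.add(datasource);
    map.layers.add(new atlas.layer.SimpleDataLayer(datasource));

    atlas.io.read('https://example.com/data.kml').then(function (result) {
        if (result) {
            datasource.add(result);   // Add the parsed KML features to the data source.
        }
    });
});
```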
#### Before: Google Maps
Learn more about migrating to Azure Maps:
> [!div class="nextstepaction"] > [Migrate a web service]
-[*atlas.data* namespace]: /javascript/api/azure-maps-control/atlas.data
-[*atlas.Shape*]: /javascript/api/azure-maps-control/atlas.shape
+[atlas.data]: /javascript/api/azure-maps-control/atlas.data
+[atlas.Shape]: /javascript/api/azure-maps-control/atlas.shape
[`atlas.layer.ImageLayer.getCoordinatesFromEdges`]: /javascript/api/azure-maps-control/atlas.layer.imagelayer#getcoordinatesfromedges-number--number--number--number--number- [Add a Bubble layer]: map-add-bubble-layer.md [Add a circle to the map]: map-add-shape.md#add-a-circle-to-the-map
Learn more about migrating to Azure Maps:
[Load a map]: #load-a-map [Localization support in Azure Maps]: supported-languages.md [Localizing the map]: #localizing-the-map
-[manage authentication in Azure Maps]: how-to-manage-authentication.md
+[Manage authentication in Azure Maps]: how-to-manage-authentication.md
[Marker clustering]: #marker-clustering [Migrate a web service]: migrate-from-google-maps-web-services.md [ng-azure-maps]: https://github.com/arnaudleclerc/ng-azure-maps
Learn more about migrating to Azure Maps:
[Popup options]: /javascript/api/azure-maps-control/atlas.popupoptions [Popup with Media Content]: https://samples.azuremaps.com/?sample=popup-with-media-content [Popups on Shapes]: https://samples.azuremaps.com/?sample=popups-on-shapes
+[Render]:  /rest/api/maps/render-v2
[Reusing Popup with Multiple Pins]: https://samples.azuremaps.com/?sample=reusing-popup-with-multiple-pins [road tiles]: /rest/api/maps/render-v2/get-map-tile
-[satellite tiles]: /rest/api/maps/render/getmapimagerytile
+[satellite tiles]: /rest/api/maps/render-v2/get-map-static-image
[Search Autosuggest with JQuery UI]: https://samples.azuremaps.com/?sample=search-autosuggest-and-jquery-ui [Search for points of interest]: map-search-location.md [Setting the map view]: #setting-the-map-view
azure-maps Migrate From Google Maps Web Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/migrate-from-google-maps-web-services.md
Title: 'Tutorial - Migrate web services from Google Maps | Microsoft Azure Maps'
description: Tutorial on how to migrate web services from Google Maps to Microsoft Azure Maps Previously updated : 06/23/2021 Last updated : 09/28/2023
The table shows the Azure Maps service APIs, which have a similar functionality
| Google Maps service API | Azure Maps service API | |-|| | Directions | [Route] |
-| Distance Matrix | [Route Matrix] |
+| Distance Matrix | [Post Route Matrix] |
| Geocoding | [Search] | | Places Search | [Search] | | Place Autocomplete | [Search] | | Snap to Road | See [Calculate routes and directions] section. | | Speed Limits | See [Reverse geocode a coordinate] section. | | Static Map | [Render] |
-| Time Zone | [Time Zone] |
+| Time Zone | [Timezone] |
The following service APIs aren't currently available in Azure Maps: * Geolocation - Azure Maps does have a service called Geolocation, but it only provides IP address to location information and doesn't currently support cell tower or WiFi triangulation. * Places details and photos - Phone numbers and website URLs are available in the Azure Maps search API. * Map URLs
-* Nearest Roads - This is achievable using the Web SDK as demonstrated in the [Basic snap to road logic] sample, but is not currently available as a service.
+* Nearest Roads - Achievable using the Web SDK as demonstrated in the [Basic snap to road logic] sample, but not currently available as a service.
* Static street view Azure Maps has several other REST web services that may be of interest:
If you don't have an Azure subscription, create a [free account] before you begi
* A [subscription key] > [!NOTE]
-> For more information on authentication in Azure Maps, see [manage authentication in Azure Maps].
+> For more information on authentication in Azure Maps, see [Manage authentication in Azure Maps].
## Geocoding addresses
Geocoding is the process of converting an address into a coordinate. For example
Azure Maps provides several methods for geocoding addresses:
-* **[Free-form address geocoding]**: Specify a single address string and process the request immediately. "1 Microsoft way, Redmond, WA" is an example of a single address string. This API is recommended if you need to geocode individual addresses quickly.
-* **[Structured address geocoding]**: Specify the parts of a single address, such as the street name, city, country/region, and postal code and process the request immediately. This API is recommended if you need to geocode individual addresses quickly and the data is already parsed into its individual address parts.
-* **[Batch address geocoding]**: Create a request containing up to 10,000 addresses and have them processed over a period of time. All the addresses are geocoded in parallel on the server and when completed the full result set can be downloaded. This is recommended for geocoding large data sets.
-* **[Fuzzy search]**: This API combines address geocoding with point of interest search. This API takes in a free-form string. This string can be an address, place, landmark, point of interest, or point of interest category. This API process the request near real time. This API is recommended for applications where users search for addresses or points of interest in the same textbox.
-* **[Fuzzy batch search]**: Create a request containing up to 10,000 addresses, places, landmarks, or point of interests and have them processed over a period of time. All the data is processed in parallel on the server and when completed the full result set can be downloaded.
+* [Get Search Address]: Free-form address geocoding is used to specify a single address string (like `"1 Microsoft Way, Redmond, WA"`) and processes the request immediately (see the example request after this list). This service is recommended if you need to geocode individual addresses quickly.
+* [Get Search Address Structured]: Specify the parts of a single address, such as the street name, city, country/region, and postal code and process the request immediately. This API is recommended if you need to geocode individual addresses quickly and the data is already parsed into its individual address parts.
+* [Post Search Address Batch]: Create a request containing up to 10,000 addresses and have them processed over a period of time. All the addresses are geocoded in parallel on the server and when completed the full result set can be downloaded. This is recommended for geocoding large data sets.
+* [Get Search Fuzzy]: The fuzzy search API combines address geocoding with point of interest search. This API takes in a free-form string that can be an address, place, landmark, point of interest, or point of interest category and processes the request immediately. This API is recommended for applications where users can search for addresses or points of interest from the same textbox.
+* [Post Search Fuzzy Batch]: Use the fuzzy batch search to create a request containing up to 10,000 addresses, places, landmarks, or points of interest and have them processed over a period of time. All the data is processed in parallel on the server and when completed the full result set can be downloaded.
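For example, a free-form geocoding request to [Get Search Address] might look like the following sketch; the subscription key is a placeholder:

```text
GET https://atlas.microsoft.com/search/address/json?api-version=1.0&query=1 Microsoft Way, Redmond, WA&subscription-key={Your-Azure-Maps-Subscription-Key}
```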
+ The following table cross-references the Google Maps API parameters with the comparable API parameters in Azure Maps. | Google Maps API parameter | Comparable Azure Maps API parameter | ||--|
-| `address` | `query` |
-| `bounds` | `topLeft` and `btmRight` |
-| `components` | `streetNumber`<br/>`streetName`<br/>`crossStreet`<br/>`postalCode`<br/>`municipality` - city / town<br/>`municipalitySubdivision` ΓÇô neighborhood, sub / super city<br/>`countrySubdivision` - state or province<br/>`countrySecondarySubdivision` - county<br/>`countryTertiarySubdivision` - district<br/>`countryCode` - two letter country/region code |
-| `key` | `subscription-key` ΓÇô For more information, see [Authentication with Azure Maps]. |
-| `language` | `language` ΓÇô For more information, see [Localization support in Azure Maps]. |
-| `region` | `countrySet` |
+| `address` | `query` |
+| `bounds` | `topLeft` and `btmRight` |
+| `components` | `streetNumber`<br/>`streetName`<br/>`crossStreet`<br/>`postalCode`<br/>`municipality` - city / town<br/>`municipalitySubdivision` – neighborhood, sub / super city<br/>`countrySubdivision` - state or province<br/>`countrySecondarySubdivision` - county<br/>`countryTertiarySubdivision` - district<br/>`countryCode` - two letter country/region code |
+| `key` | `subscription-key` – For more information, see [Authentication with Azure Maps]. |
+| `language` | `language` – For more information, see [Localization support in Azure Maps]. |
+| `region` | `countrySet` |
For more information on using the search service, see [Search for a location using Azure Maps Search services]. Be sure to review [best practices for search].
Reverse geocoding is the process of converting geographic coordinates into an ap
Azure Maps provides several reverse geocoding methods:
-* **[Address reverse geocoder]**: Specify a single geographic coordinate to get the approximate address corresponding to this coordinate. Processes the request near real time.
-* **[Cross street reverse geocoder]**: Specify a single geographic coordinate to get nearby cross street information and process the request immediately. For example, you may receive the following cross streets 1st Ave and Main St.
-* **[Batch address reverse geocoder]**: Create a request containing up to 10,000 coordinates and have them processed over a period of time. All data is processed in parallel on the server. When the request completes, you can download the full set of results.
+* [Get Search Address Reverse]: Specify a single geographic coordinate to get the approximate address corresponding to this coordinate. The request is processed in near real time (see the example request after this list).
+* [Get Search Address Reverse Cross Street]: Specify a single geographic coordinate to get nearby cross street information and process the request immediately. For example, you may receive the following cross streets: 1st Ave and Main St.
+* [Post Search Address Reverse Batch]: Create a request containing up to 10,000 coordinates and have them processed over a period of time. All data is processed in parallel on the server. When the request completes, you can download the full set of results.
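For example, a reverse geocoding request to [Get Search Address Reverse] might look like the following sketch; the coordinate and subscription key are placeholders:

```text
GET https://atlas.microsoft.com/search/address/reverse/json?api-version=1.0&query=47.6062,-122.3321&subscription-key={Your-Azure-Maps-Subscription-Key}
```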
This table cross-references the Google Maps API parameters with the comparable API parameters in Azure Maps.
Point of interest data can be searched in Google Maps using the Places Search AP
Azure Maps provides several search APIs for points of interest:
-* **[POI search]**: Search for points of interests by name. For example, "Starbucks".
-* **[POI category search]**: Search for points of interests by category. For example, "restaurant".
-* **[Nearby search]**: Searches for points of interests that are within a certain distance of a location.
-* **[Fuzzy search]**: This API combines address geocoding with point of interest search. This API takes in a free-form string that can be an address, place, landmark, point of interest, or point of interest category. It processes the request near real time. This API is recommended for applications where users search for addresses or points of interest in the same textbox.
-* **[Search within geometry]**: Search for points of interests that are within a specified geometry. For example, search a point of interest within a polygon.
-* **[Search along route]**: Search for points of interests that are along a specified route path.
-* **[Fuzzy batch search]**: Create a request containing up to 10,000 addresses, places, landmarks, or point of interests. Processed the request over a period of time. All data is processed in parallel on the server. When the request completes processing, you can download the full set of result.
+* [Get Search POI]: Search for points of interest by name. For example, "Starbucks" (see the example request after this list).
+* [Get Search POI Category]: Search for points of interest by category. For example, "restaurant".
+* [Get Search Nearby]: Searches for points of interest that are within a certain distance of a location.
+* [Get Search Fuzzy]: The fuzzy search API combines address geocoding with point of interest search. This API takes in a free-form string that can be an address, place, landmark, point of interest, or point of interest category and processes the request immediately. This API is recommended for applications where users can search for addresses or points of interest from the same textbox.
+* [Post Search Inside Geometry]: Search for points of interest that are within a specified geometry. For example, search for a point of interest within a polygon.
+* [Post Search Along Route]: Search for points of interest that are along a specified route path.
+* [Post Search Fuzzy Batch]: Use the fuzzy batch search to create a request containing up to 10,000 addresses, places, landmarks, or points of interest and have them processed over a period of time. All the data is processed in parallel on the server and when completed the full result set can be downloaded.
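For example, a [Get Search POI] request for coffee shops near a coordinate might look like the following sketch; the values are placeholders:

```text
GET https://atlas.microsoft.com/search/poi/json?api-version=1.0&query=starbucks&lat=47.6062&lon=-122.3321&radius=5000&subscription-key={Your-Azure-Maps-Subscription-Key}
```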
Currently Azure Maps doesn't have a comparable API to the Text Search API in Google Maps.
For more information, see [best practices for search].
### Find place from text
-Use the Azure Maps [POI search] and [Fuzzy search] to search for points of interests by name or address.
+Use the Azure Maps [Get Search POI] and [Get Search Fuzzy] to search for points of interests by name or address.
The table cross-references the Google Maps API parameters with the comparable Azure Maps API parameters.
The table cross-references the Google Maps API parameters with the comparable Az
### Nearby search
-Use the [Nearby search] API to retrieve nearby points of interests, in Azure Maps.
+Use the [Get Search Nearby] API to retrieve nearby points of interests, in Azure Maps.
The table shows the Google Maps API parameters with the comparable Azure Maps API parameters.
Calculate routes and directions using Azure Maps. Azure Maps has many of the sam
* Arrival and departure times. * Real-time and predictive based traffic routes.
-* Different modes of transportation. Such as, driving, walking, bicycling.
+* Different modes of transportation, such as driving, walking, and bicycling.
> [!NOTE] > Azure Maps requires all waypoints to be coordinates. Addresses must be geocoded first. The Azure Maps routing service provides the following APIs for calculating routes:
-* **[Calculate route]**: Calculate a route and have the request processed immediately. This API supports both `GET` and `POST` requests. `POST` requests are recommended when specifying a large number of waypoints or when using lots of the route options to ensure that the URL request doesn't become too long and cause issues. The `POST` Route Direction in Azure Maps has an option can that take in thousands of [supporting points] and use them to recreate a logical route path between them (snap to road).
-* **[Batch route]**: Create a request containing up to 1,000 route request and have them processed over a period of time. All the data is processed in parallel on the server and when completed the full result set can be downloaded.
+* [Get Route Directions]: Calculate a route and have the request processed immediately (see the example request after this list). This API supports both `GET` and `POST` requests. `POST` requests are recommended when specifying a large number of waypoints or when using many of the route options, to ensure that the URL request doesn't become too long and cause issues. The `POST` Route Directions request in Azure Maps has an option that can take in thousands of [supporting points] and use them to recreate a logical route path between them (snap to road).
+* [Post Route Directions Batch]: Create a request containing up to 1,000 route requests and have them processed over a period of time. All the data is processed in parallel on the server and when completed the full result set can be downloaded.
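For example, a simple [Get Route Directions] request between two coordinates might look like the following sketch; the coordinates and subscription key are placeholders:

```text
GET https://atlas.microsoft.com/route/directions/json?api-version=1.0&query=47.6062,-122.3321:47.6205,-122.3493&travelMode=car&subscription-key={Your-Azure-Maps-Subscription-Key}
```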
The table cross-references the Google Maps API parameters with the comparable API parameters in Azure Maps.
The table cross-references the Google Maps API parameters with the comparable AP
> [!TIP] > By default, the Azure Maps route API only returns a summary. It returns the distance and times and the coordinates for the route path. Use the `instructionsType` parameter to retrieve turn-by-turn instructions. And, use the `routeRepresentation` parameter to filter out the summary and route path.
-Azure Maps routing API has other features that aren't available in Google Maps. When migrating your app, consider using these features, you might find them useful.
+Azure Maps routing API has other features that aren't available in Google Maps. When migrating your app, consider using these features:
* Support for route type: shortest, fastest, thrilling, and most fuel efficient.
-* Support for other travel modes: bus, motorcycle, taxi, truck, and van.
+* Support for other travel modes: bus, motorcycle, taxi, truck and van.
* Support for 150 waypoints. * Compute multiple travel times in a single request; historic traffic, live traffic, no traffic. * Avoid other road types: carpool roads, unpaved roads, already used roads. * Specify custom areas to avoid.
-* Limit the elevation, which the route may ascend.
-* Route based on engine specifications. Calculate routes for combustion or electric vehicles based on engine specifications, and the remaining fuel or charge.
-* Support commercial vehicle route parameters. Such as, vehicle dimensions, weight, number of axels, and cargo type.
+* Limit the elevation that the route may ascend.
+* Route based on engine specifications. Calculate routes for combustion or electric vehicles based on engine specifications and the remaining fuel or charge.
+* Support for commercial vehicle route parameters, such as vehicle dimensions, weight, number of axles, and cargo type.
* Specify maximum vehicle speed.
-In addition, the route service in Azure Maps supports [calculating routable ranges]. Calculating routable ranges is also known as isochrones. It entails generating a polygon covering an area that can be traveled to in any direction from an origin point. All under a specified amount of time or amount of fuel or charge.
+In addition, the route service in Azure Maps supports [Get Route Range], which calculates routable ranges, also known as isochrones. It generates a polygon covering the area that can be traveled to in any direction from an origin point, within a specified amount of time or a specified amount of fuel or charge.
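For example, a [Get Route Range] request for a 30-minute drive-time polygon might look like the following sketch; the coordinate and subscription key are placeholders:

```text
GET https://atlas.microsoft.com/route/range/json?api-version=1.0&query=47.6062,-122.3321&timeBudgetInSec=1800&subscription-key={Your-Azure-Maps-Subscription-Key}
```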
For more information, see [best practices for routing]. ## Retrieve a map image
-Azure Maps provides an API for rendering the static map images with data overlaid. The [Map image render] API in Azure Maps is comparable to the static map API in Google Maps.
+Azure Maps provides an API for rendering static map images with data overlaid. The [Get Map Static Image] API in Azure Maps is comparable to the static map API in Google Maps.
> [!NOTE] > Azure Maps requires the center, marker, and path locations to be coordinates in "longitude,latitude" format, whereas Google Maps uses the "latitude,longitude" format. Addresses need to be geocoded first.
The table cross-references the Google Maps API parameters with the comparable AP
| Google Maps API parameter | Comparable Azure Maps API parameter | ||--|
-| `center` | `center` |
-| `format` | `format` ΓÇô specified as part of URL path. Currently only PNG supported. |
-| `key` | `subscription-key` ΓÇô For more information, see [Authentication with Azure Maps]. |
-| `language` | `language` ΓÇô For more information, see [Localization support in Azure Maps]. |
-| `maptype` | `layer` and `style` ΓÇô See [Supported map styles](supported-map-styles.md) documentation. |
-| `markers` | `pins` |
-| `path` | `path` |
-| `region` | *N/A* ΓÇô This is a geocoding related feature. Use the `countrySet` parameter when using the Azure Maps geocoding API. |
-| `scale` | *N/A* |
-| `size` | `width` and `height` ΓÇô can be up to 8192x8192 in size. |
-| `style` | *N/A* |
-| `visible` | *N/A* |
-| `zoom` | `zoom` |
+| `center` | `center` |
+| `format` | `format` – Specified as part of the URL path. Currently only PNG is supported. |
+| `key` | `subscription-key` – For more information, see [Authentication with Azure Maps]. |
+| `language` | `language` – For more information, see [Localization support in Azure Maps]. |
+| `maptype` | `layer` and `style` – For more information, see [Supported map styles]. |
+| `markers` | `pins` |
+| `path` | `path` |
+| `region` | *N/A* – A geocoding related feature. Use the `countrySet` parameter when using the Azure Maps geocoding API. |
+| `scale` | *N/A* |
+| `size` | `width` and `height` – Max size is 8192 x 8192. |
+| `style` | *N/A* |
+| `visible` | *N/A* |
+| `zoom` | `zoom` |
> [!NOTE] > In the Azure Maps tile system, tiles are twice the size of map tiles used in Google Maps. As such, a given zoom level value appears one zoom level closer in Azure Maps compared to Google Maps. To compensate for this difference, decrement the zoom level in the requests you are migrating. For more information, see [Render custom data on a raster map].
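Putting these parameters together, a static map image request might look like the following sketch; the `layer`, `style`, and `api-version` values are illustrative and may differ depending on the Render service version you target:

```text
GET https://atlas.microsoft.com/map/static/png?api-version=2.0&layer=basic&style=main&zoom=11&center=-73.985,40.747&width=600&height=400&subscription-key={Your-Azure-Maps-Subscription-Key}
```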
-In addition to being able to generate a static map image, the Azure Maps render service provides the ability to directly access map tiles in raster (PNG) and vector format:
+In addition to being able to generate a static map image, the Azure Maps render service enables direct access to map tiles in raster (PNG) and vector format:
-* **[Map tile]**: Retrieve raster (PNG) and vector tiles for the base maps (roads, boundaries, background).
-* **[Map imagery tile]**: Retrieve aerial and satellite imagery tiles.
+* [Get Map Tile]: Retrieve raster (PNG) and vector tiles for the base maps (roads, boundaries, background), as well as aerial and satellite imagery tiles.
> [!TIP] > Many Google Maps applications were switched from interactive map experiences to static map images a few years ago. This was done as a cost saving method. In Azure Maps, it is usually more cost effective to use the interactive map control in the Web SDK. The interactive map control charges based on the number of tile loads. Map tiles in Azure Maps are large. Often, it takes only a few tiles to recreate the same map view as a static map. Map tiles are cached automatically by the browser. As such, the interactive map control often generates a fraction of a transaction when reproducing a static map view. Panning and zooming will load more tiles; however, there are options in the map control to disable this behavior. The interactive map control also provides a lot more visualization options than the static map services.
Add three pins with the label values '1', '2', and '3':
**Before: Google Maps**
-Add lines and polygon to a static map image using the `path` parameter in the URL. The `path` parameter takes in a style and a list of locations to be rendered on the map, as shown below:
+Add lines and polygons to a static map image using the `path` parameter in the URL. The `path` parameter takes in a style and a list of locations to be rendered on the map:
```text &path=pathStyles|pathLocation1|pathLocation2|...
Use other styles by adding extra `path` parameters to the URL with a different s
Path locations are specified with the `latitude1,longitude1|latitude2,longitude2|…` format. Paths can be encoded or contain addresses for points.
-Add path styles with the `optionName:value` format, separate multiple styles by the pipe (\|) characters. And, separate option names and values with a colon (:). Like this: `optionName1:value1|optionName2:value2`. The following style option names can be used to style paths in Google Maps:
+Add path styles with the `optionName:value` format, separating multiple styles with the pipe (\|) character and option names from values with a colon (:). For example: `optionName1:value1|optionName2:value2`. The following style option names can be used to style paths in Google Maps:
* `color` – The color of the path or polygon outline. Can be a 24-bit hex color (`0xrrggbb`), a 32-bit hex color (`0xrrggbbbaa`) or one of the following values: black, brown, green, purple, yellow, blue, gray, orange, red, white. * `fillColor` – The color to fill the path area with (polygon). Can be a 24-bit hex color (`0xrrggbb`), a 32-bit hex color (`0xrrggbbbaa`) or one of the following values: black, brown, green, purple, yellow, blue, gray, orange, red, white.
Add a red line opacity and pixel thickness between the coordinates, in the URL p
Azure Maps provides the distance matrix API. Use this API to calculate the travel times and the distances between a set of locations, with a distance matrix. It's comparable to the distance matrix API in Google Maps.
-* **[Route matrix]**(/rest/api/maps/route/postroutematrixpreview): Asynchronously calculates travel times and distances for a set of origins and destinations. Supports up to 700 cells per request. That's the number of origins multiplied by the number of destinations. With that constraint in mind, examples of possible matrix dimensions are: 700x1, 50x10, 10x10, 28x25, 10x70.
+* [Post Route Matrix]: Asynchronously calculates travel times and distances for a set of origins and destinations. Supports up to 700 cells per request. That's the number of origins multiplied by the number of destinations. With that constraint in mind, examples of possible matrix dimensions are: 700x1, 50x10, 10x10, 28x25, 10x70.
> [!NOTE] > A request to the distance matrix API can only be made using a `POST` request with the origin and destination information in the body of the request. Additionally, Azure Maps requires all origins and destinations to be coordinates. Addresses will need to be geocoded first.
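A minimal sketch of a [Post Route Matrix] request body, using GeoJSON `MultiPoint` objects for the origins and destinations (the coordinates are placeholders in `[longitude, latitude]` order):

```json
{
  "origins": {
    "type": "MultiPoint",
    "coordinates": [ [-122.3321, 47.6062], [-122.3493, 47.6205] ]
  },
  "destinations": {
    "type": "MultiPoint",
    "coordinates": [ [-122.1215, 47.6740] ]
  }
}
```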
For more information, see [best practices for routing].
Azure Maps provides an API for retrieving the time zone of a coordinate. The Azure Maps time zone API is comparable to the time zone API in Google Maps:
-* **[Time zone by coordinate]**(/rest/api/maps/timezone/gettimezonebycoordinates): Specify a coordinate and receive the time zone details of the coordinate.
+* [Get Timezone By Coordinates]: Specify a coordinate and receive the time zone details of the coordinate.
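For example, a [Get Timezone By Coordinates] request might look like the following sketch; the coordinate and subscription key are placeholders:

```text
GET https://atlas.microsoft.com/timezone/byCoordinates/json?api-version=1.0&query=47.6062,-122.3321&subscription-key={Your-Azure-Maps-Subscription-Key}
```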
This table cross-references the Google Maps API parameters with the comparable API parameters in Azure Maps.
This table cross-references the Google Maps API parameters with the comparable A
In addition to this API, Azure Maps provides many time zone APIs. These APIs convert the time based on the names or the IDs of the time zone:
-* **[Time zone by ID]**: Returns current, historical, and future time zone information for the specified IANA time zone ID.
-* **[Time zone Enum IANA]**: Returns a full list of IANA time zone IDs. Updates to the IANA service are reflected in the system within one day.
-* **[Time zone Enum Windows]**: Returns a full list of Windows Time Zone IDs.
-* **[Time zone IANA version]**: Returns the current IANA version number used by Azure Maps.
-* **[Time zone Windows to IANA]**: Returns a corresponding IANA ID, given a valid Windows Time Zone ID. Multiple IANA IDs may be returned for a single Windows ID.
+* [Get Timezone By ID]: Returns current, historical, and future time zone information for the specified IANA time zone ID.
+* [Get Timezone Enum IANA]: Returns a full list of IANA time zone IDs. Updates to the IANA service are reflected in the system within one day.
+* [Get Timezone Enum Windows]: Returns a full list of Windows Time Zone IDs.
+* [Get Timezone IANA Version]: Returns the current IANA version number used by Azure Maps.
+* [Get Timezone Windows To IANA]: Returns a corresponding IANA ID, given a valid Windows Time Zone ID. Multiple IANA IDs may be returned for a single Windows ID.
## Client libraries Azure Maps provides client libraries for the following programming languages:
-* JavaScript, TypeScript, Node.js ΓÇô [documentation] \| [npm package]
+* JavaScript, TypeScript, Node.js – [Azure Maps services module] \| [npm package]
These open-source client libraries are available for other programming languages:
No resources to be cleaned up.
Learn more about Azure Maps REST > [!div class="nextstepaction"]
-> [Best practices for search](how-to-use-best-practices-for-search.md)
+> [Best practices for search]
-[Address reverse geocoder]: /rest/api/maps/search/getsearchaddressreverse
[Authentication with Azure Maps]: azure-maps-authentication.md [Azure Maps account]: quick-demo-map-app.md#create-an-azure-maps-account
+[Azure Maps services module]: how-to-use-services-module.md
[Basic snap to road logic]: https://samples.azuremaps.com/?sample=basic-snap-to-road-logic
-[Batch address geocoding]: /rest/api/maps/search/postsearchaddressbatchpreview
-[Batch address reverse geocoder]: /rest/api/maps/search/postsearchaddressreversebatchpreview
-[Batch route]: /rest/api/maps/route/postroutedirectionsbatchpreview
[best practices for routing]: how-to-use-best-practices-for-routing.md [best practices for search]: how-to-use-best-practices-for-search.md
-[Calculate route]: /rest/api/maps/route/getroutedirections
[Calculate routes and directions]: #calculate-routes-and-directions
-[calculating routable ranges]: /rest/api/maps/route/getrouterange
-[Cross street reverse geocoder]: /rest/api/maps/search/getsearchaddressreversecrossstreet
-[documentation]: how-to-use-services-module.md
[free account]: https://azure.microsoft.com/free/
-[Free-form address geocoding]: /rest/api/maps/search/getsearchaddress
-[Fuzzy batch search]: /rest/api/maps/search/postsearchfuzzybatchpreview
-[Fuzzy search]: /rest/api/maps/search/getsearchfuzzy
+[Get Map Static Image]: /rest/api/maps/render-v2/get-map-static-image
+[Get Map Tile]: /rest/api/maps/render-v2/get-map-tile
+[Get Route Directions]: /rest/api/maps/route/get-route-directions
+[Get Route Range]: /rest/api/maps/route/get-route-range
+[Get Search Address Reverse Cross Street]: /rest/api/maps/search/get-search-address-reverse-cross-street
+[Get Search Address Reverse]: /rest/api/maps/search/get-search-address-reverse
+[Get Search Address Structured]: /rest/api/maps/search/get-search-address-structured
+[Get Search Address]: /rest/api/maps/search/get-search-address
+[Get Search Fuzzy]: /rest/api/maps/search/get-search-fuzzy
+[Get Search Nearby]: /rest/api/maps/search/get-search-nearby
+[Get Search POI Category]: /rest/api/maps/search/get-search-poi-category
+[Get Search POI]: /rest/api/maps/search/get-search-poi
+[Get Timezone By Coordinates]: /rest/api/maps/timezone/get-timezone-by-coordinates
+[Get Timezone By ID]: /rest/api/maps/timezone/get-timezone-by-id
+[Get Timezone Enum IANA]: /rest/api/maps/timezone/get-timezone-enum-iana
+[Get Timezone Enum Windows]: /rest/api/maps/timezone/get-timezone-enum-windows
+[Get Timezone IANA Version]: /rest/api/maps/timezone/get-timezone-iana-version
+[Get Timezone Windows To IANA]: /rest/api/maps/timezone/get-timezone-windows-to-iana
[GitHub project]: https://github.com/perfahlen/AzureMapsRestServices [Localization support in Azure Maps]: supported-languages.md
-[manage authentication in Azure Maps]: how-to-manage-authentication.md
-[Map image render]: /rest/api/maps/render/getmapimagerytile
-[Map imagery tile]: /rest/api/maps/render/getmapimagerytile
-[Map tile]: /rest/api/maps/render-v2/get-map-tile
-[Nearby search]: /rest/api/maps/search/getsearchnearby
+[Manage authentication in Azure Maps]: how-to-manage-authentication.md
[npm package]: https://www.npmjs.com/package/azure-maps-rest [NuGet package]: https://www.nuget.org/packages/AzureMapsRestToolkit
-[POI category search]: /rest/api/maps/search/getsearchpoicategory
-[POI search]: /rest/api/maps/search/getsearchpoi
+[Post Route Directions Batch]: /rest/api/maps/route/post-route-directions-batch
+[Post Route Matrix]: /rest/api/maps/route/post-route-matrix
+[Post Search Address Batch]: /rest/api/maps/search/post-search-address-batch
+[Post Search Address Reverse Batch]: /rest/api/maps/search/post-search-address-reverse-batch
+[Post Search Along Route]: /rest/api/maps/search/post-search-along-route
+[Post Search Fuzzy Batch]: /rest/api/maps/search/post-search-fuzzy-batch
+[Post Search Inside Geometry]: /rest/api/maps/search/post-search-inside-geometry
[Render custom data on a raster map]: how-to-render-custom-data.md [Render]: /rest/api/maps/render-v2/get-map-static-image [Reverse geocode a coordinate]: #reverse-geocode-a-coordinate
-[Route Matrix]: /rest/api/maps/route/postroutematrixpreview
[Route]: /rest/api/maps/route
-[Search along route]: /rest/api/maps/search/postsearchalongroute
[Search for a location using Azure Maps Search services]: how-to-search-for-address.md
-[Search within geometry]: /rest/api/maps/search/postsearchinsidegeometry
[Search]: /rest/api/maps/search [Spatial operations]: /rest/api/maps/spatial
-[Structured address geocoding]: /rest/api/maps/search/getsearchaddressstructured
[subscription key]: quick-demo-map-app.md#get-the-subscription-key-for-your-account
+[Supported map styles]: supported-map-styles.md
[supported search categories]: supported-search-categories.md
-[supporting points]: /rest/api/maps/route/postroutedirections#supportingpoints
-[Time zone by ID]: /rest/api/maps/timezone/gettimezonebyid
-[Time zone Enum IANA]: /rest/api/maps/timezone/gettimezoneenumiana
-[Time zone Enum Windows]: /rest/api/maps/timezone/gettimezoneenumwindows
-[Time zone IANA version]: /rest/api/maps/timezone/gettimezoneianaversion
-[Time zone Windows to IANA]: /rest/api/maps/timezone/gettimezonewindowstoiana
-[Time Zone]: /rest/api/maps/timezone
+[supporting points]: /rest/api/maps/route/post-route-directions#request-body
+[Timezone]: /rest/api/maps/timezone
[Traffic]: /rest/api/maps/traffic
azure-maps Supported Map Styles https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/supported-map-styles.md
A **road** map is a standard map that displays roads. It also displays natural a
**Applicable APIs:**
-* [Map image]
-* [Map tile]
-* Web SDK map control
-* Android map control
-* Power BI visual
+* [Get Map Static Image]
+* [Get Map Tile]
+* [Web SDK map control]
+* [Android map control]
+* [Power BI visual]
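As a minimal Web SDK sketch (assuming an existing map instance named `map`), the road style can be applied directly or exposed through the built-in style picker control:

```javascript
// Minimal sketch: apply the road style, or let users switch styles with the style control.
map.setStyle({ style: 'road' });
map.controls.add(new atlas.control.StyleControl({
    mapStyles: ['road', 'satellite', 'satellite_road_labels', 'grayscale_dark', 'night']
}), { position: 'top-right' });
```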
## blank and blank_accessible
The **blank** and **blank_accessible** map styles provide a blank canvas for vis
**Applicable APIs:**
-* Web SDK map control
+* [Web SDK map control]
## satellite
The **satellite** style is a combination of satellite and aerial imagery.
**Applicable APIs:**
-* [Satellite tile]
-* Web SDK map control
-* Android map control
-* Power BI visual
+* [Get Map Tile]
+* [Web SDK map control]
+* [Android map control]
+* [Power BI visual]
## satellite_road_labels
This map style is a hybrid of roads and labels overlaid on top of satellite and
**Applicable APIs:**
-* Web SDK map control
-* Android map control
-* Power BI visual
+* [Web SDK map control]
+* [Android map control]
+* [Power BI visual]
## grayscale_dark
This map style is a hybrid of roads and labels overlaid on top of satellite and
**Applicable APIs:**
-* [Map image]
-* [Map tile]
-* Web SDK map control
-* Android map control
-* Power BI visual
+* [Get Map Static Image]
+* [Get Map Tile]
+* [Web SDK map control]
+* [Android map control]
+* [Power BI visual]
## grayscale_light
This map style is a hybrid of roads and labels overlaid on top of satellite and
![grayscale light map style](./media/supported-map-styles/grayscale-light.jpg) **Applicable APIs:**
-* Web SDK map control
-* Android map control
-* Power BI visual
+
+* [Web SDK map control]
+* [Android map control]
+* [Power BI visual]
## night
This map style is a hybrid of roads and labels overlaid on top of satellite and
**Applicable APIs:**
-* Web SDK map control
-* Android map control
-* Power BI visual
+* [Web SDK map control]
+* [Android map control]
+* [Power BI visual]
## road_shaded_relief
This map style is a hybrid of roads and labels overlaid on top of satellite and
**Applicable APIs:**
-* [Map tile]
-* Web SDK map control
-* Android map control
-* Power BI visual
+* [Get Map Tile]
+* [Web SDK map control]
+* [Android map control]
+* [Power BI visual]
## high_contrast_dark
This map style is a hybrid of roads and labels overlaid on top of satellite and
**Applicable APIs:**
-* Web SDK map control
-* Android map control
-* Power BI visual
+* [Web SDK map control]
+* [Android map control]
+* [Power BI visual]
## high_contrast_light
This map style is a hybrid of roads and labels overlaid on top of satellite and
**Applicable APIs:**
-* Web SDK map control
-* Android map control
-* Power BI visual
+* [Web SDK map control]
+* [Android map control]
+* [Power BI visual]
## Map style accessibility
Learn about how to set a map style in Azure Maps:
> [!div class="nextstepaction"] > [Choose a map style]
-[Choose the right pricing tier in Azure Maps]: choose-pricing-tier.md
-[Map image]: /rest/api/maps/render-v2/get-map-static-image
-[Map tile]: /rest/api/maps/render-v2/get-map-tile
-[Satellite tile]: /rest/api/maps/render/getmapimagerytilepreview
+[Android map control]: how-to-use-android-map-control-library.md
[Choose a map style]: choose-map-style.md
+[Get Map Static Image]: /rest/api/maps/render-v2/get-map-static-image
+[Get Map Tile]: /rest/api/maps/render-v2/get-map-tile
+[Power BI visual]: power-bi-visual-get-started.md
+[Web SDK map control]: how-to-use-map-control.md
azure-monitor App Insights Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/app-insights-overview.md
This section outlines supported scenarios.
* [ASP.NET](./asp-net.md) * [Java](./opentelemetry-enable.md?tabs=java) * [Node.js](./nodejs.md)
-* [Python](./opencensus-python.md)
+* [Python](/previous-versions/azure/azure-monitor/app/opencensus-python)
* [ASP.NET Core](./asp-net-core.md) #### Client-side JavaScript SDK
azure-monitor Asp Net Dependencies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/asp-net-dependencies.md
A list of the latest [currently supported modules](https://github.com/microsoft/
* [User and page data](./javascript.md) * [Availability](./availability-overview.md) * Set up custom dependency tracking for [Java](opentelemetry-add-modify.md?tabs=java#add-custom-spans).
-* Set up custom dependency tracking for [OpenCensus Python](./opencensus-python-dependency.md).
+* Set up custom dependency tracking for [OpenCensus Python](/previous-versions/azure/azure-monitor/app/opencensus-python-dependency).
* [Write custom dependency telemetry](./api-custom-events-metrics.md#trackdependency) * See [data model](./data-model-complete.md) for Application Insights types and data model. * Check out [platforms](./app-insights-overview.md#supported-languages) supported by Application Insights.
azure-monitor Asp Net Trace Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/asp-net-trace-logs.md
Now you can easily filter out in **Transaction Search** all the messages of a pa
The Azure Monitor Log Handler allows you to export Python logs to Azure Monitor.
-Instrument your application with the [OpenCensus Python SDK](./opencensus-python.md) for Azure Monitor.
+Instrument your application with the [OpenCensus Python SDK](/previous-versions/azure/azure-monitor/app/opencensus-python) for Azure Monitor.
This example shows how to send a warning level log to Azure Monitor.
azure-monitor Azure Ad Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/azure-ad-authentication.md
The following SDKs and features are unsupported for use with Azure AD authentica
- [Application Insights Java 2.x SDK](deprecated-java-2x.md#monitor-dependencies-caught-exceptions-and-method-execution-times-in-java-web-apps).<br> Azure AD authentication is only available for Application Insights Java Agent greater than or equal to 3.2.0. - [ApplicationInsights JavaScript web SDK](javascript.md).-- [Application Insights OpenCensus Python SDK](opencensus-python.md) with Python version 3.4 and 3.5.
+- [Application Insights OpenCensus Python SDK](/previous-versions/azure/azure-monitor/app/opencensus-python) with Python version 3.4 and 3.5.
- [Certificate/secret-based Azure AD](../../active-directory/authentication/active-directory-certificate-based-authentication-get-started.md) isn't recommended for production. Use managed identities instead. - On-by-default codeless monitoring (for languages) for Azure App Service, Azure Virtual Machines/Azure Virtual Machine Scale Sets, and Azure Functions. - [Availability tests](availability-overview.md).
azure-monitor Azure Vm Vmss Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/azure-vm-vmss-apps.md
To instrument your Node.js application, use the [SDK](./nodejs.md).
### [Python](#tab/python)
-To monitor Python apps, use the [SDK](./opencensus-python.md).
+To monitor Python apps, use the [SDK](/previous-versions/azure/azure-monitor/app/opencensus-python).
azure-monitor Azure Web Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/azure-web-apps.md
There are two ways to enable monitoring for applications hosted on App Service:
* **Manually instrumenting the application through code** by installing the Application Insights SDK.
- This approach is much more customizable, but it requires the following approaches: SDK for [.NET Core](./asp-net-core.md), [.NET](./asp-net.md), [Node.js](./nodejs.md), [Python](./opencensus-python.md), and a standalone agent for [Java](./opentelemetry-enable.md?tabs=java). This method also means you must manage the updates to the latest version of the packages yourself.
+ This approach is much more customizable, but it requires the following approaches: SDK for [.NET Core](./asp-net-core.md), [.NET](./asp-net.md), [Node.js](./nodejs.md), [Python](/previous-versions/azure/azure-monitor/app/opencensus-python), and a standalone agent for [Java](./opentelemetry-enable.md?tabs=java). This method also means you must manage the updates to the latest version of the packages yourself.
If you need to make custom API calls to track events/dependencies not captured by default with autoinstrumentation monitoring, you need to use this method. To learn more, see [Application Insights API for custom events and metrics](./api-custom-events-metrics.md).
azure-monitor Create Workspace Resource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/create-workspace-resource.md
For information on how to set up an Application Insights SDK for code-based moni
- [Java](./opentelemetry-enable.md?tabs=java) - [JavaScript](./javascript.md) - [Node.js](./nodejs.md)-- [Python](./opencensus-python.md)
+- [Python](/previous-versions/azure/azure-monitor/app/opencensus-python)
### Codeless monitoring and Visual Studio resource creation
azure-monitor Distributed Tracing Telemetry Correlation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/distributed-tracing-telemetry-correlation.md
The Application Insights agents and SDKs for .NET, .NET Core, Java, Node.js, and
* [Java](./opentelemetry-enable.md?tabs=java) * [Node.js](../app/nodejs.md) * [JavaScript](./javascript.md#enable-distributed-tracing)
-* [Python](opencensus-python.md)
+* [Python](/previous-versions/azure/azure-monitor/app/opencensus-python)
With the proper Application Insights SDK installed and configured, tracing information is automatically collected for popular frameworks, libraries, and technologies by SDK dependency auto-collectors. The full list of supported technologies is available in the [Dependency auto-collection documentation](asp-net-dependencies.md#dependency-auto-collection).
The following pages consist of language-by-language guidance to enable and confi
In addition to the Application Insights SDKs, Application Insights also supports distributed tracing through [OpenCensus](https://opencensus.io/). OpenCensus is an open-source, vendor-agnostic, single distribution of libraries to provide metrics collection and distributed tracing for services. It also enables the open-source community to enable distributed tracing with popular technologies like Redis, Memcached, or MongoDB. [Microsoft collaborates on OpenCensus with several other monitoring and cloud partners](https://open.microsoft.com/2018/06/13/microsoft-joins-the-opencensus-project/).
-For more information on OpenCensus for Python, see [Set up Azure Monitor for your Python application](opencensus-python.md).
+For more information on OpenCensus for Python, see [Set up Azure Monitor for your Python application](/previous-versions/azure/azure-monitor/app/opencensus-python).
The OpenCensus website maintains API reference documentation for [Python](https://opencensus.io/api/python/trace/usage.html), [Go](https://godoc.org/go.opencensus.io), and various guides for using OpenCensus.
By looking at the [Trace-Context header format](https://www.w3.org/TR/trace-cont
If you look at the request entry that was sent to Azure Monitor, you can see fields populated with the trace header information. You can find the data under **Logs (Analytics)** in the Azure Monitor Application Insights resource. The `id` field is in the format `<trace-id>.<span-id>`, where `trace-id` is taken from the trace header that was passed in the request and `span-id` is a generated 8-byte array for this span.
When this code runs, the following prints in the console:
Notice that there's a `spanId` present for the log message that's within the span. The `spanId` is the same as that which belongs to the span named `hello`.
-You can export the log data by using `AzureLogHandler`. For more information, see [Set up Azure Monitor for your Python application](./opencensus-python.md#logs).
+You can export the log data by using `AzureLogHandler`. For more information, see [Set up Azure Monitor for your Python application](/previous-versions/azure/azure-monitor/app/opencensus-python#logs).
We can also pass trace information from one component to another for proper correlation. For example, consider a scenario where there are two components, `module1` and `module2`. Module1 calls functions in Module2. To get logs from both `module1` and `module2` in a single trace, we can use the following approach:
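One way to do this is to let `module2` pick up the tracer from OpenCensus's execution context. The following is a minimal sketch (the file names `module1.py` and `module2.py`, the span names, and the connection string are placeholders):

```python
# module1.py -- owns the exporter and the root span
from opencensus.ext.azure.trace_exporter import AzureExporter
from opencensus.trace.samplers import ProbabilitySampler
from opencensus.trace.tracer import Tracer

import module2

tracer = Tracer(
    exporter=AzureExporter(connection_string="InstrumentationKey=<your-ikey-here>"),
    sampler=ProbabilitySampler(1.0),
)

with tracer.span(name="module1"):
    module2.do_work()  # spans and logs emitted in module2 join this trace
```

```python
# module2.py -- reuses the tracer stored in the OpenCensus execution context
from opencensus.trace import execution_context

def do_work():
    tracer = execution_context.get_opencensus_tracer()
    with tracer.span(name="module2"):
        print("doing work")
```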
azure-monitor Get Metric https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/get-metric.md
Throttling is a concern because it can lead to missed alerts. The condition to t
In summary, we recommend `GetMetric()` because it does pre-aggregation, it accumulates values from all the `Track()` calls, and sends a summary/aggregate once every minute. The `GetMetric()` method can significantly reduce the cost and performance overhead by sending fewer data points while still collecting all relevant information.

> [!NOTE]
-> Only the .NET and .NET Core SDKs have a `GetMetric()` method. If you're using Java, see [Sending custom metrics using micrometer](./java-standalone-config.md#autocollected-micrometer-metrics-including-spring-boot-actuator-metrics). For JavaScript and Node.js, you would still use `TrackMetric()`, but keep in mind the caveats that were outlined in the previous section. For Python, you can use [OpenCensus.stats](./opencensus-python.md#metrics) to send custom metrics, but the metrics implementation is different.
+> Only the .NET and .NET Core SDKs have a `GetMetric()` method. If you're using Java, see [Sending custom metrics using micrometer](./java-standalone-config.md#autocollected-micrometer-metrics-including-spring-boot-actuator-metrics). For JavaScript and Node.js, you would still use `TrackMetric()`, but keep in mind the caveats that were outlined in the previous section. For Python, you can use [OpenCensus.stats](/previous-versions/azure/azure-monitor/app/opencensus-python#metrics) to send custom metrics, but the metrics implementation is different.
## Get started with GetMetric
azure-monitor Java Standalone Telemetry Processors Examples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-standalone-telemetry-processors-examples.md
Let's assume the input log message body is `User account with userId 123456xx fa
} } ```+
+## Frequently asked questions
+
+### Why doesn't the log processor process logs using TelemetryClient.trackTrace()?
+
+TelemetryClient.trackTrace() is part of the Application Insights Classic SDK bridge, and the log processors only work with the new [OpenTelemetry-based instrumentation](opentelemetry-enable.md).
azure-monitor Java Standalone Telemetry Processors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-standalone-telemetry-processors.md
Some use cases:
Before you learn about telemetry processors, you should understand the terms *span* and *log*.
-A span is a type of telemetry that represent one of:
+A span is a type of telemetry that represents one of:
* An incoming request.
* An outgoing dependency (for example, a remote call to another service).
The log processor modifies either the log message body or attributes of a log ba
### Update Log message body
-The `body` section requires the `fromAttributes` setting. The values from these attributes are used to create a new body, concatenated in the order that the configuration specifies. The processor will change the log body only if all of these attributes are present on the log.
+The `body` section requires the `fromAttributes` setting. The values from these attributes are used to create a new body, concatenated in the order that the configuration specifies. The processor changes the log body only if all of these attributes are present on the log.
The `separator` setting is optional. This setting is a string. It's specified to split values. > [!NOTE]
For more information, see [Telemetry processor examples](./java-standalone-telem
Metric filters are used to exclude some metrics in order to help control ingestion cost.
-Metric filters only support `exclude` criteria. Metrics that match its `exclude` criteria will not be exported.
+Metric filters only support `exclude` criteria. Metrics that match its `exclude` criteria won't be exported.
To configure this option, under `exclude`, specify the `matchType` and one or more `metricNames`.
To configure this option, under `exclude`, specify the `matchType` one or more `
| `\Process(??APP_WIN32_PROC??)\Private Bytes` | default metrics | Sum of [MemoryMXBean.getHeapMemoryUsage()](https://docs.oracle.com/javase/8/docs/api/java/lang/management/MemoryMXBean.html#getHeapMemoryUsage--) and [MemoryMXBean.getNonHeapMemoryUsage()](https://docs.oracle.com/javase/8/docs/api/java/lang/management/MemoryMXBean.html#getNonHeapMemoryUsage--). | no |
| `\Process(??APP_WIN32_PROC??)\IO Data Bytes/sec` | default metrics | `/proc/[pid]/io` Sum of bytes read and written by the process (diff since last reported). See [proc(5)](https://man7.org/linux/man-pages/man5/proc.5.html). | no |
| `\Memory\Available Bytes` | default metrics | See [OperatingSystemMXBean.getFreePhysicalMemorySize()](https://docs.oracle.com/javase/7/docs/jre/api/management/extension/com/sun/management/OperatingSystemMXBean.html#getFreePhysicalMemorySize()). | no |
+
+## Frequently asked questions
+
+### Why doesn't the log processor process logs using TelemetryClient.trackTrace()?
+
+TelemetryClient.trackTrace() is part of the Application Insights Classic SDK bridge, and the log processors only work with the new [OpenTelemetry-based instrumentation](opentelemetry-enable.md).
azure-monitor Kubernetes Codeless https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/kubernetes-codeless.md
> [!IMPORTANT]
> Currently, you can enable monitoring for your Java apps running on Azure Kubernetes Service (AKS) without instrumenting your code by using the [Java standalone agent](./opentelemetry-enable.md?tabs=java).
-> While the solution to seamlessly enable application monitoring is in process for other languages, use the SDKs to monitor your apps running on AKS. Use [ASP.NET Core](./asp-net-core.md), [ASP.NET](./asp-net.md), [Node.js](./nodejs.md), [JavaScript](./javascript.md), and [Python](./opencensus-python.md).
+> While the solution to seamlessly enable application monitoring is in process for other languages, use the SDKs to monitor your apps running on AKS. Use [ASP.NET Core](./asp-net-core.md), [ASP.NET](./asp-net.md), [Node.js](./nodejs.md), [JavaScript](./javascript.md), and [Python](/previous-versions/azure/azure-monitor/app/opencensus-python).
## Application monitoring without instrumenting the code

Currently, only Java lets you enable application monitoring without instrumenting the code. To monitor applications in other languages, use the SDKs.
For the applications in other languages, we currently recommend using the SDKs:
* [ASP.NET](./asp-net.md)
* [Node.js](./nodejs.md)
* [JavaScript](./javascript.md)
-* [Python](./opencensus-python.md)
+* [Python](/previous-versions/azure/azure-monitor/app/opencensus-python)
## Troubleshooting
azure-monitor Opencensus Python Dependency https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/opencensus-python-dependency.md
- Title: Dependency Tracking in Azure Application Insights with OpenCensus Python | Microsoft Docs
-description: Monitor dependency calls for your Python apps via OpenCensus Python.
- Previously updated : 03/22/2023----
-# Track dependencies with OpenCensus Python
-
-> [!NOTE]
-> [OpenCensus Python SDK is deprecated](https://opentelemetry.io/blog/2023/sunsetting-opencensus/), but Microsoft supports it until retirement on September 30, 2024. We now recommend the [OpenTelemetry-based Python offering](https://learn.microsoft.com/azure/azure-monitor/app/opentelemetry-enable?tabs=python) and provide [migration guidance](https://learn.microsoft.com/azure/azure-monitor/app/opentelemetry-python-opencensus-migrate?tabs=aspnetcore).
-
-A dependency is an external component that is called by your application. Dependency data is collected using OpenCensus Python and its various integrations. The data is then sent to Application Insights under Azure Monitor as `dependencies` telemetry.
-
-First, instrument your Python application with latest [OpenCensus Python SDK](./opencensus-python.md).
-
-## In-process dependencies
-
-OpenCensus Python SDK for Azure Monitor allows you to send "in-process" dependency telemetry (information and logic that occurs within your application). In-process dependencies will have the `type` field as `INPROC` in analytics.
-
-```python
-from opencensus.ext.azure.trace_exporter import AzureExporter
-from opencensus.trace.samplers import ProbabilitySampler
-from opencensus.trace.tracer import Tracer
-
-tracer = Tracer(exporter=AzureExporter(connection_string="InstrumentationKey=<your-ikey-here>"), sampler=ProbabilitySampler(1.0))
-
-with tracer.span(name='foo'): # <-- A dependency telemetry item will be sent for this span "foo"
- print('Hello, World!')
-```
-
-## Dependencies with "requests" integration
-
-Track your outgoing requests with the OpenCensus `requests` integration.
-
-Download and install `opencensus-ext-requests` from [PyPI](https://pypi.org/project/opencensus-ext-requests/) and add it to the trace integrations. Requests sent using the Python [requests](https://pypi.org/project/requests/) library will be tracked.
-
-```python
-import requests
-from opencensus.ext.azure.trace_exporter import AzureExporter
-from opencensus.trace import config_integration
-from opencensus.trace.samplers import ProbabilitySampler
-from opencensus.trace.tracer import Tracer
-
-config_integration.trace_integrations(['requests']) # <-- this line enables the requests integration
-
-tracer = Tracer(exporter=AzureExporter(connection_string="InstrumentationKey=<your-ikey-here>"), sampler=ProbabilitySampler(1.0))
-
-with tracer.span(name='parent'):
- response = requests.get(url='https://www.wikipedia.org/wiki/Rabbit') # <-- this request will be tracked
-```
-
-## Dependencies with "httplib" integration
-
-Track your outgoing requests with OpenCensus `httplib` integration.
-
-Download and install `opencensus-ext-httplib` from [PyPI](https://pypi.org/project/opencensus-ext-httplib/) and add it to the trace integrations. Requests sent using [http.client](https://docs.python.org/3.7/library/http.client.html) for Python3 or [httplib](https://docs.python.org/2/library/httplib.html) for Python2 will be tracked.
-
-```python
-import http.client as httplib
-from opencensus.ext.azure.trace_exporter import AzureExporter
-from opencensus.trace import config_integration
-from opencensus.trace.samplers import ProbabilitySampler
-from opencensus.trace.tracer import Tracer
-
-config_integration.trace_integrations(['httplib'])
-conn = httplib.HTTPConnection("www.python.org")
-
-tracer = Tracer(
- exporter=AzureExporter(),
- sampler=ProbabilitySampler(1.0)
-)
-
-conn.request("GET", "http://www.python.org", "", {})
-response = conn.getresponse()
-conn.close()
-```
-
-## Dependencies with "django" integration
-
-Track your outgoing Django requests with the OpenCensus `django` integration.
-
-> [!NOTE]
-> The only outgoing Django requests that are tracked are calls made to a database. For requests made to the Django application, see [incoming requests](./opencensus-python-request.md#track-django-applications).
-
-Download and install `opencensus-ext-django` from [PyPI](https://pypi.org/project/opencensus-ext-django/) and add the following line to the `MIDDLEWARE` section in the Django `settings.py` file.
-
-```python
-MIDDLEWARE = [
- ...
- 'opencensus.ext.django.middleware.OpencensusMiddleware',
-]
-```
-
-Additional configuration can be provided. For a complete reference, see [customizations](https://github.com/census-instrumentation/opencensus-python#customization).
-
-```python
-OPENCENSUS = {
- 'TRACE': {
- 'SAMPLER': 'opencensus.trace.samplers.ProbabilitySampler(rate=1)',
- 'EXPORTER': '''opencensus.ext.azure.trace_exporter.AzureExporter(
- connection_string="InstrumentationKey=<your-ikey-here>"
- )''',
- }
-}
-```
-
-You can find a Django sample application that uses dependencies in the Azure Monitor OpenCensus Python samples repository located [here](https://github.com/Azure-Samples/azure-monitor-opencensus-python/tree/master/azure_monitor/django_sample).
-
-## Dependencies with "mysql" integration
-
-Track your MySQL dependencies with the OpenCensus `mysql` integration. This integration supports the [mysql-connector](https://pypi.org/project/mysql-connector-python/) library.
-
-Download and install `opencensus-ext-mysql` from [PyPI](https://pypi.org/project/opencensus-ext-mysql/) and add the following lines to your code.
-
-```python
-from opencensus.trace import config_integration
-
-config_integration.trace_integrations(['mysql'])
-```
-
-## Dependencies with "pymysql" integration
-
-Track your PyMySQL dependencies with the OpenCensus `pymysql` integration.
-
-Download and install `opencensus-ext-pymysql` from [PyPI](https://pypi.org/project/opencensus-ext-pymysql/) and add the following lines to your code.
-
-```python
-from opencensus.trace import config_integration
-
-config_integration.trace_integrations(['pymysql'])
-```
-
-## Dependencies with "postgresql" integration
-
-Track your PostgreSQL dependencies with the OpenCensus `postgresql` integration. This integration supports the [psycopg2](https://pypi.org/project/psycopg2/) library.
-
-Download and install `opencensus-ext-postgresql` from [PyPI](https://pypi.org/project/opencensus-ext-postgresql/) and add the following lines to your code.
-
-```python
-from opencensus.trace import config_integration
-
-config_integration.trace_integrations(['postgresql'])
-```
-
-## Dependencies with "pymongo" integration
-
-Track your MongoDB dependencies with the OpenCensus `pymongo` integration. This integration supports the [pymongo](https://pypi.org/project/pymongo/) library.
-
-Download and install `opencensus-ext-pymongo` from [PyPI](https://pypi.org/project/opencensus-ext-pymongo/) and add the following lines to your code.
-
-```python
-from opencensus.trace import config_integration
-
-config_integration.trace_integrations(['pymongo'])
-```
-
-## Dependencies with "sqlalchemy" integration
-
-Track your SQLAlchemy dependencies by using the OpenCensus `sqlalchemy` integration. This integration tracks the usage of the [sqlalchemy](https://pypi.org/project/SQLAlchemy/) package, regardless of the underlying database.
-
-```python
-from opencensus.trace import config_integration
-
-config_integration.trace_integrations(['sqlalchemy'])
-```
-
-## Next steps
-
-* [Application Map](./app-map.md)
-* [Availability](./availability-overview.md)
-* [Search](./diagnostic-search.md)
-* [Log (Analytics) query](../logs/log-query-overview.md)
-* [Transaction diagnostics](./transaction-diagnostics.md)
-
azure-monitor Opencensus Python Request https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/opencensus-python-request.md
- Title: Incoming request tracking in Application Insights with OpenCensus Python | Microsoft Docs
-description: Monitor request calls for your Python apps via OpenCensus Python.
- Previously updated : 06/23/2023----
-# Track incoming requests with OpenCensus Python
-
-> [!NOTE]
-> [OpenCensus Python SDK is deprecated](https://opentelemetry.io/blog/2023/sunsetting-opencensus/), but Microsoft supports it until retirement on September 30, 2024. We now recommend the [OpenTelemetry-based Python offering](https://learn.microsoft.com/azure/azure-monitor/app/opentelemetry-enable?tabs=python) and provide [migration guidance](https://learn.microsoft.com/azure/azure-monitor/app/opentelemetry-python-opencensus-migrate?tabs=aspnetcore).
-
-OpenCensus Python and its integrations collect incoming request data. You can track incoming request data sent to your web applications built on top of the popular web frameworks Django, Flask, and Pyramid. Application Insights receives the data as `requests` telemetry.
-
-First, instrument your Python application with the latest [OpenCensus Python SDK](./opencensus-python.md).
-
-## Track Django applications
-
-1. Download and install `opencensus-ext-django` from [PyPI](https://pypi.org/project/opencensus-ext-django/). Instrument your application with the `django` middleware. Incoming requests sent to your Django application are tracked.
-
-1. Include `opencensus.ext.django.middleware.OpencensusMiddleware` in your `settings.py` file under `MIDDLEWARE`.
-
- ```python
- MIDDLEWARE = (
- ...
- 'opencensus.ext.django.middleware.OpencensusMiddleware',
- ...
- )
- ```
-
-1. Make sure AzureExporter is configured properly in your `settings.py` under `OPENCENSUS`. For requests from URLs that you don't want to track, add them to `EXCLUDELIST_PATHS`.
-
- ```python
- OPENCENSUS = {
- 'TRACE': {
- 'SAMPLER': 'opencensus.trace.samplers.ProbabilitySampler(rate=1)',
- 'EXPORTER': '''opencensus.ext.azure.trace_exporter.AzureExporter(
- connection_string="InstrumentationKey=<your-ikey-here>"
- )''',
- 'EXCLUDELIST_PATHS': ['https://example.com'],  # <-- These sites won't be traced if a request is sent to them.
- }
- }
- ```
-
-You can find a Django sample application in the [Azure Monitor OpenCensus Python samples repository](https://github.com/Azure-Samples/azure-monitor-opencensus-python/tree/master/azure_monitor/django_sample).
-
-## Track Flask applications
-
-1. Download and install `opencensus-ext-flask` from [PyPI](https://pypi.org/project/opencensus-ext-flask/). Instrument your application with the `flask` middleware. Incoming requests sent to your Flask application are tracked.
-
- ```python
-
- from flask import Flask
- from opencensus.ext.azure.trace_exporter import AzureExporter
- from opencensus.ext.flask.flask_middleware import FlaskMiddleware
- from opencensus.trace.samplers import ProbabilitySampler
-
- app = Flask(__name__)
- middleware = FlaskMiddleware(
- app,
- exporter=AzureExporter(connection_string="InstrumentationKey=<your-ikey-here>"),
- sampler=ProbabilitySampler(rate=1.0),
- )
-
- @app.route('/')
- def hello():
- return 'Hello World!'
-
- if __name__ == '__main__':
- app.run(host='localhost', port=8080, threaded=True)
-
- ```
-
-1. You can also configure your `flask` application through `app.config`. For requests from URLs that you don't want to track, add them to `EXCLUDELIST_PATHS`.
-
- ```python
- app.config['OPENCENSUS'] = {
- 'TRACE': {
- 'SAMPLER': 'opencensus.trace.samplers.ProbabilitySampler(rate=1.0)',
- 'EXPORTER': '''opencensus.ext.azure.trace_exporter.AzureExporter(
- connection_string="InstrumentationKey=<your-ikey-here>",
- )''',
- 'EXCLUDELIST_PATHS': ['https://example.com'],  # <-- These sites won't be traced if a request is sent to them.
- }
- }
- ```
-
- > [!NOTE]
- > To run Flask under uWSGI in a Docker environment, you must first add `lazy-apps = true` to the uWSGI configuration file (uwsgi.ini). For more information, see the [issue description](https://github.com/census-instrumentation/opencensus-python/issues/660).
-
-You can find a Flask sample application that tracks requests in the [Azure Monitor OpenCensus Python samples repository](https://github.com/Azure-Samples/azure-monitor-opencensus-python/tree/master/azure_monitor/flask_sample).
-
-## Track Pyramid applications
-
-1. Download and install `opencensus-ext-pyramid` from [PyPI](https://pypi.org/project/opencensus-ext-pyramid/). Instrument your application with the `pyramid` tween. Incoming requests sent to your Pyramid application are tracked.
-
- ```python
- def main(global_config, **settings):
- config = Configurator(settings=settings)
-
- config.add_tween('opencensus.ext.pyramid'
- '.pyramid_middleware.OpenCensusTweenFactory')
- ```
-
-1. You can configure your `pyramid` tween directly in the code. For requests from URLs that you don't want to track, add them to `EXCLUDELIST_PATHS`.
-
- ```python
- settings = {
- 'OPENCENSUS': {
- 'TRACE': {
- 'SAMPLER': 'opencensus.trace.samplers.ProbabilitySampler(rate=1.0)',
- 'EXPORTER': '''opencensus.ext.azure.trace_exporter.AzureExporter(
- connection_string="InstrumentationKey=<your-ikey-here>",
- )''',
- 'EXCLUDELIST_PATHS': ['https://example.com'],  # <-- These sites won't be traced if a request is sent to them.
- }
- }
- }
- config = Configurator(settings=settings)
- ```
-
-## Track FastAPI applications
-
-1. The following dependencies are required:
- - [fastapi](https://pypi.org/project/fastapi/)
- - [uvicorn](https://pypi.org/project/uvicorn/)
-
- In a production setting, we recommend that you deploy [uvicorn with gunicorn](https://www.uvicorn.org/deployment/#gunicorn).
-
-2. Download and install `opencensus-ext-fastapi` from [PyPI](https://pypi.org/project/opencensus-ext-fastapi/).
-
- `pip install opencensus-ext-fastapi`
-
-3. Instrument your application with the `fastapi` middleware.
-
- ```python
- from fastapi import FastAPI
- from opencensus.ext.fastapi.fastapi_middleware import FastAPIMiddleware
-
- app = FastAPI()
- app.add_middleware(FastAPIMiddleware)
-
- @app.get('/')
- def hello():
- return 'Hello World!'
- ```
-
-4. Run your application. Calls made to your FastAPI application should be automatically tracked. Telemetry should be logged directly to Azure Monitor.
-
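For example, a minimal way to run the app locally might look like the following sketch (it assumes the FastAPI code above is saved as `main.py`; the host and port values are placeholders):

```python
# run.py -- starts the FastAPI app with uvicorn so incoming requests can be tracked
import uvicorn

if __name__ == "__main__":
    uvicorn.run("main:app", host="127.0.0.1", port=8000)
```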
-## Next steps
-
-* [Application Map](./app-map.md)
-* [Availability](./availability-overview.md)
-* [Search](./diagnostic-search.md)
-* [Log Analytics query](../logs/log-query-overview.md)
-* [Transaction diagnostics](./transaction-diagnostics.md)
azure-monitor Opencensus Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/opencensus-python.md
- Title: Monitor Python applications with Azure Monitor | Microsoft Docs
-description: This article provides instructions on how to wire up OpenCensus Python with Azure Monitor.
- Previously updated : 08/11/2023----
-# Set up Azure Monitor for your Python application
-
-> [!NOTE]
-> [OpenCensus Python SDK is deprecated](https://opentelemetry.io/blog/2023/sunsetting-opencensus/), but Microsoft supports it until retirement on September 30, 2024. We now recommend the [OpenTelemetry-based Python offering](https://learn.microsoft.com/azure/azure-monitor/app/opentelemetry-enable?tabs=python) and provide [migration guidance](https://learn.microsoft.com/azure/azure-monitor/app/opentelemetry-python-opencensus-migrate?tabs=aspnetcore).
-
-Azure Monitor supports distributed tracing, metric collection, and logging of Python applications.
-
-Microsoft's supported solution for tracking and exporting data for your Python applications is through the [OpenCensus Python SDK](#introducing-opencensus-python-sdk) via the [Azure Monitor exporters](#instrument-with-opencensus-python-sdk-with-azure-monitor-exporters).
-
-Microsoft doesn't recommend using any other telemetry SDKs for Python as a telemetry solution because they're unsupported.
-
-OpenCensus is converging into [OpenTelemetry](https://opentelemetry.io/). We continue to recommend OpenCensus while OpenTelemetry gradually matures.
-
-## Prerequisites
-
-You need an Azure subscription. If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/) before you begin.
--
-## Introducing OpenCensus Python SDK
-
-[OpenCensus](https://opencensus.io) is a set of open-source libraries to allow collection of distributed tracing, metrics, and logging telemetry. By using [Azure Monitor exporters](https://github.com/census-instrumentation/opencensus-python/tree/master/contrib/opencensus-ext-azure), you can send this collected telemetry to Application Insights. This article walks you through the process of setting up OpenCensus and Azure Monitor exporters for Python to send your monitoring data to Azure Monitor.
-
-## Instrument with OpenCensus Python SDK with Azure Monitor exporters
-
-Install the OpenCensus Azure Monitor exporters:
-
-```console
-python -m pip install opencensus-ext-azure
-```
-
-The SDK uses three Azure Monitor exporters to send different types of telemetry to Azure Monitor. They're `trace`, `metrics`, and `logs`. For more information on these telemetry types, see the [Data platform overview](../data-platform.md). Use the following instructions to send these telemetry types via the three exporters.
-
-## Telemetry type mappings
-
-OpenCensus maps the following exporters to the types of telemetry that you see in Azure Monitor.
-
-| Pillar of observability | Telemetry type in Azure Monitor | Explanation |
-|-||--|
-| Logs | Traces, exceptions, customEvents | Log telemetry, exception telemetry, event telemetry |
-| Metrics | customMetrics, performanceCounters | Custom metrics performance counters |
-| Tracing | Requests dependencies | Incoming requests, outgoing requests |
-
-### Logs
-
-1. First, let's generate some local log data.
-
- ```python
-
- import logging
-
- logger = logging.getLogger(__name__)
-
- def main():
- """Generate random log data."""
- for num in range(5):
- logger.warning(f"Log Entry - {num}")
-
- if __name__ == "__main__":
- main()
- ```
-
-1. A log entry is emitted for each number in the range.
-
- ```output
- Log Entry - 0
- Log Entry - 1
- Log Entry - 2
- Log Entry - 3
- Log Entry - 4
- ```
-
-1. We want to send this log data to Azure Monitor. You can specify the connection string in an environment variable, `APPLICATIONINSIGHTS_CONNECTION_STRING`. You may also pass the connection_string directly into the `AzureLogHandler`, but connection strings shouldn't be added to version control.
-
- ```shell
- APPLICATIONINSIGHTS_CONNECTION_STRING=<appinsights-connection-string>
- ```
-
- We recommend using the connection string to instantiate the exporters that are used to send telemetry to Application Insights. Modify your code from the previous step based on the following code sample:
-
- ```python
- import logging
- from opencensus.ext.azure.log_exporter import AzureLogHandler
-
- logger = logging.getLogger(__name__)
- logger.addHandler(AzureLogHandler())
-
- # Alternatively manually pass in the connection_string
- # logger.addHandler(AzureLogHandler(connection_string=<appinsights-connection-string>))
-
- """Generate random log data."""
- for num in range(5):
- logger.warning(f"Log Entry - {num}")
- ```
-
-1. The exporter sends log data to Azure Monitor. You can find the data under `traces`.
-
- In this context, `traces` isn't the same as `tracing`. Here, `traces` refers to the type of telemetry that you see in Azure Monitor when you utilize `AzureLogHandler`. But `tracing` refers to a concept in OpenCensus and relates to [distributed tracing](./distributed-tracing.md).
-
- > [!NOTE]
- > The root logger is configured with the level of `warning`. That means any logs that you send that have less severity are ignored, and in turn, won't be sent to Azure Monitor. For more information, see [Logging documentation](https://docs.python.org/3/library/logging.html#logging.Logger.setLevel).
-
-1. You can also add custom properties to your log messages in the `extra` keyword argument by using the `custom_dimensions` field. These properties appear as key-value pairs in `customDimensions` in Azure Monitor.
- > [!NOTE]
- > For this feature to work, you need to pass a dictionary to the `custom_dimensions` field. If you pass arguments of any other type, the logger ignores them.
-
- ```python
- import logging
-
- from opencensus.ext.azure.log_exporter import AzureLogHandler
-
- logger = logging.getLogger(__name__)
- logger.addHandler(AzureLogHandler())
- # Alternatively manually pass in the connection_string
- # logger.addHandler(AzureLogHandler(connection_string=<appinsights-connection-string>))
-
- properties = {'custom_dimensions': {'key_1': 'value_1', 'key_2': 'value_2'}}
-
- # Use properties in logging statements
- logger.warning('action', extra=properties)
- ```
-
-> [!NOTE]
-> As part of using Application Insights instrumentation, we collect and send diagnostic data to Microsoft. This data helps us run and improve Application Insights. You have the option to disable non-essential data collection. To learn more, see [Statsbeat in Application Insights](./statsbeat.md).
-
-#### Configure logging for Django applications
-
-You can configure logging explicitly in your application code like the preceding for your Django applications, or you can specify it in Django's logging configuration. This code can go into whatever file you use for Django site's settings configuration, typically `settings.py`.
-
-For information on how to configure Django settings, see [Django settings](https://docs.djangoproject.com/en/4.0/topics/settings/). For more information on how to configure logging, see [Django logging](https://docs.djangoproject.com/en/4.0/topics/logging/).
-
-```python
-LOGGING = {
- "handlers": {
- "azure": {
- "level": "DEBUG",
- "class": "opencensus.ext.azure.log_exporter.AzureLogHandler",
- "connection_string": "<appinsights-connection-string>",
- },
- "console": {
- "level": "DEBUG",
- "class": "logging.StreamHandler",
- "stream": sys.stdout,
- },
- },
- "loggers": {
- "logger_name": {"handlers": ["azure", "console"]},
- },
-}
-```
-
-Be sure you use the logger with the same name as the one specified in your configuration.
-
-```python
-# views.py
-
-import logging
-from django.shortcuts import render
-
-logger = logging.getLogger("logger_name")
-logger.warning("this will be tracked")
-
-```
-
-#### Send exceptions
-
-OpenCensus Python doesn't automatically track and send `exception` telemetry. It's sent through `AzureLogHandler` by using exceptions through the Python logging library. You can add custom properties like you do with normal logging.
-
-```python
-import logging
-
-from opencensus.ext.azure.log_exporter import AzureLogHandler
-
-logger = logging.getLogger(__name__)
-logger.addHandler(AzureLogHandler())
-# Alternatively, manually pass in the connection_string
-# logger.addHandler(AzureLogHandler(connection_string=<appinsights-connection-string>))
-
-properties = {'custom_dimensions': {'key_1': 'value_1', 'key_2': 'value_2'}}
-
-# Use properties in exception logs
-try:
- result = 1 / 0 # generate a ZeroDivisionError
-except Exception:
- logger.exception('Captured an exception.', extra=properties)
-```
-
-Because you must log exceptions explicitly, it's up to you how to log unhandled exceptions. OpenCensus doesn't place restrictions on how to do this logging, but you must explicitly log exception telemetry.
-
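As one possible pattern (a sketch only, not the only option), you can install a global exception hook that routes otherwise-unhandled exceptions through the same logger:

```python
# A sketch: forward unhandled exceptions to Azure Monitor through the logging handler.
import logging
import sys

from opencensus.ext.azure.log_exporter import AzureLogHandler

logger = logging.getLogger(__name__)
logger.addHandler(AzureLogHandler())  # uses APPLICATIONINSIGHTS_CONNECTION_STRING

def log_unhandled(exc_type, exc_value, exc_traceback):
    # Let Ctrl+C behave normally.
    if issubclass(exc_type, KeyboardInterrupt):
        sys.__excepthook__(exc_type, exc_value, exc_traceback)
        return
    logger.error("Unhandled exception", exc_info=(exc_type, exc_value, exc_traceback))

sys.excepthook = log_unhandled
```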
-#### Send events
-
-You can send `customEvent` telemetry in exactly the same way that you send `trace` telemetry, except by using `AzureEventHandler` instead.
-
-```python
-import logging
-from opencensus.ext.azure.log_exporter import AzureEventHandler
-
-logger = logging.getLogger(__name__)
-logger.addHandler(AzureEventHandler())
-# Alternatively manually pass in the connection_string
-# logger.addHandler(AzureEventHandler(connection_string=<appinsights-connection-string>))
-
-logger.setLevel(logging.INFO)
-logger.info('Hello, World!')
-```
-
-#### Sampling
-
-For information on sampling in OpenCensus, see [Sampling in OpenCensus](sampling.md#configuring-fixed-rate-sampling-for-opencensus-python-applications).
-
-#### Log correlation
-
-For information on how to enrich your logs with trace context data, see OpenCensus Python [logs integration](distributed-tracing-telemetry-correlation.md#log-correlation).
-
-#### Modify telemetry
-
-For information on how to modify tracked telemetry before it's sent to Azure Monitor, see OpenCensus Python [telemetry processors](./api-filtering-sampling.md#opencensus-python-telemetry-processors).
-
-### Metrics
-
-OpenCensus.stats supports four aggregation methods but provides partial support for Azure Monitor:
-
-- **Count**: The count of the number of measurement points. The value is cumulative, can only increase, and resets to 0 on restart.
-- **Sum**: A sum of the measurement points. The value is cumulative, can only increase, and resets to 0 on restart.
-- **LastValue**: Keeps the last recorded value and drops everything else.
-- **Distribution**: The Azure exporter doesn't support the histogram distribution of the measurement points.
-
-### Count aggregation example
-
-1. First, let's generate some local metric data. We create a metric to track the number of times the user selects the **Enter** key.
-
- ```python
-
- from datetime import datetime
- from opencensus.stats import aggregation as aggregation_module
- from opencensus.stats import measure as measure_module
- from opencensus.stats import stats as stats_module
- from opencensus.stats import view as view_module
- from opencensus.tags import tag_map as tag_map_module
-
- stats = stats_module.stats
- view_manager = stats.view_manager
- stats_recorder = stats.stats_recorder
-
- prompt_measure = measure_module.MeasureInt("prompts",
- "number of prompts",
- "prompts")
- prompt_view = view_module.View("prompt view",
- "number of prompts",
- [],
- prompt_measure,
- aggregation_module.CountAggregation())
- view_manager.register_view(prompt_view)
- mmap = stats_recorder.new_measurement_map()
- tmap = tag_map_module.TagMap()
-
- def main():
- for _ in range(4):
- mmap.measure_int_put(prompt_measure, 1)
- mmap.record(tmap)
- metrics = list(mmap.measure_to_view_map.get_metrics(datetime.utcnow()))
- print(metrics[0].time_series[0].points[0])
-
- if __name__ == "__main__":
- main()
- ```
-
-1. The metric is recorded multiple times. With each entry, the value is incremented and the metric information appears in the console. The information includes the current value and the current time stamp when the metric was updated.
-
- ```output
- Point(value=ValueLong(5), timestamp=2019-10-09 20:58:04.930426)
- Point(value=ValueLong(6), timestamp=2019-10-09 20:58:05.170167)
- Point(value=ValueLong(7), timestamp=2019-10-09 20:58:05.438614)
- Point(value=ValueLong(7), timestamp=2019-10-09 20:58:05.834216)
- ```
-
-1. Entering values is helpful for demonstration purposes, but we want to emit the metric data to Azure Monitor. Pass your connection string directly into the exporter. Or you can specify it in an environment variable, `APPLICATIONINSIGHTS_CONNECTION_STRING`. We recommend using the connection string to instantiate the exporters that are used to send telemetry to Application Insights. Modify your code from the previous step based on the following code sample:
-
- ```python
- from datetime import datetime
- from opencensus.ext.azure import metrics_exporter
- from opencensus.stats import aggregation as aggregation_module
- from opencensus.stats import measure as measure_module
- from opencensus.stats import stats as stats_module
- from opencensus.stats import view as view_module
- from opencensus.tags import tag_map as tag_map_module
-
- stats = stats_module.stats
- view_manager = stats.view_manager
- stats_recorder = stats.stats_recorder
-
- prompt_measure = measure_module.MeasureInt("prompts",
- "number of prompts",
- "prompts")
- prompt_view = view_module.View("prompt view",
- "number of prompts",
- [],
- prompt_measure,
- aggregation_module.CountAggregation())
- view_manager.register_view(prompt_view)
- mmap = stats_recorder.new_measurement_map()
- tmap = tag_map_module.TagMap()
-
- exporter = metrics_exporter.new_metrics_exporter()
- # Alternatively manually pass in the connection_string
- # exporter = metrics_exporter.new_metrics_exporter(connection_string='<appinsights-connection-string>')
-
- view_manager.register_exporter(exporter)
-
- def main():
- for _ in range(10):
- input("Press enter.")
- mmap.measure_int_put(prompt_measure, 1)
- mmap.record(tmap)
- metrics = list(mmap.measure_to_view_map.get_metrics(datetime.utcnow()))
- print(metrics[0].time_series[0].points[0])
-
- if __name__ == "__main__":
- main()
- ```
-
-1. The exporter sends metric data to Azure Monitor at a fixed interval. You must set this value to 60 seconds because the Application Insights back end assumes that metric points are aggregated over a 60-second time interval. We're tracking a single metric, so this metric data, with whatever value and time stamp it contains, is sent every interval. The data is cumulative, can only increase, and resets to 0 on restart.
-
- You can find the data under `customMetrics`, but the `customMetrics` properties `valueCount`, `valueSum`, `valueMin`, `valueMax`, and `valueStdDev` aren't effectively used.
-
-### Set custom dimensions in metrics
-
-The OpenCensus Python SDK allows you to add custom dimensions to your metrics telemetry by using `tags`, which are like a dictionary of key-value pairs.
-
-1. Insert the tags that you want to use into the tag map. The tag map acts like a sort of "pool" of all available tags you can use.
-
- ```python
- ...
- tmap = tag_map_module.TagMap()
- tmap.insert("url", "http://example.com")
- ...
- ```
-
-1. For a specific `View`, specify the tags you want to use when you're recording metrics with that view via the tag key.
-
- ```python
- ...
- prompt_view = view_module.View("prompt view",
- "number of prompts",
- ["url"], # <-- A sequence of tag keys used to specify which tag key/value to use from the tag map
- prompt_measure,
- aggregation_module.CountAggregation())
- ...
- ```
-
-1. Be sure to use the tag map when you're recording in the measurement map. The tag keys that are specified in the `View` must be found in the tag map used to record.
-
- ```python
- ...
- mmap = stats_recorder.new_measurement_map()
- mmap.measure_int_put(prompt_measure, 1)
- mmap.record(tmap) # <-- pass the tag map in here
- ...
- ```
-
-1. Under the `customMetrics` table, all metric records emitted by using `prompt_view` have custom dimensions `{"url":"http://example.com"}`.
-
-1. To produce tags with different values by using the same keys, create new tag maps for them.
-
- ```python
- ...
- tmap = tag_map_module.TagMap()
- tmap2 = tag_map_module.TagMap()
- tmap.insert("url", "http://example.com")
- tmap2.insert("url", "https://www.wikipedia.org/wiki/")
- ...
- ```
-
-#### Performance counters
-
-By default, the metrics exporter sends a set of performance counters to Azure Monitor. You can disable this capability by setting the `enable_standard_metrics` flag to `False` in the constructor of the metrics exporter.
-
-```python
-...
-exporter = metrics_exporter.new_metrics_exporter(
- enable_standard_metrics=False,
- )
-...
-```
-
-The following performance counters are currently sent:
-
-- Available Memory (bytes)
-- CPU Processor Time (percentage)
-- Incoming Request Rate (per second)
-- Incoming Request Average Execution Time (milliseconds)
-- Process CPU Usage (percentage)
-- Process Private Bytes (bytes)
-
-You should be able to see these metrics in `performanceCounters`. For more information, see [Performance counters](./performance-counters.md).
-
-#### Modify telemetry
-
-For information on how to modify tracked telemetry before it's sent to Azure Monitor, see OpenCensus Python [telemetry processors](./api-filtering-sampling.md#opencensus-python-telemetry-processors).
-
-### Tracing
-
-> [!NOTE]
-> In OpenCensus, `tracing` refers to [distributed tracing](./distributed-tracing.md). The `AzureExporter` parameter sends `requests` and `dependency` telemetry to Azure Monitor.
-
-1. First, let's generate some trace data locally. In Python IDLE, or your editor of choice, enter the following code:
-
- ```python
- from opencensus.trace.samplers import ProbabilitySampler
- from opencensus.trace.tracer import Tracer
-
- tracer = Tracer(sampler=ProbabilitySampler(1.0))
-
- def main():
- with tracer.span(name="test") as span:
- for value in range(5):
- print(value)
--
- if __name__ == "__main__":
- main()
- ```
-
-1. With each entry, the value is printed to the shell. The OpenCensus Python module generates a corresponding piece of `SpanData`. The OpenCensus project defines a [trace as a tree of spans](https://opencensus.io/core-concepts/tracing/).
-
- ```output
- 0
- [SpanData(name='test', context=SpanContext(trace_id=8aa41bc469f1a705aed1bdb20c342603, span_id=None, trace_options=TraceOptions(enabled=True), tracestate=None), span_id='15ac5123ac1f6847', parent_span_id=None, attributes=BoundedDict({}, maxlen=32), start_time='2019-06-27T18:21:22.805429Z', end_time='2019-06-27T18:21:44.933405Z', child_span_count=0, stack_trace=None, annotations=BoundedList([], maxlen=32), message_events=BoundedList([], maxlen=128), links=BoundedList([], maxlen=32), status=None, same_process_as_parent_span=None, span_kind=0)]
- 1
- [SpanData(name='test', context=SpanContext(trace_id=8aa41bc469f1a705aed1bdb20c342603, span_id=None, trace_options=TraceOptions(enabled=True), tracestate=None), span_id='2e512f846ba342de', parent_span_id=None, attributes=BoundedDict({}, maxlen=32), start_time='2019-06-27T18:21:44.933405Z', end_time='2019-06-27T18:21:46.156787Z', child_span_count=0, stack_trace=None, annotations=BoundedList([], maxlen=32), message_events=BoundedList([], maxlen=128), links=BoundedList([], maxlen=32), status=None, same_process_as_parent_span=None, span_kind=0)]
- 2
- [SpanData(name='test', context=SpanContext(trace_id=8aa41bc469f1a705aed1bdb20c342603, span_id=None, trace_options=TraceOptions(enabled=True), tracestate=None), span_id='f3f9f9ee6db4740a', parent_span_id=None, attributes=BoundedDict({}, maxlen=32), start_time='2019-06-27T18:21:46.157732Z', end_time='2019-06-27T18:21:47.269583Z', child_span_count=0, stack_trace=None, annotations=BoundedList([], maxlen=32), message_events=BoundedList([], maxlen=128), links=BoundedList([], maxlen=32), status=None, same_process_as_parent_span=None, span_kind=0)]
- ```
-
-1. Viewing the output is helpful for demonstration purposes, but we want to emit `SpanData` to Azure Monitor. Pass your connection string directly into the exporter. Or you can specify it in an environment variable, `APPLICATIONINSIGHTS_CONNECTION_STRING`. We recommend using the connection string to instantiate the exporters that are used to send telemetry to Application Insights. Modify your code from the previous step based on the following code sample:
-
- ```python
- from opencensus.ext.azure.trace_exporter import AzureExporter
- from opencensus.trace.samplers import ProbabilitySampler
- from opencensus.trace.tracer import Tracer
-
- tracer = Tracer(
- exporter=AzureExporter(),
- sampler=ProbabilitySampler(1.0),
- )
- # Alternatively manually pass in the connection_string
- # exporter = AzureExporter(
- # connection_string='<appinsights-connection-string>',
- # ...
- # )
-
- def main():
- with tracer.span(name="test") as span:
- for value in range(5):
- print(value)
-
- if __name__ == "__main__":
- main()
- ```
-
-1. Now when you run the Python script, only the value is being printed in the shell. The created `SpanData` is sent to Azure Monitor. You can find the emitted span data under `dependencies`.
-
- For more information about outgoing requests, see OpenCensus Python [dependencies](./opencensus-python-dependency.md). For more information on incoming requests, see OpenCensus Python [requests](./opencensus-python-request.md).
-
-#### Sampling
-
-For information on sampling in OpenCensus, see [Sampling in OpenCensus](sampling.md#configuring-fixed-rate-sampling-for-opencensus-python-applications).
-
-#### Trace correlation
-
-For more information on telemetry correlation in your trace data, see OpenCensus Python [telemetry correlation](distributed-tracing-telemetry-correlation.md#telemetry-correlation-in-opencensus-python).
-
-#### Modify telemetry
-
-For more information on how to modify tracked telemetry before it's sent to Azure Monitor, see OpenCensus Python [telemetry processors](./api-filtering-sampling.md#opencensus-python-telemetry-processors).
-
-## Configure Azure Monitor exporters
-
-As shown, there are three different Azure Monitor exporters that support OpenCensus. Each one sends different types of telemetry to Azure Monitor. For the types of telemetry that each exporter sends, see the telemetry type mappings table earlier in this article.
-
-Each exporter accepts the same arguments for configuration, passed through the constructors. You can see information about each one here:
-
-|Configuration argument|Description|
-|:|:|
-`connection_string`| The connection string used to connect to your Azure Monitor resource. Takes priority over `instrumentation_key`.|
-`credential`| Credential class used by Azure Active Directory authentication. See the "Authentication" section that follows.|
-`enable_standard_metrics`| Used for `AzureMetricsExporter`. Signals the exporter to send [performance counter](../essentials/app-insights-metrics.md#performance-counters) metrics automatically to Azure Monitor. Defaults to `True`.|
-`export_interval`| Used to specify the frequency in seconds of exporting. Defaults to `15s`. For metrics, you MUST set it to 60 seconds or else your metric aggregations don't make sense in the metrics explorer.|
-`grace_period`| Used to specify the timeout for shutdown of exporters in seconds. Defaults to `5s`.|
-`instrumentation_key`| The instrumentation key used to connect to your Azure Monitor resource.|
-`logging_sampling_rate`| Used for `AzureLogHandler` and `AzureEventHandler`. Provides a sampling rate [0,1.0] for exporting logs/events. Defaults to `1.0`.|
-`max_batch_size`| Specifies the maximum size of telemetry that's exported at once.|
-`proxies`| Specifies a sequence of proxies to use for sending data to Azure Monitor. For more information, see [proxies](https://requests.readthedocs.io/en/latest/user/advanced/#proxies).|
-`storage_path`| A path to where the local storage folder exists (unsent telemetry). As of `opencensus-ext-azure` v1.0.3, the default path is the OS temp directory + `opencensus-python` + `your-ikey`. Prior to v1.0.3, the default path is `$USER` + `.opencensus` + `.azure` + `python-file-name`.|
-`timeout`| Specifies the networking timeout to send telemetry to the ingestion service in seconds. Defaults to `10s`.|
-
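For illustration, the following sketch shows how a few of these arguments might be passed to the exporter constructors (the connection string and values are placeholders):

```python
from opencensus.ext.azure import metrics_exporter
from opencensus.ext.azure.log_exporter import AzureLogHandler
from opencensus.ext.azure.trace_exporter import AzureExporter

# Trace exporter with an explicit networking timeout (seconds).
trace_exporter = AzureExporter(
    connection_string="InstrumentationKey=<your-ikey-here>",
    timeout=10.0,
)

# Metrics exporter; export_interval must be 60 seconds so aggregations make sense.
metrics = metrics_exporter.new_metrics_exporter(
    connection_string="InstrumentationKey=<your-ikey-here>",
    export_interval=60.0,
)

# Log handler with a sampling rate for exported logs/events.
log_handler = AzureLogHandler(
    connection_string="InstrumentationKey=<your-ikey-here>",
    logging_sampling_rate=1.0,
)
```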
-## Integrate with Azure Functions
-
-To capture custom telemetry in Azure Functions environments, use the OpenCensus Python Azure Functions [extension](https://github.com/census-ecosystem/opencensus-python-extensions-azure/tree/main/extensions/functions#opencensus-python-azure-functions-extension). For more information, see the [Azure Functions Python developer guide](../../azure-functions/functions-reference-python.md#log-custom-telemetry).
-
-## Authentication (preview)
-
-> [!NOTE]
-> The authentication feature is available starting from `opencensus-ext-azure` v1.1b0.
-
-Each of the Azure Monitor exporters supports configuration of securely sending telemetry payloads via OAuth authentication with Azure Active Directory. For more information, see the [Authentication documentation](./azure-ad-authentication.md).
-
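For example, the following sketch passes a credential through the `credential` argument described in the preceding table (it assumes the `azure-identity` package is installed and a managed identity is available):

```python
from azure.identity import ManagedIdentityCredential
from opencensus.ext.azure.trace_exporter import AzureExporter

exporter = AzureExporter(
    connection_string="InstrumentationKey=<your-ikey-here>",
    credential=ManagedIdentityCredential(),  # Azure AD authentication for ingestion
)
```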
-## View your data with queries
-
-You can view the telemetry data that was sent from your application through the **Logs (Analytics)** tab.
-
-![Screenshot of the Overview pane with the Logs (Analytics) tab selected.](./media/opencensus-python/0010-logs-query.png)
-
-In the list under **Active**:
-
-- For telemetry sent with the Azure Monitor trace exporter, incoming requests appear under `requests`. Outgoing or in-process requests appear under `dependencies`.
-- For telemetry sent with the Azure Monitor metrics exporter, sent metrics appear under `customMetrics`.
-- For telemetry sent with the Azure Monitor logs exporter, logs appear under `traces`. Exceptions appear under `exceptions`.
-
-For more information about how to use queries and logs, see [Logs in Azure Monitor](../logs/data-platform-logs.md).
-
-## Learn more about OpenCensus for Python
-
-* [OpenCensus Python on GitHub](https://github.com/census-instrumentation/opencensus-python)
-* [Customization](https://github.com/census-instrumentation/opencensus-python/blob/master/README.rst#customization)
-* [Azure Monitor exporters on GitHub](https://github.com/census-instrumentation/opencensus-python/tree/master/contrib/opencensus-ext-azure)
-* [OpenCensus integrations](https://github.com/census-instrumentation/opencensus-python#extensions)
-* [Azure Monitor sample applications](https://github.com/Azure-Samples/azure-monitor-opencensus-python/tree/master/azure_monitor)
-
-## Troubleshooting
--
-## Release Notes
-
-For the latest release notes, see [Python Azure Monitor Exporter](https://github.com/census-instrumentation/opencensus-python/blob/master/contrib/opencensus-ext-azure/CHANGELOG.md)
-
-Our [Service Updates](https://azure.microsoft.com/updates/?service=application-insights) also summarize major Application Insights improvements.
-
-## Next steps
-
-* To enable usage experiences, [enable web or browser user monitoring](javascript.md)
-* [Track incoming requests](./opencensus-python-dependency.md).
-* [Track outgoing requests](./opencensus-python-request.md).
-* Check out the [Application map](./app-map.md).
-* Learn how to do [End-to-end performance monitoring](../app/tutorial-performance.md).
-
-### Alerts
-
-* [Availability overview](./availability-overview.md): Create tests to make sure your site is visible on the web.
-* [Smart diagnostics](../alerts/proactive-diagnostics.md): These tests run automatically, so you don't have to do anything to set them up. They tell you if your app has an unusual rate of failed requests.
-* [Metric alerts](../alerts/alerts-log.md): Set alerts to warn you if a metric crosses a threshold. You can set them on custom metrics that you code into your app.
azure-monitor Opentelemetry Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/opentelemetry-configuration.md
const credential = new ManagedIdentityCredential();
// Create a new AzureMonitorOpenTelemetryOptions object and set the credential property to the credential object. const options: AzureMonitorOpenTelemetryOptions = {
- credential: credential
+ azureMonitorExporterOptions: {
+ credential: credential
+ }
}; // Enable Azure Monitor integration using the useAzureMonitor function and the AzureMonitorOpenTelemetryOptions object.
azure-monitor Opentelemetry Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/opentelemetry-overview.md
Select your enablement approach:
- [ASP.NET](./asp-net.md)
- [ASP.NET Core](./asp-net-core.md)
- [Node.js](./nodejs.md)
- - [Python](./opencensus-python.md)
+ - [Python](/previous-versions/azure/azure-monitor/app/opencensus-python)
- [JavaScript: Web](./javascript.md)
- [Azure Monitor OpenTelemetry Distro](opentelemetry-enable.md)
azure-monitor Pre Aggregated Metrics Log Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/pre-aggregated-metrics-log-metrics.md
The newer SDKs ([Application Insights 2.7](https://www.nuget.org/packages/Micros
For the SDKs that don't implement pre-aggregation (that is, older versions of Application Insights SDKs or for browser instrumentation), the Application Insights back end still populates the new metrics by aggregating the events received by the Application Insights event collection endpoint. Although you don't benefit from the reduced volume of data transmitted over the wire, you can still use the pre-aggregated metrics and experience better performance and support of the near real time dimensional alerting with SDKs that don't pre-aggregate metrics during collection.
-The collection endpoint pre-aggregates events before ingestion sampling. For this reason, [ingestion sampling](./sampling.md) will never affect the accuracy of pre-aggregated metrics, regardless of the SDK version you use with your application.
+The collection endpoint pre-aggregates events before ingestion sampling. For this reason, [ingestion sampling](./sampling.md) never affects the accuracy of pre-aggregated metrics, regardless of the SDK version you use with your application.
### SDK supported pre-aggregated metrics table
The collection endpoint pre-aggregates events before ingestion sampling. For thi
| .NET Core and .NET Framework | Supported (V2.13.1+)| Supported via [TrackMetric](api-custom-events-metrics.md#trackmetric)| Supported (V2.7.2+) via [GetMetric](get-metric.md) |
| Java | Not supported | Supported via [TrackMetric](api-custom-events-metrics.md#trackmetric)| Not supported |
| Node.js | Supported (V2.0.0+) | Supported via [TrackMetric](api-custom-events-metrics.md#trackmetric)| Not supported |
-| Python | Not supported | Supported | Partially supported via [OpenCensus.stats](opencensus-python.md#metrics) |
+| Python | Not supported | Supported | Partially supported via [OpenCensus.stats](/previous-versions/azure/azure-monitor/app/opencensus-python#metrics) |
> [!NOTE]
-> The metrics implementation for Python by using OpenCensus.stats is different from GetMetric. For more information, see the [Python documentation on metrics](./opencensus-python.md#metrics).
+> The metrics implementation for Python by using OpenCensus.stats is different from GetMetric. For more information, see the [Python documentation on metrics](/previous-versions/azure/azure-monitor/app/opencensus-python#metrics).
### Codeless supported pre-aggregated metrics table
The collection endpoint pre-aggregates events before ingestion sampling. For thi
## Use pre-aggregation with Application Insights custom metrics
-You can use pre-aggregation with custom metrics. The two main benefits are the ability to configure and alert on a dimension of a custom metric and reducing the volume of data sent from the SDK to the Application Insights collection endpoint.
+You can use pre-aggregation with custom metrics. The two main benefits are:
-There are several [ways of sending custom metrics from the Application Insights SDK](./api-custom-events-metrics.md). If your version of the SDK offers [GetMetric and TrackValue](./api-custom-events-metrics.md#getmetric), these methods are the preferred way of sending custom metrics. In this case, pre-aggregation happens inside the SDK. This approach reduces the volume of data stored in Azure and also the volume of data transmitted from the SDK to Application Insights. Otherwise, use the [trackMetric](./api-custom-events-metrics.md#trackmetric) method, which will pre-aggregate metric events during data ingestion.
+- The ability to configure and alert on a dimension of a custom metric
+- The ability to reduce the volume of data sent from the SDK to the Application Insights collection endpoint
+
+There are several [ways of sending custom metrics from the Application Insights SDK](./api-custom-events-metrics.md). If your version of the SDK offers [GetMetric and TrackValue](./api-custom-events-metrics.md#getmetric), these methods are the preferred way of sending custom metrics. In this case, pre-aggregation happens inside the SDK. This approach reduces the volume of data stored in Azure and also the volume of data transmitted from the SDK to Application Insights. Otherwise, use the [trackMetric](./api-custom-events-metrics.md#trackmetric) method, which pre-aggregates metric events during data ingestion.
## Custom metrics dimensions and pre-aggregation
azure-monitor Sampling https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/sampling.md
By default no sampling is enabled in the Java autoinstrumentation and SDK. Curre
### Configuring fixed-rate sampling for OpenCensus Python applications
-Instrument your application with the latest [OpenCensus Azure Monitor exporters](./opencensus-python.md).
+Instrument your application with the latest [OpenCensus Azure Monitor exporters](/previous-versions/azure/azure-monitor/app/opencensus-python).
> [!NOTE]
> Fixed-rate sampling is not available for the metrics exporter. This means custom metrics are the only types of telemetry where sampling can NOT be configured. The metrics exporter will send all telemetry that it tracks.
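The following sketch shows what fixed-rate sampling configuration might look like (the rates and connection string are illustrative): traces use `ProbabilitySampler`, and logs use the handler's `logging_sampling_rate`.

```python
import logging

from opencensus.ext.azure.log_exporter import AzureLogHandler
from opencensus.ext.azure.trace_exporter import AzureExporter
from opencensus.trace.samplers import ProbabilitySampler
from opencensus.trace.tracer import Tracer

# Sample 25% of traces (requests and dependencies).
tracer = Tracer(
    exporter=AzureExporter(connection_string="InstrumentationKey=<your-ikey-here>"),
    sampler=ProbabilitySampler(0.25),
)

# Export 50% of log telemetry.
logger = logging.getLogger(__name__)
logger.addHandler(AzureLogHandler(
    connection_string="InstrumentationKey=<your-ikey-here>",
    logging_sampling_rate=0.5,
))
```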
azure-monitor Sdk Connection String https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/sdk-connection-string.md
Get started at development time with:
* [ASP.NET Core](./asp-net-core.md) * [Java](./opentelemetry-enable.md?tabs=java) * [Node.js](./nodejs.md)
-* [Python](./opencensus-python.md)
+* [Python](/previous-versions/azure/azure-monitor/app/opencensus-python)
azure-monitor Best Practices Data Collection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/best-practices-data-collection.md
To enable monitoring for an application, you must decide whether you'll use code
- [.NET console applications](app/console.md) - [Java](app/opentelemetry-enable.md?tabs=java) - [Node.js](app/nodejs.md)-- [Python](app/opencensus-python.md)
+- [Python](/previous-versions/azure/azure-monitor/app/opencensus-python)
- [Other platforms](app/app-insights-overview.md#supported-languages) ### Configure availability testing
azure-monitor Container Insights Logging V2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-logging-v2.md
This applies to the scenario where you have already enabled container insights f
>* The configuration change can take a few minutes to complete before it takes effect. All ama-logs pods in the cluster will restart. >* The restart is a rolling restart for all ama-logs pods. It won't restart all of them at the same time.
-## Multi-line logging in Container Insights (preview)
+## Multi-line logging in Container Insights
Azure Monitor container insights now supports multiline logging. With this feature enabled, previously split container logs are stitched together and sent as single entries to the ContainerLogV2 table. Customers can see container log lines up to 64 KB (up from the existing 16 KB limit). If the stitched log line is larger than 64 KB, it gets truncated due to Log Analytics limits.
-Additionally, the feature also adds support for .NET and Go stack traces, which appear as single entries instead of being split into multiple entries in ContainerLogV2 table.
+The feature also adds support for .NET, Go, Python, and Java stack traces, which appear as single entries instead of being split into multiple entries in the ContainerLogV2 table.
+
+The following two screenshots demonstrate multi-line logging at work for a Go exception stack trace:
+
+Multi-line logging disabled scenario:
+
+![Screenshot that shows Multi-line logging disabled.](./media/container-insights-logging-v2/multi-line-disabled-go.png)
+
+Multi-line logging enabled scenario:
+
+[ ![Screenshot that shows Multi-line enabled.](./media/container-insights-logging-v2/multi-line-enabled-go.png) ](./media/container-insights-logging-v2/multi-line-enabled-go.png#lightbox)
+
+Similarly, the following screenshots show multi-line logging enabled for Java and Python stack traces:
+
+For Java:
+
+[ ![Screenshot that shows Multi-line enabled for Java](./media/container-insights-logging-v2/multi-line-enabled-java.png) ](./media/container-insights-logging-v2/multi-line-enabled-java.png#lightbox)
+
+For Python:
+
+[ ![Screenshot that shows Multi-line enabled for Python](./media/container-insights-logging-v2/multi-line-enabled-python.png) ](./media/container-insights-logging-v2/multi-line-enabled-python.png#lightbox)
### Prerequisites
Multi-line logging is a preview feature and can be enabled by setting **enabled*
[log_collection_settings.enable_multiline_logs] # fluent-bit based multiline log collection for go (stacktrace), dotnet (stacktrace) # if enabled will also stitch together container logs split by docker/cri due to size limits(16KB per log line)
-enabled = "true"
+ enabled = "true"
``` ## Next steps
azure-monitor Logs Ingestion Api Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/logs-ingestion-api-overview.md
Your application sends data to a [data collection endpoint (DCE)](../essentials/
You can modify the target table and workspace by modifying the DCR without any change to the API call or source data. > [!NOTE] > To migrate solutions from the [Data Collector API](data-collector-api.md), see [Migrate from Data Collector API and custom fields-enabled tables to DCR-based custom logs](custom-logs-migrate.md).
azure-monitor Usage Estimated Costs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/usage-estimated-costs.md
To gain more understanding of your usage and costs, create exports using Cost An
These exports are in CSV format and contain a list of daily usage (billed quantity and cost) by resource, billing meter, and a few more fields such as [AdditionalInfo](../cost-management-billing/automate/understand-usage-details-fields.md#list-of-fields-and-descriptions). You can use Microsoft Excel to do rich analyses of your usage that aren't possible in the cost analysis experiences in the portal.
+The usage export includes both the cost of your usage and the number of units used. Consequently, you can use this export to see the benefits you receive from various offers, such as the [Defender for Servers data allowance](logs/cost-logs.md#workspaces-with-microsoft-defender-for-cloud) and the [Microsoft Sentinel benefit for Microsoft 365 E5, A5, F5, and G5 customers](https://azure.microsoft.com/offers/sentinel-microsoft-365-offer/).
+ For instance, usage from Log Analytics can be found by first filtering on the **Meter Category** column to show 1. **Log Analytics** (for Pay-as-you-go data ingestion and interactive Data Retention),
To investigate your Application Insights usage more deeply, open the **Metrics**
## View data allocation benefits
-To view data allocation benefits from sources such as [Microsoft Defender for Servers](https://azure.microsoft.com/pricing/details/defender-for-cloud/), [Microsoft Sentinel benefit for Microsoft 365 E5, A5, F5, and G5 customers](https://azure.microsoft.com/offers/sentinel-microsoft-365-offer/), or the [Sentinel Free Trial](https://azure.microsoft.com/pricing/details/microsoft-sentinel/), you need to export your usage details.
+To view data allocation benefits from sources such as [Microsoft Defender for Servers](https://azure.microsoft.com/pricing/details/defender-for-cloud/), [Microsoft Sentinel benefit for Microsoft 365 E5, A5, F5, and G5 customers](https://azure.microsoft.com/offers/sentinel-microsoft-365-offer/), or the [Sentinel Free Trial](https://azure.microsoft.com/pricing/details/microsoft-sentinel/), you need to export your usage details as described above.
Open the exported usage spreadsheet and filter the **Instance ID** column to your workspace. (To select all your workspaces in the spreadsheet, filter the **Instance ID** column to **contains /workspaces/**.) Next, filter the **ResourceRate** column to show only rows where this rate is equal to zero. Now you'll see the data allocations from these various sources.
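
If you prefer to script the same filtering instead of using Excel, here's a minimal sketch using pandas; the file name is a placeholder, and the column names are assumed to match the export headings mentioned above:

```python
import pandas as pd

# Placeholder file name; point this at your own exported usage CSV.
usage = pd.read_csv("usage-export.csv")

# Rows for Log Analytics workspaces that are billed at a zero rate: these are
# the data allocation benefits described above.
workspace_rows = usage[usage["Instance ID"].str.contains("/workspaces/", case=False, na=False)]
benefit_rows = workspace_rows[workspace_rows["ResourceRate"] == 0]

print(benefit_rows[["Instance ID", "Meter Category"]].head())
```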
Also, if you move a subscription to the new Azure monitoring pricing model in Ap
- For best practices on how to configure and manage Azure Monitor to minimize your charges, see [Azure Monitor best practices - Cost management](best-practices-cost.md). +
azure-monitor Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/whats-new.md
Agents|[Azure Monitor Agent overview](agents/agents-overview.md)|Log Analytics a
Alerts|[Common alert schema](alerts/alerts-common-schema.md)|Updated alert payload common schema to include custom properties.| Alerts|[Create and manage action groups in the Azure portal](alerts/action-groups.md)|Clarified use of basic auth in webhook.| Application-Insights|[Application Insights logging with .NET](app/ilogger.md)|We've made it easier to understand where to find iLogger telemetry.|
-Application-Insights|[Set up Azure Monitor for your Python application](app/opencensus-python.md)|Updated telemetry type mappings code sample.|
+Application-Insights|[Set up Azure Monitor for your Python application](/previous-versions/azure/azure-monitor/app/opencensus-python)|Updated telemetry type mappings code sample.|
Application-Insights|[Feature extensions for the Application Insights JavaScript SDK (Click Analytics)](app/javascript-feature-extensions.md)|Code samples updated to use connection strings.| Application-Insights|[Connection strings](app/sdk-connection-string.md)|Code samples updated for .NET 6/7.| Application-Insights|[Live Metrics: Monitor and diagnose with 1-second latency](app/live-stream.md)|Code samples updated for .NET 6/7.|
Visualizations|[Azure Workbooks](./visualize/workbooks-overview.md)|New video to
|[Application Insights Overview dashboard](app/overview-dashboard.md)|Added important information clarifying that moving or renaming resources breaks dashboards, with more instructions on how to resolve this scenario.| |[Application Insights override default SDK endpoints](/previous-versions/azure/azure-monitor/app/create-new-resource#override-default-endpoints)|Clarified that endpoint modification isn't recommended and to use connection strings instead.| |[Continuous export of telemetry from Application Insights](/previous-versions/azure/azure-monitor/app/export-telemetry)|Added important information about avoiding duplicates when you save diagnostic logs in a Log Analytics workspace.|
-|[Dependency tracking in Application Insights with OpenCensus Python](app/opencensus-python-dependency.md)|Updated Django sample application and documentation in the Azure Monitor OpenCensus Python samples repository.|
-|[Incoming request tracking in Application Insights with OpenCensus Python](app/opencensus-python-request.md)|Updated Django sample application and documentation in the Azure Monitor OpenCensus Python samples repository.|
-|[Monitor Python applications with Azure Monitor](app/opencensus-python.md)|Updated Django sample application and documentation in the Azure Monitor OpenCensus Python samples repository.|
+|[Dependency tracking in Application Insights with OpenCensus Python](/previous-versions/azure/azure-monitor/app/opencensus-python-dependency)|Updated Django sample application and documentation in the Azure Monitor OpenCensus Python samples repository.|
+|[Incoming request tracking in Application Insights with OpenCensus Python](/previous-versions/azure/azure-monitor/app/opencensus-python-request)|Updated Django sample application and documentation in the Azure Monitor OpenCensus Python samples repository.|
+|[Monitor Python applications with Azure Monitor](/previous-versions/azure/azure-monitor/app/opencensus-python)|Updated Django sample application and documentation in the Azure Monitor OpenCensus Python samples repository.|
|[Configuration options: Azure Monitor Application Insights for Java](app/java-standalone-config.md)|Updated connection string overrides example.| |[Application Insights SDK for ASP.NET Core applications](app/tutorial-asp-net-core.md)|Added a new tutorial with step-by-step instructions on how to use the Application Insights SDK with .NET Core applications.| |[Application Insights SDK support guidance](app/sdk-support-guidance.md)|Updated and clarified the SDK support guidance.|
azure-netapp-files Azure Netapp Files Resource Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-resource-limits.md
na Previously updated : 08/09/2023 Last updated : 09/29/2023 # Resource limits for Azure NetApp Files
For volumes 100 TiB or under, if you've allocated at least 5 TiB of quota for a
For volumes 100 TiB or under, you can increase the `maxfiles` limit up to 531,278,150 if your volume quota is at least 25 TiB. >[!IMPORTANT]
-> Once a volume has exceeded a `maxfiles` limit, you cannot reduce volume size below the quota corresponding to that `maxfiles` limit even if you have reduced the actual used file count. For example, if you have crossed the 63,753,378 `maxfiles` limit, the volume quota cannot be reduced below its corresponding index of 2 TiB.
+> When files or folders are allocated to an Azure NetApp Files volume, they count against the `maxfiles` limit. If a file or folder is deleted, the internal data structures for `maxfiles` allocation remain the same. For instance, if the files used in a volume increase to 63,753,378 and 100,000 files are deleted, the `maxfiles` allocation will remain at 63,753,378.
+> Once a volume has exceeded a `maxfiles` limit, you cannot reduce volume size below the quota corresponding to that `maxfiles` limit even if you have reduced the actual used file count. For example, the `maxfiles` limit for a 2 TiB volume is 63,753,378. If you create more than 63,753,378 files in that volume, the volume quota cannot be reduced below its corresponding index of 2 TiB.
**For [large volumes](azure-netapp-files-understand-storage-hierarchy.md#large-volumes):**
azure-netapp-files Backup Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/backup-introduction.md
na Previously updated : 09/13/2023 Last updated : 09/29/2023
Azure NetApp Files backup is supported for the following regions:
* South India * Southeast Asia * Sweden Central
+* UAE Central
* UAE North * UK South * West Europe
azure-relay Diagnostic Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-relay/diagnostic-logs.md
The new settings take effect in about 10 minutes. The logs are displayed in the
## Schema for hybrid connections events
-Hybrid connections event log JSON strings include the elements listed in the following table:
+Hybrid Connections event log JSON strings include the elements listed in the following table:
| Name | Description | | - | - |
Here's a sample hybrid connections event in JSON format.
} ``` +
+## Schema for VNet/IP Filtering Connection Logs
+Hybrid Connections VNet/IP Filtering Connection Logs include the elements listed in the following table:
+
+| Name | Description | Supported in Azure Diagnostics | Supported in AZMSVnetConnectionEvents (Resource specific table) |
+| --- | --- | --- | --- |
+| `SubscriptionId` | Azure subscription ID | Yes | Yes
+| `NamespaceName` | Namespace name | Yes | Yes
+| `IPAddress` | IP address of a client connecting to the Service Bus service | Yes | Yes
+| `AddressIP` | IP address of client connecting to service bus | Yes | Yes
+| `TimeGenerated [UTC]`|Time of executed operation (in UTC) | Yes | Yes
+| `Action` | Action done by the Service Bus service when evaluating connection requests. Supported actions are **Accept Connection** and **Deny Connection**. | Yes | Yes
+| `Reason` | Provides a reason why the action was done | Yes | Yes
+| `Count` | Number of occurrences for the given action | Yes | Yes
+| `ResourceId` | Azure Resource Manager resource ID. | Yes | Yes
+| `Category` | Log Category | Yes | No
+| `Provider`|Name of Service emitting the logs e.g., ServiceBus | No | Yes
+| `Type` | Type of Logs Emitted | No | Yes
+
+> [!NOTE]
+> Virtual network logs are generated only if the namespace allows access from selected networks or from specific IP addresses (IP filter rules).
+
+## Sample VNet and IP Filtering Logs
+Here's an example of a virtual network log JSON string:
+
+AzureDiagnostics:
+```json
+{
+ "SubscriptionId": "0000000-0000-0000-0000-000000000000",
+ "NamespaceName": "namespace-name",
+ "IPAddress": "1.2.3.4",
+ "Action": "Accept Connection",
+ "Reason": "IP is accepted by IPAddress filter.",
+ "Count": 1,
+ "ResourceId": "/SUBSCRIPTIONS/<AZURE SUBSCRIPTION ID>/RESOURCEGROUPS/<RESOURCE GROUP NAME>/PROVIDERS/MICROSOFT.RELAY/NAMESPACES/<RELAY NAMESPACE NAME>",
+ "Category": "VNetAndIPFilteringLogs"
+}
+```
+Resource specific table entry:
+```json
+{
+ "SubscriptionId": "0000000-0000-0000-0000-000000000000",
+ "NamespaceName": "namespace-name",
+ "AddressIp": "1.2.3.4",
+ "Action": "Accept Connection",
+ "Message": "IP is accepted by IPAddress filter.",
+ "Count": 1,
+ "ResourceId": "/SUBSCRIPTIONS/<AZURE SUBSCRIPTION ID>/RESOURCEGROUPS/<RESOURCE GROUP NAME>/PROVIDERS/MICROSOFT.RELAY/NAMESPACES/<RELAY NAMESPACE NAME>",
+ "Provider" : "RELAY",
+ "Type": "AZMSVNetConnectionEvents"
+}
+```
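
As a rough sketch (not part of the original article), you could query the resource-specific table with the Azure Monitor Query library for Python, assuming the logs are routed to a Log Analytics workspace and `<workspace-id>` is replaced with your workspace GUID:

```python
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

client = LogsQueryClient(DefaultAzureCredential())

# Placeholder workspace ID; the table and column names come from the schema above.
response = client.query_workspace(
    workspace_id="<workspace-id>",
    query="AZMSVNetConnectionEvents | where Action == 'Deny Connection' | take 20",
    timespan=timedelta(days=1),
)

for table in response.tables:
    for row in table.rows:
        print(row)
```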
++ ## Events and operations captured in diagnostic logs | Operation | Description |
azure-video-indexer Object Detection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/object-detection.md
+
+ Title: Azure AI Video Indexer object detection overview
+description: An introduction to object detection in Azure AI Video Indexer.
+ Last updated : 09/26/2023+++++
+# Azure Video Indexer object detection
+
+Azure Video Indexer can detect objects in videos. The insight is part of all standard and advanced presets.
+
+## Prerequisites
+
+Review the [transparency note overview](/legal/azure-video-indexer/transparency-note?context=/azure/azure-video-indexer/context/context).
+
+## JSON keys and definitions
+
+| **Key** | **Definition** |
+| --- | --- |
+| ID | Incremental number of IDs of the detected objects in the media file |
+| Type | Type of objects, for example, Car |
+| ThumbnailID | GUID representing a single detection of the object |
+| displayName | Name to be displayed in the VI portal experience |
+| WikiDataID | A unique identifier in the WikiData structure |
+| Instances | List of all instances that were tracked |
+| Confidence | A score between 0-1 indicating the object detection confidence |
+| adjustedStart | adjusted start time of the video when using the editor |
+| adjustedEnd | adjusted end time of the video when using the editor |
+| start | the time that the object appears in the frame |
+| end | the time that the object no longer appears in the frame |
+
+## JSON response
+
+Object detection is included in the insights that are the result of an [Upload](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Upload-Video) request.
+
+### Detected and tracked objects
+
+Detected and tracked objects appear under `detectedObjects` in the downloaded *insights.json* file. Every time a unique object is detected, it's given an ID. That object is also tracked, meaning that the model watches for the detected object to return to the frame. If it does, another instance is added to the object's instances with different start and end times.
+
+In this example, the first car was detected and given an ID of 1 since it was also the first object detected. Then, a different car was detected and that car was given the ID of 23 since it was the 23rd object detected. Later, the first car appeared again and another instance was added to the JSON. Here is the resulting JSON:
+
+```json
+"detectedObjects": [
+    {
+        "id": 1,
+        "type": "Car",
+        "thumbnailId": "1c0b9fbb-6e05-42e3-96c1-abe2cd48t33",
+        "displayName": "car",
+        "wikiDataId": "Q1420",
+        "instances": [
+            {
+                "confidence": 0.468,
+                "adjustedStart": "0:00:00",
+                "adjustedEnd": "0:00:02.44",
+                "start": "0:00:00",
+                "end": "0:00:02.44"
+            },
+            {
+                "confidence": 0.53,
+                "adjustedStart": "0:03:00",
+                "adjustedEnd": "0:00:03.55",
+                "start": "0:03:00",
+                "end": "0:00:03.55"
+            }
+        ]
+    },
+    {
+        "id": 23,
+        "type": "Car",
+        "thumbnailId": "1c0b9fbb-6e05-42e3-96c1-abe2cd48t34",
+        "displayName": "car",
+        "wikiDataId": "Q1420",
+        "instances": [
+            {
+                "confidence": 0.427,
+                "adjustedStart": "0:00:00",
+                "adjustedEnd": "0:00:14.24",
+                "start": "0:00:00",
+                "end": "0:00:14.24"
+            }
+        ]
+    }
+]
+```
+
+## Try object detection
+
+You can try out object detection with the web portal or with the API.
+
+## [Web Portal](#tab/webportal)
+
+Once you have uploaded a video, you can view the insights. On the insights tab, you can view the list of objects detected and their main instances.
+
+### Insights
+Select the **Insights** tab. The objects are in descending order of the number of appearances in the video.
++
+### Timeline
+Select the **Timeline** tab.
++
+Under the **Timeline** tab, all object detections are displayed according to their time of appearance. When you hover over a specific detection, its confidence percentage is shown.
+
+### Player
+
+The player automatically marks the detected object with a bounding box. The selected object from the insights pane is highlighted in blue, with the object's type and serial number also displayed.
+
+Filter the bounding boxes around objects by selecting the bounding box icon on the player.
++
+Then, select or clear the checkboxes for the detected objects.
++
+Download the insights by selecting **Download** and then **Insights (JSON)**.
+
+## [API](#tab/api)
+
+When you use the [Upload](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Upload-Video) request with the standard or advanced video presets, object detection is included in the indexing.
+
+To examine object detection more thoroughly, use [Get Video Index](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Get-Video-Index).
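
As an illustrative sketch (the placeholder values below are assumptions, not values from this article), the index returned by Get Video Index can be fetched and its detected objects inspected like this:

```python
import requests

# Placeholder values for illustration; see the API reference linked above.
location = "trial"
account_id = "<account-id>"
video_id = "<video-id>"
access_token = "<access-token>"

url = f"https://api.videoindexer.ai/{location}/Accounts/{account_id}/Videos/{video_id}/Index"
index = requests.get(url, params={"accessToken": access_token}).json()

# detectedObjects follows the JSON structure shown earlier in this article.
for obj in index["videos"][0]["insights"].get("detectedObjects", []):
    print(obj["type"], obj["id"], len(obj["instances"]))
```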
+++
+## Supported objects
+
+ :::column:::
+ - airplane
+ - apple
+ - backpack
+ - banana
+ - baseball bat
+ - baseball glove
+ - bed
+ - bicycle
+ - bottle
+ - bowl
+ - broccoli
+ - bus
+ - cake
+ :::column-end:::
+ :::column:::
+ - car
+ - carrot
+ - cell phone
+ - chair
+ - clock
+ - computer mouse
+ - couch
+ - cup
+ - dining table
+ - donut
+ - fire hydrant
+ - fork
+ - frisbee
+ :::column-end:::
+ :::column:::
+ - handbag
+ - hot dog
+ - kite
+ - knife
+ - laptop
+ - microwave
+ - motorcycle
+ - necktie
+ - orange
+ - oven
+ - parking meter
+ - pizza
+ - potted plant
+ :::column-end:::
+ :::column:::
+ - refrigerator
+ - remote
+ - sandwich
+ - scissors
+ - skateboard
+ - skis
+ - snowboard
+ - spoon
+ - sports ball
+ - suitcase
+ - surfboard
+ - teddy bear
+ - television
+ :::column-end:::
+ :::column:::
+ - tennis racket
+ - toaster
+ - toilet
+ - toothbrush
+ - traffic light
+ - train
+ - umbrella
+ - vase
+ - wine glass
+ :::column-end:::
+
+## Limitations
+
+- Up to 20 detections per frame for standard and advanced processing and 35 tracks per class.
+- The video area shouldn't exceed 1920 x 1080 pixels.
+- Object size shouldn't be greater than 90 percent of the frame.
+- A high frame rate (> 30 FPS) may result in slower indexing, with little added value to the quality of the detection and tracking.
+- Other factors that may affect the accuracy of the object detection include low light conditions, camera motion, and occlusion.
backup Backup Azure Delete Vault https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-delete-vault.md
To delete a vault, follow these steps:
Alternately, go to the blades manually by following the steps below. -- <a id="portal-mua">**Step 2:**</a> If Multi-User Authorization (MUA) is enabled, seek necessary permissions from the security administrator before vault deletion. [Learn more](./multi-user-authorization.md#authorize-critical-protected-operations-using-azure-ad-privileged-identity-management)
+- <a id="portal-mua">**Step 2:**</a> If Multi-User Authorization (MUA) is enabled, seek necessary permissions from the security administrator before vault deletion. [Learn more](./multi-user-authorization.md#authorize-critical-protected-operations-using-azure-active-directory-privileged-identity-management)
- <a id="portal-disable-soft-delete">**Step 3:**</a> Disable the soft delete and Security features
If you're sure that all the items backed up in the vault are no longer required
Follow these steps: -- **Step 1:** Seek the necessary permissions from the security administrator to delete the vault if Multi-User Authorization has been enabled against the vault. [Learn more](./multi-user-authorization.md#authorize-critical-protected-operations-using-azure-ad-privileged-identity-management)
+- **Step 1:** Seek the necessary permissions from the security administrator to delete the vault if Multi-User Authorization has been enabled against the vault. [Learn more](./multi-user-authorization.md#authorize-critical-protected-operations-using-azure-active-directory-privileged-identity-management)
- <a id="powershell-install-az-module">**Step 2:**</a> Upgrade to PowerShell 7 version by performing these steps:
backup Backup Azure Enhanced Soft Delete About https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-enhanced-soft-delete-about.md
Title: Overview of enhanced soft delete for Azure Backup (preview)
+ Title: Overview of enhanced soft delete for Azure Backup
description: This article gives an overview of enhanced soft delete for Azure Backup. Previously updated : 07/27/2023 Last updated : 09/11/2023
-# About Enhanced soft delete for Azure Backup (preview)
+# About enhanced soft delete for Azure Backup
[Soft delete](backup-azure-security-feature-cloud.md) for Azure Backup enables you to recover your backup data even after it's deleted. This is useful when:
*Basic soft delete* has been available for Recovery Services vaults for a while; *enhanced soft delete* now provides additional data protection capabilities.
+>[!Note]
+>Once you enable enhanced soft delete by setting the soft delete state to *always-on*, you can't disable it for that vault.
+ ## What's soft delete? [Soft delete](backup-azure-security-feature-cloud.md) primarily delays permanent deletion of backup data and gives you an opportunity to recover data after deletion. This deleted data is retained for a specified duration (*14*-*180* days) called soft delete retention period.
The key benefits of enhanced soft delete are:
- **Soft delete across workloads**: Enhanced soft delete applies to all vaulted datasources alike and is supported for Recovery Services vaults and Backup vaults. Enhanced soft delete also applies to operational backups of disks and VM backup snapshots used for instant restores. However, unlike vaulted backups, these snapshots can be directly accessed and deleted before the soft delete period expires. Enhanced soft delete is currently not supported for operational backup for Blobs and Azure Files. - **Soft delete of recovery points**: This feature allows you to recover data from recovery points that might have been deleted due to making changes in a backup policy or changing the backup policy associated with a backup item. Soft delete of recovery points isn't supported for log recovery points in SQL and SAP HANA workloads. [Learn more](manage-recovery-points.md#impact-of-expired-recovery-points-for-items-in-soft-deleted-state).
-## Supported regions
--- Enhanced soft delete is available in all Azure public regions.-- Soft delete of recovery points is now available in all Azure public regions.- ## Supported scenarios - Enhanced soft delete is supported for Recovery Services vaults and Backup vaults. Also, it's supported for new and existing vaults.-- All existing Recovery Services vaults in the preview regions are upgraded with an option to use enhanced soft delete. - Enhanced soft delete applies to all vaulted workloads alike and is supported for Recovery Services vaults and Backup vaults. However, it currently doesn't support operational tier workloads, such as Azure Files backup, Operational backup for Blobs, Disk and VM snapshot backups. ## States of soft delete settings
You can also use multi-user authorization (MUA) to add an additional layer of pr
## Next steps
-[Configure and manage enhanced soft delete for Azure Backup (preview)](backup-azure-enhanced-soft-delete-configure-manage.md).
+[Configure and manage enhanced soft delete for Azure Backup](backup-azure-enhanced-soft-delete-configure-manage.md).
backup Backup Azure Enhanced Soft Delete Configure Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-enhanced-soft-delete-configure-manage.md
Title: Configure and manage enhanced soft delete for Azure Backup (preview)
+ Title: Configure and manage enhanced soft delete for Azure Backup
description: This article describes about how to configure and manage enhanced soft delete for Azure Backup. Previously updated : 06/12/2023 Last updated : 09/11/2023
-# Configure and manage enhanced soft delete in Azure Backup (preview)
+# Configure and manage enhanced soft delete in Azure Backup
This article describes how to configure and use enhanced soft delete to protect your data and recover backups, if they're deleted.
+>[!Note]
+>Once you enable enhanced soft delete by setting the soft delete state to *always-on*, you can't disable it for that vault.
+ ## Before you start - Enhanced soft delete is supported for Recovery Services vaults and Backup vaults.-- It's supported for new and existing vaults.-- All existing Recovery Services vaults in the [preview regions](backup-azure-enhanced-soft-delete-about.md#supported-scenarios) are upgraded with an option to use enhanced soft delete.-- For hybrid backups (using MARS, DPM, or MABS), enabling always-on soft delete will disallow server deregistration and deletion of backups via the Azure portal. If you don't want to retain the backed-up data, we recommend you not to enable *always-on soft-delete* for the vault or perform *stop protection with delete data* before the server is decommissioned.
+- Enhanced soft delete applies to all vaulted workloads alike in Recovery Services vaults and Backup vaults. However, it currently doesn't support operational tier workloads, such as Azure Files backup, Operational backup for Blobs, and Disk and VM snapshot backups.
+- For hybrid backups (using MARS, DPM, or MABS), enabling always-on soft delete will disallow server deregistration and deletion of backups via the Azure portal. If you don't want to retain the backed-up data, we recommend that you don't enable *always-on soft delete* for the vault, or that you perform *stop protection with delete data* before the server is decommissioned.
+- There's no retention cost for the default soft delete duration of 14 days for vaulted backup; retention beyond 14 days incurs regular backup costs.
## Enable soft delete with always-on state
Here are some points to note:
## Delete recovery points
-Soft delete of recovery points helps you recover any recovery points that are accidentally or maliciously deleted for some operations that could lead to deletion of one or more recovery points. Recovery points don't move to soft-deleted state immediately and have a *24 hour SLA* (same as before). The example here shows recovery points that were deleted as part of backup policy modifications.
-
-[Soft delete of recovery points](backup-azure-enhanced-soft-delete-about.md#soft-delete-of-recovery-points), a part of enhanced soft delete is currently available in selected Azure regions. [Learn more](backup-azure-enhanced-soft-delete-about.md#supported-regions) on the region availability.
+[Soft delete of recovery points](backup-azure-enhanced-soft-delete-about.md#soft-delete-of-recovery-points), part of enhanced soft delete, helps you recover recovery points that are accidentally or maliciously deleted by operations that remove one or more recovery points. Recovery points don't move to the soft-deleted state immediately and have a *24-hour SLA* (same as before). The example here shows recovery points that were deleted as part of backup policy modifications.
Follow these steps:
Follow these steps:
The impacted recovery points are labeled as *being soft deleted* in the **Recovery type** column and will be retained as per the soft delete retention of the vault.
- :::image type="content" source="./media/backup-azure-enhanced-soft-delete/select-restore-point-for-soft-delete.png" alt-text="Screenshot shows to filter recovery points for soft delete.":::
+ :::image type="content" source="./media/backup-azure-enhanced-soft-delete/select-restore-point-for-soft-delete.png" alt-text="Screenshot shows how to filter recovery points for soft delete.":::
## Undelete recovery points
-You can *undelete* recovery points that are in soft deleted state so that they can last till their expiry by modifying the policy again to increase the retention of backups.
+You can *undelete* recovery points that are in the soft-deleted state so that they're retained until their expiry. To do so, modify the policy again to increase the retention of backups.
Follow these steps:
Follow these steps:
## Next steps
-[About Enhanced soft delete for Azure Backup (preview)](backup-azure-enhanced-soft-delete-about.md).
+[About enhanced soft delete for Azure Backup](backup-azure-enhanced-soft-delete-about.md).
backup Backup Azure Enhanced Soft Delete Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-enhanced-soft-delete-tutorial.md
+
+ Title: Tutorial - Recover soft deleted data and recovery points using enhanced soft delete in Azure Backup
+description: Learn how to enable enhanced soft delete and recover your data and backups if they're deleted.
+ Last updated : 09/11/2023+++++
+# Tutorial: Recover soft deleted data and recovery points using enhanced soft delete in Azure Backup
+
+This tutorial describes how to enable enhanced soft delete, and how to recover your data and backups if they're deleted.
+
+[Enhanced soft delete](backup-azure-enhanced-soft-delete-about.md) improves the [soft delete](backup-azure-security-feature-cloud.md) capability in Azure Backup, which enables you to recover your backup data in case of accidental or malicious deletion. With enhanced soft delete, you can make soft delete always-on, which protects it from being disabled by malicious actors and gives your backups better protection against various threats. The feature also lets you set a customizable retention period for which soft-deleted data is retained.
+
+>[!Note]
+>Once you enable the *always-on* state for soft delete, you can't disable it for that vault.
+
+## Before you start
+
+- Enhanced soft delete is supported for Recovery Services vaults and Backup vaults.
+- Enhanced soft delete applies to all vaulted workloads alike in Recovery Services vaults and Backup vaults. However, it currently doesn't support operational tier workloads, such as Azure Files backup, Operational backup for Blobs, and Disk and VM snapshot backups.
+- For hybrid backups (using MARS, DPM, or MABS), enabling always-on soft delete will disallow server deregistration and deletion of backups via the Azure portal. If you don't want to retain the backed-up data, we recommend that you don't enable *always-on soft delete* for the vault, or that you perform *stop protection with delete data* before the server is decommissioned.
+- There's no retention cost for the default soft delete duration of 14 days for vaulted backup; retention beyond 14 days incurs regular backup costs.
+
+## Enable soft delete with always-on state
+
+Soft delete is enabled by default for all new vaults you create. To make enabled settings irreversible, select **Enable Always-on Soft Delete**.
+
+**Choose a vault**
+
+# [Recovery Services vault](#tab/recovery-services-vault)
+
+Follow these steps:
+
+1. Go to **Recovery Services vault** > **Properties**.
+
+1. Under **Soft Delete**, select **Update** to modify the soft delete setting.
+
+ :::image type="content" source="./media/backup-azure-enhanced-soft-delete/open-soft-delete-properties-blade-inline.png" alt-text="Screenshot showing you how to open Soft Delete blade." lightbox="./media/backup-azure-enhanced-soft-delete/open-soft-delete-properties-blade-expanded.png":::
+
+ The soft delete settings for cloud and hybrid workloads are already enabled, unless you've explicitly disabled them earlier.
+
+1. If soft delete settings are disabled for any workload type in the **Soft Delete** blade, select the respective checkboxes to enable them.
+
+ >[!Note]
+   >Enabling soft delete for hybrid workloads also enables other security settings, such as multifactor authentication and alert notifications for backup of workloads running on on-premises servers.
+
+1. Choose the number of days between *14* and *180* to specify the soft delete retention period.
+
+ >[!Note]
+ >- There is no cost for soft delete for *14* days. However, deleted instances in soft delete state are charged if the soft delete retention period is *>14* days. Learn about [pricing details](backup-azure-enhanced-soft-delete-about.md#pricing).
+ >- Once configured, the soft delete retention period applies to all soft deleted instances of cloud and hybrid workloads in the vault.
+
+1. Select the **Enable Always-on Soft delete** checkbox to enable soft delete and make it irreversible.
+
+   :::image type="content" source="./media/backup-azure-enhanced-soft-delete/enable-always-on-soft-delete.png" alt-text="Screenshot showing you how to enable always-on state of soft delete.":::
+
+ >[!Note]
+ >If you opt for *Enable Always-on Soft Delete*, select the *confirmation checkbox* to proceed. Once enabled, you can't disable the settings for this vault.
+
+1. Select **Update** to save the changes.
+
+# [Backup vault](#tab/backup-vault)
+
+Follow these steps:
+
+1. Go to **Backup vault** > **Properties**.
+
+1. Under **Soft Delete**, select **Update** to modify the soft delete setting.
+
+ :::image type="content" source="./media/backup-azure-enhanced-soft-delete/open-soft-delete-properties.png" alt-text="Screenshot showing you how to open soft delete blade for Backup vault.":::
+
+ Soft delete is enabled by default with the checkboxes selected.
+
+1. If you've explicitly disabled soft delete for any workload type in the **Soft Delete** blade earlier, select the checkboxes to enable them.
+
+1. Choose the number of days between *14* and *180* to specify the soft delete retention period.
+
+ >[!Note]
+ >There is no cost for enabling soft delete for *14* days. However, you're charged for the soft delete instances if soft delete retention period is *>14* days. Learn about the [pricing details](backup-azure-enhanced-soft-delete-about.md#pricing).
+
+1. Select the **Enable Always-on Soft Delete** checkbox to enable soft delete always-on and make it irreversible.
+
+ :::image type="content" source="./media/backup-azure-enhanced-soft-delete/enable-always-on-soft-delete-backup-vault.png" alt-text="Screenshot showing you how to enable always-on state for Backup vault.":::
+
+ >[!Note]
+ >If you opt for *Enable Always-on Soft Delete*, select the *confirmation checkbox* to proceed. Once enabled, you can't disable the settings for this vault.
+
+1. Select **Update** to save the changes.
+++
+## Delete a backup item
+
+You can delete backup items/instances even if the soft delete settings are enabled. However, if soft delete is enabled, the deleted items aren't permanently deleted immediately and stay in the soft-deleted state per the [configured retention period](#enable-soft-delete-with-always-on-state). Soft delete delays permanent deletion of backup data by retaining deleted data for *14*-*180* days.
+
+**Choose a vault**
+
+# [Recovery Services vault](#tab/recovery-services-vault)
+
+Follow these steps:
+
+1. Go to the *backup item* that you want to delete.
+1. Select **Stop backup**.
+1. On the **Stop Backup** page, select **Delete Backup Data** from the drop-down list to delete all backups for the instance.
+1. Provide the applicable information, and then select **Stop backup** to delete all backups for the instance.
+
+ Once the *delete* operation completes, the backup item is moved to soft deleted state. In **Backup items**, the soft deleted item is marked in *Red*, and the last backup status shows that backups are disabled for the item.
+
+ :::image type="content" source="./media/backup-azure-enhanced-soft-delete/soft-deleted-backup-items-marked-red-inline.png" alt-text="Screenshot showing the soft deleted backup items marked red." lightbox="./media/backup-azure-enhanced-soft-delete/soft-deleted-backup-items-marked-red-expanded.png":::
+
+ In the item details, the soft deleted item shows no recovery point. Also, a notification appears to mention the state of the item, and the number of days left before the item is permanently deleted. You can select **Undelete** to recover the soft deleted items.
+
+ :::image type="content" source="./media/backup-azure-enhanced-soft-delete/soft-deleted-item-shows-no-recovery-point-inline.png" alt-text="Screenshot showing the soft deleted backup item that shows no recovery point." lightbox="./media/backup-azure-enhanced-soft-delete/soft-deleted-item-shows-no-recovery-point-expanded.png":::
+
+>[!Note]
+>When the item is in the soft-deleted state, recovery points aren't cleaned up on expiry per the backup policy.
+
+# [Backup vault](#tab/backup-vault)
+
+Follow these steps:
+
+1. In the **Backup center**, go to the *backup instance* that you want to delete.
+
+1. Select **Stop backup**.
+
+ :::image type="content" source="./media/backup-azure-enhanced-soft-delete/stop-backup-for-backup-vault-items-inline.png" alt-text="Screenshot showing how to initiate the stop backup process for backup items in Backup vault." lightbox="./media/backup-azure-enhanced-soft-delete/stop-backup-for-backup-vault-items-expanded.png":::
+
+ You can also select **Delete** in the instance view to delete backups.
+
+1. On the **Stop Backup** page, select **Delete Backup Data** from the drop-down list to delete all backups for the instance.
+
+1. Provide the applicable information, and then select **Stop backup** to initiate the deletion of the backup instance.
+
+ :::image type="content" source="./media/backup-azure-enhanced-soft-delete/start-stop-backup-process.png" alt-text="Screenshot showing how to stop the backup process.":::
+
+ Once deletion completes, the instance appears as *Soft deleted*.
+
+ :::image type="content" source="./media/backup-azure-enhanced-soft-delete/deleted-backup-items-marked-soft-deleted-inline.png" alt-text="Screenshot showing the deleted backup items marked as Soft Deleted." lightbox="./media/backup-azure-enhanced-soft-delete/deleted-backup-items-marked-soft-deleted-expanded.png":::
+++
+## Recover a soft-deleted backup item
+
+If a backup item/instance is soft deleted, you can recover it before it's permanently deleted.
+
+**Choose a vault**
+
+# [Recovery Services vault](#tab/recovery-services-vault)
+
+Follow these steps:
+
+1. Go to the *backup item* that you want to retrieve from the *soft deleted* state.
+
+ You can also use the **Backup center** to go to the item by applying the filter **Protection status == Soft deleted** in the *Backup instances*.
+
+1. Select **Undelete** corresponding to the *soft deleted item*.
+
+ :::image type="content" source="./media/backup-azure-enhanced-soft-delete/start-recover-backup-items-inline.png" alt-text="Screenshot showing how to start recovering backup items from soft delete state." lightbox="./media/backup-azure-enhanced-soft-delete/start-recover-backup-items-expanded.png":::
+
+1. In the **Undelete** *backup item* blade, select **Undelete** to recover the deleted item.
+
+ All recovery points now appear and the backup item changes to *Stop protection with retain data* state. However, backups don't resume automatically. To continue taking backups for this item, select **Resume backup**.
+
+# [Backup vault](#tab/backup-vault)
+
+Follow these steps:
+
+1. Go to the *deleted backup instance* that you want to recover.
+
+ You can also use the **Backup center** to go to the *instance* by applying the filter **Protection status == Soft deleted** in the *Backup instances*.
+
+1. Select **Undelete** corresponding to the *soft deleted instance*.
+
+ :::image type="content" source="./media/backup-azure-enhanced-soft-delete/start-recover-deleted-backup-vault-items-inline.png" alt-text="Screenshot showing how to start recovering deleted backup vault items from soft delete state." lightbox="./media/backup-azure-enhanced-soft-delete/start-recover-deleted-backup-vault-items-expanded.png":::
+
+1. In the **Undelete** *backup instance* blade, select **Undelete** to recover the item.
+
+ All recovery points appear and the backup item changes to *Stop protection with retain data* state. However, backups don't resume automatically. To continue taking backups for this instance, select **Resume backup**.
+
+>[!Note]
+>Undeleting a soft deleted item reinstates the backup item into Stop backup with retain data state and doesn't automatically restart scheduled backups. You need to explicitly [resume backups](backup-azure-manage-vms.md#resume-protection-of-a-vm) if you want to continue taking new backups. Resuming backup will also clean up expired recovery points, if any.
++++
+>- MUA for soft delete is currently supported for Recovery Services vaults only.
+
+## Next steps
+
+- Learn more about [enhanced soft delete for Azure Backup](backup-azure-enhanced-soft-delete-about.md).
+- Learn more about [soft delete of recovery points](backup-azure-enhanced-soft-delete-about.md#soft-delete-of-recovery-points).
backup Enable Multi User Authorization Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/enable-multi-user-authorization-quickstart.md
Title: Quickstart - Multi-user authorization using Resource Guard description: In this quickstart, learn how to use Multi-user authorization to protect against unauthorized operation.- Previously updated : 05/05/2022+ Last updated : 09/25/2023
-# Quickstart: Enable protection using Multi-user authorization on Recovery Services vault in Azure Backup
-
-Multi-user authorization (MUA) for Azure Backup allows you to add an additional layer of protection to critical operations on your Recovery Services vaults. For MUA, Azure Backup uses another Azure resource called the Resource Guard to ensure critical operations are performed only with applicable authorization. Learn about [MUA concepts](multi-user-authorization-concept.md).
+# Quickstart: Enable protection using Multi-user authorization in Azure Backup
This quickstart describes how to enable Multi-user authorization (MUA) for Azure Backup.
+Multi-user authorization (MUA) for Azure Backup allows you to add an additional layer of protection to critical operations on your Recovery Services vaults and Backup vaults. For MUA, Azure Backup uses another Azure resource called the Resource Guard to ensure critical operations are performed only with applicable authorization.
+
+>[!Note]
+>MUA is now generally available for both Recovery Services vaults and Backup vaults.
+
+Learn about [MUA concepts](multi-user-authorization-concept.md).
+ ## Prerequisites Before you start:
+**Choose a vault**
+
+# [Recovery Services vault](#tab/recovery-services-vault)
- Ensure the Resource Guard and the Recovery Services vault are in the same Azure region. - Ensure the Backup admin does **not** have **Contributor** permissions on the Resource Guard. You can choose to have the Resource Guard in another subscription of the same directory or in another directory to ensure maximum isolation. - Ensure that your subscriptions containing the Recovery Services vault as well as the Resource Guard (in different subscriptions or tenants) are registered to use the **Microsoft.RecoveryServices** provider. For more details, see [Azure resource providers and types](../azure-resource-manager/management/resource-providers-and-types.md#register-resource-provider-1). - Ensure that you [create a Resource Guard](multi-user-authorization.md#create-a-resource-guard) in a different subscription/tenant than that of the vault, located in the same region. - Ensure that you [assign permissions to the Backup admin on the Resource Guard to enable MUA](multi-user-authorization.md#assign-permissions-to-the-backup-admin-on-the-resource-guard-to-enable-mua).
+# [Backup vault](#tab/backup-vault)
+
+- Ensure the Resource Guard and the Backup vault are in the same Azure region.
+- Ensure the Backup admin does **not** have **Contributor** permissions on the Resource Guard. You can choose to have the Resource Guard in another subscription of the same directory or in another directory to ensure maximum isolation.
+- Ensure that the subscriptions containing the Backup vault as well as the Resource Guard (in different subscriptions or tenants) are registered to use the **Microsoft.DataProtection** provider. For more information, see [Azure resource providers and types](../azure-resource-manager/management/resource-providers-and-types.md#register-resource-provider-1).
+++ ## Enable MUA
-The Backup admin now has the Reader role on the Resource Guard and can easily enable multi-user authorization on vaults managed by them.
+Once the Backup admin has the Reader role on the Resource Guard, they can enable multi-user authorization on the vaults they manage by following these steps:
+
+**Choose a vault**
-Follow these steps:
+# [Recovery Services vault](#tab/recovery-services-vault)
-1. Go to the Recovery Services vault.
-1. Go to **Properties** on the left navigation panel, then to **Multi-User Authorization** and click **Update**.
-1. The option to enable MUA appears. Choose a Resource Guard using one of the following ways:
+1. Go to the Recovery Services vault for which you want to configure MUA.
- 1. You can either specify the URI of the Resource Guard, make sure you specify the URI of a Resource Guard you have **Reader** access to and that is the same regions as the vault. You can find the URI (Resource Guard ID) of the Resource Guard in its **Overview** screen:
+1. On the left pane, select **Properties**.
- 1. Or, you can select the Resource Guard from the list of Resource Guards you have **Reader** access to, and those available in the region.
+1. Go to **Multi-User Authorization** and select **Update**.
+
+1. To enable MUA and choose a Resource Guard, perform one of the following actions:
+
+    - You can either specify the URI of the Resource Guard. Make sure you specify the URI of a Resource Guard that you have **Reader** access to and that's in the same region as the vault. You can find the URI (Resource Guard ID) of the Resource Guard in its **Overview** screen:
+
+    - Or, you can select the Resource Guard from the list of Resource Guards that you have **Reader** access to and that are available in the region.
1. Click **Select Resource Guard**
- 1. Click on the dropdown and select the directory the Resource Guard is in.
- 1. Click **Authenticate** to validate your identity and access.
+ 1. Select the dropdown list and choose the directory the Resource Guard is in.
+ 1. Select **Authenticate** to validate your identity and access.
1. After authentication, choose the **Resource Guard** from the list displayed.
-1. Click **Save** once done to enable MUA.
+1. Select **Save** to enable MUA.
+
+# [Backup vault](#tab/backup-vault)
+
+1. Go to the Backup vault for which you want to configure MUA.
+1. On the left panel, select **Properties**.
+1. Go to **Multi-User Authorization** and select **Update**.
+
+1. To enable MUA and choose a Resource Guard, perform one of the following actions:
+
+    - You can either specify the URI of the Resource Guard. Ensure that you specify the URI of a Resource Guard that you have **Reader** access to and that it's in the same region as the vault. You can find the URI (Resource Guard ID) of the Resource Guard on its **Overview** page.
+
+    - Or, you can select the Resource Guard from the list of Resource Guards that you have **Reader** access to and that are available in the region.
+
+ 1. Click **Select Resource Guard**.
+    1. Select the drop-down list and choose the directory the Resource Guard is in.
+ 1. Select **Authenticate** to validate your identity and access.
+ 1. After authentication, choose the **Resource Guard** from the list displayed.
+
+1. Select **Save** to enable MUA.
++ ## Next steps - [Protected operations using MUA](multi-user-authorization.md?pivots=vaults-recovery-services-vault#protected-operations-using-mua)-- [Authorize critical (protected) operations using Azure AD Privileged Identity Management](multi-user-authorization.md#authorize-critical-protected-operations-using-azure-ad-privileged-identity-management)
+- [Authorize critical (protected) operations using Azure Active Directory Privileged Identity Management](multi-user-authorization.md#authorize-critical-protected-operations-using-azure-active-directory-privileged-identity-management)
- [Performing a protected operation after approval](multi-user-authorization.md#performing-a-protected-operation-after-approval)-- [Disable MUA on a Recovery Services vault](multi-user-authorization.md#disable-mua-on-a-recovery-services-vault)
+- Disable MUA on a [Recovery Services vault](multi-user-authorization.md?tabs=azure-portal&pivots=vaults-recovery-services-vault#disable-mua-on-a-recovery-services-vault) or a [Backup vault](multi-user-authorization.md?tabs=azure-portal&pivots=vaults-backup-vault#disable-mua-on-a-backup-vault).
backup Multi User Authorization Concept https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/multi-user-authorization-concept.md
Title: Multi-user authorization using Resource Guard description: An overview of Multi-user authorization using Resource Guard. Previously updated : 09/15/2022 Last updated : 09/25/2023
-# Multi-user authorization using Resource Guard
+# About Multi-user authorization using Resource Guard
Multi-user authorization (MUA) for Azure Backup allows you to add an additional layer of protection to critical operations on your Recovery Services vaults and Backup vaults. For MUA, Azure Backup uses another Azure resource called the Resource Guard to ensure critical operations are performed only with applicable authorization. >[!Note]
->Multi-user authorization using Resource Guard for Backup vault is in preview.
+>Multi-user authorization using Resource Guard for Backup vault is now generally available.
## How does MUA for Backup work?
Modify protection (reduced retention) | Optional
Stop protection with delete data | Optional Change MARS security PIN | Optional
-# [Backup vault (preview)](#tab/backup-vault)
+# [Backup vault](#tab/backup-vault)
**Operation** | **Mandatory/ Optional** |
The following table lists the scenarios for creating your Resource Guard and vau
**Usage scenario** | **Protection due to MUA** | **Ease of implementation** | **Notes** | | | |
-Vault and Resource Guard are **in the same subscription.** </br> The Backup admin does't have access to the Resource Guard. | Least isolation between the Backup admin and the Security admin. | Relatively easy to implement since only one subscription is required. | Resource level permissions/ roles need to be ensured are correctly assigned.
+Vault and Resource Guard are **in the same subscription.** </br> The Backup admin doesn't have access to the Resource Guard. | Least isolation between the Backup admin and the Security admin. | Relatively easy to implement since only one subscription is required. | Resource level permissions/ roles need to be ensured are correctly assigned.
Vault and Resource Guard are **in different subscriptions but the same tenant.** </br> The Backup admin doesn't have access to the Resource Guard or the corresponding subscription. | Medium isolation between the Backup admin and the Security admin. | Relatively medium ease of implementation since two subscriptions (but a single tenant) are required. | Ensure that permissions/ roles are correctly assigned for the resource or the subscription. Vault and Resource Guard are **in different tenants.** </br> The Backup admin doesn't have access to the Resource Guard, the corresponding subscription, or the corresponding tenant.| Maximum isolation between the Backup admin and the Security admin, hence, maximum security. | Relatively difficult to test since it requires two tenants or directories. | Ensure that permissions/ roles are correctly assigned for the resource, the subscription or the directory.
backup Multi User Authorization Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/multi-user-authorization-tutorial.md
Title: Tutorial - Enable Multi-user authorization using Resource Guard
-description: In this tutorial, you'll learn about how create a resource guard and enable Multi-user authorization on Recovery Services vault for Azure Backup.
+description: In this tutorial, you'll learn how to create a Resource Guard and enable Multi-user authorization on a Recovery Services vault and Backup vault for Azure Backup.
Previously updated : 05/05/2022 Last updated : 09/25/2023 # Tutorial: Create a Resource Guard and enable Multi-user authorization in Azure Backup
-This tutorial describes how to create a Resource Guard and enable Multi-user authorization on a Recovery Services vault. This adds an additional layer of protection to critical operations on your Recovery Services vaults.
-
-This tutorial includes the following:
-
->[!div class="checklist"]
->- Prerequisies
->- Create a Resource Guard
->- Enable MUA on a Recovery Services vault
+This tutorial describes how to create a Resource Guard and enable Multi-user authorization (MUA) on a Recovery Services vault and Backup vault. This adds an additional layer of protection to critical operations on your vaults.
>[!NOTE]
-> Multi-user authorization for Azure Backup is available in all public Azure regions.
+>- Multi-user authorization is now generally available for both Recovery Services vaults and Backup vaults.
+>- Multi-user authorization for Azure Backup is available in all public Azure regions.
+
+Learn about [MUA concepts](multi-user-authorization-concept.md).
## Prerequisites

Before you start:
+**Choose a vault**
+
+# [Recovery Services vault](#tab/recovery-services-vault)
+ - Ensure the Resource Guard and the Recovery Services vault are in the same Azure region.
+ - Ensure the Backup admin does **not** have **Contributor** permissions on the Resource Guard. You can choose to have the Resource Guard in another subscription of the same directory or in another directory to ensure maximum isolation.
+ - Ensure that the subscriptions containing the Recovery Services vault and the Resource Guard (in different subscriptions or tenants) are registered to use the **Microsoft.RecoveryServices** provider, as shown in the sketch after this list. For more details, see [Azure resource providers and types](../azure-resource-manager/management/resource-providers-and-types.md#register-resource-provider-1).
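If you prefer to script the provider registration, the following Azure PowerShell sketch registers and verifies the **Microsoft.RecoveryServices** provider on the current subscription. It assumes the Az module is installed and you're already signed in with `Connect-AzAccount`; the subscription ID is a placeholder.

```powershell
# Select the subscription that contains the Recovery Services vault (or the Resource Guard).
Set-AzContext -SubscriptionId "<subscription-id>"

# Register the Microsoft.RecoveryServices resource provider.
Register-AzResourceProvider -ProviderNamespace Microsoft.RecoveryServices

# Verify that the provider shows as Registered before you continue.
Get-AzResourceProvider -ProviderNamespace Microsoft.RecoveryServices |
    Select-Object ProviderNamespace, RegistrationState
```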
+# [Backup vault](#tab/backup-vault)
+
+- Ensure the Resource Guard and the Backup vault are in the same Azure region.
+- Ensure the Backup admin does **not** have **Contributor** permissions on the Resource Guard. You can choose to have the Resource Guard in another subscription of the same directory or in another directory to ensure maximum isolation.
+- Ensure that the subscriptions containing the Backup vault and the Resource Guard (in different subscriptions or tenants) are registered to use the **Microsoft.DataProtection** provider (see the sketch after this list). For more information, see [Azure resource providers and types](../azure-resource-manager/management/resource-providers-and-types.md#register-resource-provider-1).
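For the Backup vault prerequisites, a hedged PowerShell sketch follows: it registers the **Microsoft.DataProtection** provider and lists any **Contributor** assignments on an existing Resource Guard so you can confirm the Backup admin isn't among them. The Resource Guard resource ID is a placeholder.

```powershell
# Register the Microsoft.DataProtection resource provider on the subscription.
Register-AzResourceProvider -ProviderNamespace Microsoft.DataProtection

# Placeholder: resource ID of an existing Resource Guard.
$resourceGuardId = "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.DataProtection/resourceGuards/<guard-name>"

# List Contributor assignments on the Resource Guard; the Backup admin should not appear here.
Get-AzRoleAssignment -Scope $resourceGuardId |
    Where-Object { $_.RoleDefinitionName -eq 'Contributor' } |
    Select-Object DisplayName, SignInName, RoleDefinitionName
```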
+++ Learn about various [MUA usage scenarios](multi-user-authorization-concept.md#usage-scenarios).

## Create a Resource Guard
+The **Security admin** creates the Resource Guard. We recommend that you create it in a **different subscription** or a **different tenant** from the vault. However, it should be in the **same region** as the vault.
+ >[!Note]
->The **Security admin** creates the Resource Guard. We recommend that you create it in a **different subscription** or a **different tenant** as the vault. However, it should be in the **same region** as the vault. The Backup admin must **NOT** have *contributor* access on the Resource Guard or the subscription that contains it.
+> The Backup admin must **NOT** have *contributor* access on the Resource Guard or the subscription that contains it.
>
->Create the Resource Guard in a tenant different from the vault tenant.
-Follow these steps:
+**Choose a vault**
+
+# [Recovery Services vault](#tab/recovery-services-vault)
+
+To create the Resource Guard in a tenant different from the vault tenant as a Security admin, follow these steps:
1. In the Azure portal, go to the directory under which you wish to create the Resource Guard.
1. Search for **Resource Guards** in the search bar and select the corresponding item from the drop-down.
- - Click **Create** to start creating a Resource Guard.
- - In the create blade, fill in the required details for this Resource Guard.
+ 1. Select **Create** to start creating a Resource Guard.
+ 1. In the **Create** blade, fill in the required details for this Resource Guard.
   - Make sure the Resource Guard is in the same Azure region as the Recovery Services vault.
   - Also, it's helpful to add a description of how to get or request access to perform actions on associated vaults when needed. This description also appears in the associated vaults to guide the Backup admin on getting the required permissions. You can edit the description later if needed, but having a well-defined description at all times is encouraged.
Follow these steps:
   You can also [select the operations to be protected after creating the resource guard](#select-operations-to-protect-using-resource-guard).
1. Optionally, add any tags to the Resource Guard as per the requirements.
-1. Click **Review + Create**.
-1. Follow notifications for status and successful creation of the Resource Guard.
+1. Select **Review + Create** and then follow notifications for status and successful creation of the Resource Guard.
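If you'd rather create the Resource Guard with Azure PowerShell, the following is a minimal sketch assuming the `Az.DataProtection` module is installed. Cmdlet and parameter names can vary between module versions, and the tenant, resource group, name, and region values are placeholders.

```powershell
# Sign in to the security tenant that will host the Resource Guard (tenant ID is a placeholder).
Connect-AzAccount -TenantId "<security-tenant-id>"

# Create the Resource Guard in the same region as the vault it will protect.
New-AzDataProtectionResourceGuard `
    -ResourceGroupName "rg-resource-guards" `
    -Name "rg-guard-demo" `
    -Location "eastus"
```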
+
+# [Backup vault](#tab/backup-vault)
+
+To create the Resource Guard in a tenant different from the vault tenant as a Security admin, follow these steps:
+
+1. In the Azure portal, go to the directory under which you want to create the Resource Guard.
+
+1. Search for **Resource Guards** in the search bar and select the corresponding item from the dropdown list.
+
+ 1. Select **Create** to create a Resource Guard.
+ 1. In the **Create** blade, fill in the required details for this Resource Guard.
+    - Ensure that the Resource Guard is in the same Azure region as the Backup vault.
+ - Add a description on how to request access to perform actions on associated vaults when needed. This description appears in the associated vaults to guide the Backup admin on how to get the required permissions.
+
+1. On the **Protected operations** tab, select the operations you need to protect using this resource guard under the **Backup vault** tab.
+
+ Currently, the **Protected operations** tab includes only the *Delete backup instance* option to disable.
+
+ You can also [select the operations for protection after creating the resource guard](?pivots=vaults-recovery-services-vault#select-operations-to-protect-using-resource-guard).
+
+1. Optionally, add any tags to the Resource Guard as per the requirements.
+1. Select **Review + Create** and then follow the notifications to monitor the status and the successful creation of the Resource Guard.
++

### Select operations to protect using Resource Guard
->[!Note]
->Choose the operations you want to protect using the Resource Guard out of all supported critical operations. By default, all supported critical operations are enabled. However, you can exempt certain operations from falling under the purview of MUA using Resource Guard. The security admin can perform the following steps:
+After vault creation, the Security admin can also choose which of the supported critical operations to protect by using the Resource Guard. By default, all supported critical operations are enabled. However, the Security admin can exempt certain operations from falling under the purview of MUA using Resource Guard.
+
+**Choose a vault**
-Follow these steps:
+# [Recovery Services vault](#tab/recovery-services-vault)
-1. In the Resource Guard created above, go to **Properties**.
+To select the operations for protection, follow these steps:
+
+1. In the Resource Guard created above, go to **Properties** > **Recovery Services vault** tab.
1. Select **Disable** for operations that you wish to exclude from being authorized using the Resource Guard.

   >[!Note]
Follow these steps:
1. Optionally, you can also update the description for the Resource Guard using this blade.
1. Select **Save**.
+# [Backup vault](#tab/backup-vault)
+
+To select the operations for protection, follow these steps:
+
+1. In the Resource Guard that you've created, go to **Properties** > **Backup vault** tab.
+1. Select **Disable** for the operations that you want to exclude from being authorized.
+
+ You can't disable the **Remove MUA protection** and **Disable soft delete** operations.
+
+1. Optionally, in the **Backup vaults** tab, update the description for the Resource Guard.
+1. Select **Save**.
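To review which critical operations a Resource Guard currently covers without opening the portal, you can inspect the resource properties. The following is a hedged sketch; the resource group and guard names are placeholders, and the exact property layout may differ by API version.

```powershell
# Read the Resource Guard resource, including its properties payload.
$guard = Get-AzResource `
    -ResourceGroupName "rg-resource-guards" `
    -ResourceType "Microsoft.DataProtection/resourceGuards" `
    -Name "rg-guard-demo" `
    -ExpandProperties

# Dump the properties as JSON to see the protected (critical) operations list.
$guard.Properties | ConvertTo-Json -Depth 10
```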
+++

## Assign permissions to the Backup admin on the Resource Guard to enable MUA
->[!Note]
->To enable MUA on a vault, the admin of the vault must have **Reader** role on the Resource Guard or subscription containing the Resource Guard. To assign the **Reader** role on the Resource Guard:
+The Backup admin must have the **Reader** role on the Resource Guard or the subscription that contains the Resource Guard to enable MUA on a vault. The Security admin needs to assign this role to the Backup admin.
+
+**Choose a vault**
-Follow these steps:
+# [Recovery Services vault](#tab/recovery-services-vault)
+
+To assign the **Reader** role on the Resource Guard, follow these steps:
1. In the Resource Guard created above, go to the Access Control (IAM) blade, and then go to **Add role assignment**.
-1. Select **Reader** from the list of built-in roles and click **Next** on the bottom of the screen.
+1. Select **Reader** from the list of built-in roles and select **Next**.
1. Click **Select members** and add the Backup admin's email ID to assign them the **Reader** role. Because the Backup admin is in another tenant in this case, they're added as a guest to the tenant containing the Resource Guard.
1. Click **Select** and then proceed to **Review + assign** to complete the role assignment.
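The same Reader assignment can be scripted. Here's a minimal sketch using `New-AzRoleAssignment`; the sign-in name and the Resource Guard resource ID are placeholders.

```powershell
# Placeholders: the Backup admin's sign-in name and the Resource Guard's resource ID.
$backupAdmin     = "backupadmin@contoso.com"
$resourceGuardId = "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.DataProtection/resourceGuards/<guard-name>"

# Grant the Backup admin Reader access scoped to the Resource Guard only.
New-AzRoleAssignment -SignInName $backupAdmin `
    -RoleDefinitionName "Reader" `
    -Scope $resourceGuardId
```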
-## Enable MUA on a Recovery Services vault
+# [Backup vault](#tab/backup-vault)
->[!Note]
->The Backup admin now has the Reader role on the Resource Guard and can easily enable multi-user authorization on vaults managed by them and performs the following steps.
+To assign the **Reader** role on the Resource Guard, follow these steps:
+
+1. In the Resource Guard created above, go to the **Access Control (IAM)** blade, and then go to **Add role assignment**.
+
+
+1. Select **Reader** from the list of built-in roles and select **Next**.
+
+1. Click **Select members** and add the Backup admin's email ID to assign the **Reader** role.
+
+ As the Backup admins are in another tenant, they'll be added as guests to the tenant that contains the Resource Guard.
+
+1. Click **Select** > **Review + assign** to complete the role assignment.
++++
+## Enable MUA on a vault
+
+Once the Backup admin has the Reader role on the Resource Guard, they can enable multi-user authorization on the vaults they manage by following these steps:
+
+**Choose a vault**
+
+# [Recovery Services vault](#tab/recovery-services-vault)
1. Go to the Recovery Services vault.
-1. Go to **Properties** on the left navigation panel, then to **Multi-User Authorization** and click **Update**.
+1. Go to **Properties** > **Multi-User Authorization**, and then select **Update**.
1. Now you're presented with the option to enable MUA and choose a Resource Guard in one of the following ways:
   1. You can either specify the URI of the Resource Guard. Make sure you specify the URI of a Resource Guard you have **Reader** access to and that is in the same region as the vault. You can find the URI (Resource Guard ID) of the Resource Guard in its **Overview** screen:
Follow these steps:
   1. Or you can select the Resource Guard from the list of Resource Guards you have **Reader** access to, and those available in the region.
      1. Click **Select Resource Guard**.
- 1. Click on the dropdown and select the directory the Resource Guard is in.
- 1. Click **Authenticate** to validate your identity and access.
+ 1. Select the dropdown list and choose the directory the Resource Guard is in.
+ 1. Select **Authenticate** to validate your identity and access.
1. After authentication, choose the **Resource Guard** from the list displayed.
-1. Click **Save** once done to enable MUA.
+1. Select **Save** to enable MUA.
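If you need the Resource Guard URI (Resource Guard ID) without opening the portal, a hedged sketch like the following returns it; the resource group and guard names are placeholders.

```powershell
# Look up the Resource Guard and print its full resource ID, which is the URI the vault expects.
(Get-AzResource -ResourceGroupName "rg-resource-guards" `
    -ResourceType "Microsoft.DataProtection/resourceGuards" `
    -Name "rg-guard-demo").ResourceId
```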
+
+# [Backup vault](#tab/backup-vault)
+
+1. Go to the Backup vault for which you want to configure MUA.
+1. On the left panel, select **Properties**.
+1. Go to **Multi-User Authorization** and select **Update**.
+
+1. To enable MUA and choose a Resource Guard, perform one of the following actions:
+
+    - You can either specify the URI of the Resource Guard. Ensure that you specify the URI of a Resource Guard you have **Reader** access to and it's in the same region as the vault. You can find the URI (Resource Guard ID) of the Resource Guard on its **Overview** page.
+
+ - Or, you can select the Resource Guard from the list of Resource Guards you have **Reader** access to, and those available in the region.
+
+ 1. Click **Select Resource Guard**.
+    1. Select the dropdown list, and then choose the directory the Resource Guard is in.
+ 1. Select **Authenticate** to validate your identity and access.
+ 1. After authentication, choose the **Resource Guard** from the list displayed.
+
+1. Select **Save** to enable MUA.
## Next steps

- [Protected operations using MUA](multi-user-authorization.md?pivots=vaults-recovery-services-vault#protected-operations-using-mua)
-- [Authorize critical (protected) operations using Azure AD Privileged Identity Management](multi-user-authorization.md#authorize-critical-protected-operations-using-azure-ad-privileged-identity-management)
+- [Authorize critical (protected) operations using Azure Active Directory Privileged Identity Management](multi-user-authorization.md#authorize-critical-protected-operations-using-azure-active-directory-privileged-identity-management)
- [Performing a protected operation after approval](multi-user-authorization.md#performing-a-protected-operation-after-approval)
-- [Disable MUA on a Recovery Services vault](multi-user-authorization.md#disable-mua-on-a-recovery-services-vault)
+- Disable MUA on a [Recovery Services vault](multi-user-authorization.md?tabs=azure-portal&pivots=vaults-recovery-services-vault#disable-mua-on-a-recovery-services-vault) or a [Backup vault](multi-user-authorization.md?tabs=azure-portal&pivots=vaults-backup-vault#disable-mua-on-a-backup-vault).
backup Multi User Authorization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/multi-user-authorization.md
Title: Configure Multi-user authorization using Resource Guard
description: This article explains how to configure Multi-user authorization using Resource Guard. zone_pivot_groups: backup-vaults-recovery-services-vault-backup-vault Previously updated : 11/08/2022 Last updated : 09/25/2023
This article describes how to configure Multi-user authorization (MUA) for Azure
This article demonstrates Resource Guard creation in a different tenant that offers maximum protection. It also demonstrates how to request and approve requests for performing critical operations using [Azure Active Directory Privileged Identity Management](../active-directory/privileged-identity-management/pim-configure.md) in the tenant housing the Resource Guard. You can optionally use other mechanisms to manage JIT permissions on the Resource Guard as per your setup.
-This document includes the following sections:
-
->[!div class="checklist"]
->- Before you start
->- Testing scenarios
->- Create a Resource Guard
->- Enable MUA on a Recovery Services vault
->- Protected operations on a vault using MUA
->- Authorize critical operations on a vault
->- Disable MUA on a Recovery Services vault
- >[!NOTE]
-> Multi-user authorization for Azure Backup is available in all public Azure regions.
+>- Multi-user authorization for Azure Backup is available in all public Azure regions.
+>- Multi-user authorization using Resource Guard for Backup vault is now generally available. [Learn more](multi-user-authorization.md?pivots=vaults-backup-vault).
## Before you start
To create the Resource Guard in a tenant different from the vault tenant, follow
:::image type="content" source="./media/multi-user-authorization/portal-settings-directories-subscriptions.png" alt-text="Screenshot showing the portal settings.":::
-1. Search for **Resource Guards** in the search bar and select the corresponding item from the drop-down list.
+1. Search for **Resource Guards** in the search bar, and then select the corresponding item from the drop-down list.
- :::image type="content" source="./media/multi-user-authorization/resource-guards-preview-inline.png" alt-text="Screenshot showing resource guards." lightbox="./media/multi-user-authorization/resource-guards-preview-expanded.png":::
+ :::image type="content" source="./media/multi-user-authorization/resource-guards.png" alt-text="Screenshot shows how to search resource guards." lightbox="./media/multi-user-authorization/resource-guards.png":::
- Select **Create** to start creating a Resource Guard.
- In the create blade, fill in the required details for this Resource Guard.
To create the Resource Guard in a tenant different from the vault tenant, follow
You can also [select the operations for protection after creating the resource guard](?pivots=vaults-recovery-services-vault#select-operations-to-protect-using-resource-guard). 1. Optionally, add any tags to the Resource Guard as per the requirements
-1. Select **Review + Create**.
-
- Follow notifications for status and successful creation of the Resource Guard.
+1. Select **Review + Create** and follow notifications for status and successful creation of the Resource Guard.
# [PowerShell](#tab/powershell)
Choose the operations you want to protect using the Resource Guard out of all su
To exempt operations, follow these steps:
-1. In the Resource Guard created above, go to **Properties**.
+1. In the Resource Guard created above, go to **Properties** > **Recovery Services vault** tab.
2. Select **Disable** for operations that you want to exclude from being authorized using the Resource Guard.

   >[!Note]
To enable MUA on a vault, the admin of the vault must have **Reader** role on th
:::image type="content" source="./media/multi-user-authorization/demo-resource-guard-access-control.png" alt-text="Screenshot showing demo resource guard-access control.":::
-1. Select **Reader** from the list of built-in roles and select **Next** on the bottom of the screen.
+1. Select **Reader** from the list of built-in roles, and select **Next**.
:::image type="content" source="./media/multi-user-authorization/demo-resource-guard-add-role-assignment-inline.png" alt-text="Screenshot showing demo resource guard-add role assignment." lightbox="./media/multi-user-authorization/demo-resource-guard-add-role-assignment-expanded.png":::
After the Reader role assignment on the Resource Guard is complete, enable multi
To enable MUA on the vaults, follow these steps.
-1. Go to the Recovery Services vault. Go to **Properties** on the left navigation panel, then to **Multi-User Authorization** and click **Update**.
+1. Go to the Recovery Services vault. Go to **Properties** on the left navigation panel, then to **Multi-User Authorization** and select **Update**.
:::image type="content" source="./media/multi-user-authorization/test-vault-properties.png" alt-text="Screenshot showing the Recovery services vault properties."::: 1. Now, you're presented with the option to enable MUA and choose a Resource Guard using one of the following ways:
- 1. You can either specify the URI of the Resource Guard, make sure you specify the URI of a Resource Guard you have **Reader** access to and that is the same regions as the vault. You can find the URI (Resource Guard ID) of the Resource Guard in its **Overview** screen:
+    - You can either specify the URI of the Resource Guard. Make sure you specify the URI of a Resource Guard you have **Reader** access to and that is in the same region as the vault. You can find the URI (Resource Guard ID) of the Resource Guard in its **Overview** screen:
:::image type="content" source="./media/multi-user-authorization/resource-guard-rg-inline.png" alt-text="Screenshot showing the Resource Guard." lightbox="./media/multi-user-authorization/resource-guard-rg-expanded.png":::
- 1. Or, you can select the Resource Guard from the list of Resource Guards you have **Reader** access to, and those available in the region.
+ - Or, you can select the Resource Guard from the list of Resource Guards you have **Reader** access to, and those available in the region.
1. Click **Select Resource Guard**
- 1. Click on the dropdown and select the directory the Resource Guard is in.
- 1. Click **Authenticate** to validate your identity and access.
+ 1. Select the dropdown list, and then choose the directory the Resource Guard is in.
+ 1. Select **Authenticate** to validate your identity and access.
1. After authentication, choose the **Resource Guard** from the list displayed. :::image type="content" source="./media/multi-user-authorization/testvault1-multi-user-authorization-inline.png" alt-text="Screenshot showing multi-user authorization." lightbox="./media/multi-user-authorization/testvault1-multi-user-authorization-expanded.png" :::
Depicted below is an illustration of what happens when the Backup admin tries to
:::image type="content" source="./media/multi-user-authorization/test-vault-properties-security-settings-inline.png" alt-text="Screenshot showing the Test Vault properties security settings." lightbox="./media/multi-user-authorization/test-vault-properties-security-settings-expanded.png":::
-## Authorize critical (protected) operations using Azure AD Privileged Identity Management
+## Authorize critical (protected) operations using Azure Active Directory Privileged Identity Management
The following sections discuss authorizing these requests using PIM. There are cases where you may need to perform critical operations on your backups and MUA can help you ensure that these are performed only when the right approvals or permissions exist. As discussed earlier, the Backup admin needs to have a Contributor role on the Resource Guard to perform critical operations that are in the Resource Guard scope. One of the ways to allow just-in-time for such operations is through the use of [Azure Active Directory (Azure AD) Privileged Identity Management](../active-directory/privileged-identity-management/pim-configure.md).

>[!NOTE]
>Though using Azure AD PIM is the recommended approach, you can use manual or custom methods to manage access for the Backup admin on the Resource Guard. For managing access to the Resource Guard manually, use the 'Access control (IAM)' setting on the left navigation bar of the Resource Guard and grant the **Contributor** role to the Backup admin.
-### Create an eligible assignment for the Backup admin (if using Azure AD Privileged Identity Management)
+### Create an eligible assignment for the Backup admin (if using Azure Active Directory Privileged Identity Management)
The Security admin can use PIM to create an eligible assignment for the Backup admin as a Contributor to the Resource Guard. This enables the Backup admin to raise a request (for the Contributor role) when they need to perform a protected operation. To do so, the **security admin** performs the following:
By default, the setup above may not have an approver (and an approval flow requi
:::image type="content" source="./media/multi-user-authorization/add-contributor.png" alt-text="Screenshot showing how to add contributor.":::
-1. If the setting named **Approvers** shows *None* or displays incorrect approvers, select **Edit** to add the reviewers who would need to review and approve the activation request for the Contributor role.
+1. If the setting named **Approvers** shows *None* or displays incorrect approvers, select **Edit** to add the reviewers who need to review and approve the activation request for the Contributor role.
1. On the **Activation** tab, select **Require approval to activate** and add the approver(s) who need to approve each request. You can also select other security options like using MFA and mandating ticket options to activate the Contributor role. Optionally, select relevant settings on the **Assignment** and **Notification** tabs as per your requirements.
The tenant ID is required if the resource guard exists in a different tenant.
::: zone pivot="vaults-backup-vault"
-This article describes how to configure Multi-user authorization (MUA) for Azure Backup to add an additional layer of protection to critical operations on your Backup vault (preview).
-
->[!Note]
->Multi-user authorization using Resource Guard for Backup vault is in preview.
+This article describes how to configure Multi-user authorization (MUA) for Azure Backup to add an additional layer of protection to critical operations on your Backup vault.
This article demonstrates Resource Guard creation in a different tenant that offers maximum protection. It also demonstrates how to request and approve requests for performing critical operations using [Azure Active Directory Privileged Identity Management](../active-directory/privileged-identity-management/pim-configure.md) in the tenant housing the Resource Guard. You can optionally use other mechanisms to manage JIT permissions on the Resource Guard as per your setup.
-This document includes the following sections:
-
->[!div class="checklist"]
->- Before you start
->- Testing scenarios
->- Create a Resource Guard
->- Enable MUA on a Backup vault
->- Protected operations on a vault using MUA
->- Authorize critical operations on a vault
->- Disable MUA on a Backup vault
- >[!NOTE]
->Multi-user authorization for Azure Backup is available in all public Azure regions.
+>- Multi-user authorization using Resource Guard for Backup vault is now generally available.
+>- Multi-user authorization for Azure Backup is available in all public Azure regions.
## Before you start
To create the Resource Guard in a tenant different from the vault tenant as a Se
:::image type="content" source="./media/multi-user-authorization/portal-settings-directories-subscriptions.png" alt-text="Screenshot showing the portal settings to configure for Backup vault.":::
-1. Search for **Resource Guards** in the search bar and select the corresponding item from the drop-down list.
+1. Search for **Resource Guards** in the search bar, and then select the corresponding item from the dropdown list.
- :::image type="content" source="./media/multi-user-authorization/resource-guards-preview-inline.png" alt-text="Screenshot showing resource guards for Backup vault." lightbox="./media/multi-user-authorization/resource-guards-preview-expanded.png":::
+ :::image type="content" source="./media/multi-user-authorization/resource-guards.png" alt-text="Screenshot showing resource guards for Backup vault." lightbox="./media/multi-user-authorization/resource-guards.png":::
1. Select **Create** to create a Resource Guard. 1. In the Create blade, fill in the required details for this Resource Guard.
- - Ensure that the Resource Guard is in the same Azure regions as the Backup vault.
+ - Ensure that the Resource Guard is in the same Azure region as the Backup vault.
- Add a description on how to request access to perform actions on associated vaults when needed. This description appears in the associated vaults to guide the Backup admin on how to get the required permissions. 1. On the **Protected operations** tab, select the operations you need to protect using this resource guard under the **Backup vault** tab.
To create the Resource Guard in a tenant different from the vault tenant as a Se
:::image type="content" source="./media/multi-user-authorization/backup-vault-select-operations-for-protection.png" alt-text="Screenshot showing how to select operations for protecting using Resource Guard."::: 1. Optionally, add any tags to the Resource Guard as per the requirements.
-1. Select **Review + Create** and then follow the notifications to monitor the status and a successful creation of the Resource Guard.
+1. Select **Review + Create** and then follow the notifications to monitor the status and the successful creation of the Resource Guard.
### Select operations to protect using Resource Guard
To select the operations for protection, follow these steps:
1. In the Resource Guard that you've created, go to **Properties** > **Backup vault** tab. 1. Select **Disable** for the operations that you want to exclude from being authorized.
- You can't disable the **Remove MUA protection** operation.
+ You can't disable the **Remove MUA protection** and **Disable soft delete** operations.
1. Optionally, in the **Backup vaults** tab, update the description for the Resource Guard. 1. Select **Save**.
To assign the **Reader** role on the Resource Guard, follow these steps:
:::image type="content" source="./media/multi-user-authorization/demo-resource-guard-access-control.png" alt-text="Screenshot showing demo resource guard-access control for Backup vault.":::
-1. Select **Reader** from the list of built-in roles and select **Next** on the bottom of the screen.
+1. Select **Reader** from the list of built-in roles, and select **Next**.
:::image type="content" source="./media/multi-user-authorization/demo-resource-guard-add-role-assignment-inline.png" alt-text="Screenshot showing demo resource guard-add role assignment for Backup vault." lightbox="./media/multi-user-authorization/demo-resource-guard-add-role-assignment-expanded.png":::
Once the Backup admin has the Reader role on the Resource Guard, they can enable
1. To enable MUA and choose a Resource Guard, perform one of the following actions:
- - You can either specify the URI of the Resource Guard. Ensure that you specify the URI of a Resource Guard you have **Reader** access to and it's in the same regions as the vault. You can find the URI (Resource Guard ID) of the Resource Guard on its **Overview** page.
+ - You can either specify the URI of the Resource Guard. Ensure that you specify the URI of a Resource Guard you have **Reader** access to and it's in the same region as the vault. You can find the URI (Resource Guard ID) of the Resource Guard on its **Overview** page.
:::image type="content" source="./media/multi-user-authorization/resource-guard-rg-inline.png" alt-text="Screenshot showing the Resource Guard for Backup vault protection." lightbox="./media/multi-user-authorization/resource-guard-rg-expanded.png"::: - Or, you can select the Resource Guard from the list of Resource Guards you have **Reader** access to, and those available in the region. 1. Click **Select Resource Guard**.
- 1. Select the drop-down and select the directory the Resource Guard is in.
+ 1. Select the dropdown and select the directory the Resource Guard is in.
1. Select **Authenticate** to validate your identity and access. 1. After authentication, choose the **Resource Guard** from the list displayed.
To perform a protected operation (disabling MUA), follow these steps:
:::image type="content" source="./media/multi-user-authorization/test-vault-properties-security-settings-inline.png" alt-text="Screenshot showing the test Backup vault properties security settings." lightbox="./media/multi-user-authorization/test-vault-properties-security-settings-expanded.png":::
-## Authorize critical (protected) operations using Azure AD Privileged Identity Management
+## Authorize critical (protected) operations using Azure Active Directory Privileged Identity Management
-There are scenarios where you may need to perform critical operations on your backups and you can perform them with the right approvals or permissions with MUA. The following sections explain on how to authorize the critical operation requests using Privileged Identity Management (PIM).
+There are scenarios where you may need to perform critical operations on your backups and you can perform them with the right approvals or permissions with MUA. The following sections explain how to authorize the critical operation requests using Privileged Identity Management (PIM).
The Backup admin must have a Contributor role on the Resource Guard to perform critical operations in the Resource Guard scope. One of the ways to allow just-in-time (JIT) operations is through the use of [Azure Active Directory (Azure AD) Privileged Identity Management](../active-directory/privileged-identity-management/pim-configure.md). >[!NOTE]
->We recommend to use the Azure AD PIM. However, you can also use manual or custom methods to manage access for the Backup admin on the Resource Guard. To manually manage access to the Resource Guard, use the *Access control (IAM)* setting on the left pane of the Resource Guard and grant the **Contributor** role to the Backup admin.
+>We recommend that you use Azure AD PIM. However, you can also use manual or custom methods to manage access for the Backup admin on the Resource Guard. To manually manage access to the Resource Guard, use the *Access control (IAM)* setting on the left pane of the Resource Guard and grant the **Contributor** role to the Backup admin.
-### Create an eligible assignment for the Backup admin using Azure AD Privileged Identity Management
+### Create an eligible assignment for the Backup admin using Azure Active Directory Privileged Identity Management
The **Security admin** can use PIM to create an eligible assignment for the Backup admin as a Contributor to the Resource Guard. This enables the Backup admin to raise a request (for the Contributor role) when they need to perform a protected operation.
By default, the above setup may not have an approver (and an approval flow requi
:::image type="content" source="./media/multi-user-authorization/add-contributor.png" alt-text="Screenshot showing how to add a contributor.":::
-1. Select **Edit** to add the reviewers who must review and approve the activation request for the *Contributor* role in case you find that Approvers show *None* or displays incorrect approvers.
+1. Select **Edit** to add the reviewers who must review and approve the activation request for the *Contributor* role in case you find that **Approvers** shows *None* or displays incorrect approvers.
1. On the **Activation** tab, select **Require approval to activate** to add the approver(s) who must approve each request.
-1. Select security options, such as Multi Factor Authentication (MFA), Mandating ticket. to activate *Contributor* role.
+1. Select security options, such as Multi-Factor Authentication (MFA) and mandating a ticket, to activate the *Contributor* role.
1. Select the appropriate options on **Assignment** and **Notification** tabs as per your requirement. :::image type="content" source="./media/multi-user-authorization/edit-role-settings.png" alt-text="Screenshot showing how to edit the role setting.":::
-1. Select **Update** to complete the set-up of approvers to activate *Contributor* role.
+1. Select **Update** to complete the setup of approvers to activate the *Contributor* role.
### Request activation of an eligible assignment to perform critical operations
Once the Backup admin raises a request for activating the Contributor role, the
To review and approve the request, follow these steps:
-1. In the security tenant, go to [Azure AD Privileged Identity Management.](../active-directory/privileged-identity-management/pim-configure.md).
+1. In the security tenant, go to [Azure AD Privileged Identity Management](../active-directory/privileged-identity-management/pim-configure.md).
1. Go to **Approve Requests**. 1. Under **Azure resources**, you can see the request awaiting approval.
backup Quick Backup Azure Enable Enhanced Soft Delete https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/quick-backup-azure-enable-enhanced-soft-delete.md
+
+ Title: Quickstart - Enable enhanced soft delete for Azure Backup
+description: This quickstart describes how to enable enhanced soft delete for Azure Backup.
+ Last updated : 09/11/2023+++++
+# Quickstart: Enable enhanced soft delete in Azure Backup
+
+This quickstart describes how to enable enhanced soft delete to protect your data and recover backups, if they're deleted.
+
+[Enhanced soft delete](backup-azure-enhanced-soft-delete-about.md) provides an improvement to the [soft delete](backup-azure-security-feature-cloud.md) capability in Azure Backup that enables you to recover your backup data in case of accidental or malicious deletion. With enhanced soft delete, you can make soft delete always-on, which protects it from being disabled by any malicious actors. So, enhanced soft delete provides better protection for your backups against various threats. This feature also lets you customize the soft delete retention period, that is, how long soft-deleted data is retained.
+
+>[!Note]
+>Once you enable the *always-on* state for soft delete, you can't disable it for that vault.
+
+## Before you start
+
+- Enhanced soft delete is supported for Recovery Services vaults and Backup vaults.
+- Enhanced soft delete applies to all vaulted workloads in both Recovery Services vaults and Backup vaults. However, it currently doesn't support operational tier workloads, such as Azure Files backup, Operational backup for Blobs, and Disk and VM snapshot backups.
+- For hybrid backups (using MARS, DPM, or MABS), enabling always-on soft delete disallows server deregistration and deletion of backups via the Azure portal. If you don't want to retain the backed-up data, we recommend that you don't enable *always-on soft delete* for the vault, or that you perform *stop protection with delete data* before the server is decommissioned.
+- There's no retention cost for the default soft delete duration of 14 days for vaulted backup; after that, the retained data incurs the regular backup cost.
+
+## Enable soft delete with always-on state
+
+Soft delete is enabled by default for all new vaults you create. To make enabled settings irreversible, select **Enable Always-on Soft Delete**.
+
+**Choose a vault**
+
+# [Recovery Services vault](#tab/recovery-services-vault)
+
+Follow these steps:
+
+1. Go to **Recovery Services vault** > **Properties**.
+
+1. Under **Soft Delete**, select **Update** to modify the soft delete setting.
+
+ :::image type="content" source="./media/backup-azure-enhanced-soft-delete/open-soft-delete-properties-blade-inline.png" alt-text="Screenshot showing you how to open Soft Delete blade." lightbox="./media/backup-azure-enhanced-soft-delete/open-soft-delete-properties-blade-expanded.png":::
+
+ The soft delete settings for cloud and hybrid workloads are already enabled, unless you've explicitly disabled them earlier.
+
+1. If soft delete settings are disabled for any workload type in the **Soft Delete** blade, select the respective checkboxes to enable them.
+
+ >[!Note]
+    >Enabling soft delete for hybrid workloads also enables other security settings, such as multifactor authentication and alert notifications for backup of workloads running on on-premises servers.
+
+1. Choose the number of days between *14* and *180* to specify the soft delete retention period.
+
+ >[!Note]
+ >- There is no cost for soft delete for *14* days. However, deleted instances in soft delete state are charged if the soft delete retention period is *>14* days. Learn about [pricing details](backup-azure-enhanced-soft-delete-about.md#pricing).
+ >- Once configured, the soft delete retention period applies to all soft deleted instances of cloud and hybrid workloads in the vault.
+
+1. Select the **Enable Always-on Soft delete** checkbox to enable soft delete and make it irreversible.
+
+    :::image type="content" source="./media/backup-azure-enhanced-soft-delete/enable-always-on-soft-delete.png" alt-text="Screenshot showing you how to enable always-on state of soft delete.":::
+
+ >[!Note]
+ >If you opt for *Enable Always-on Soft Delete*, select the *confirmation checkbox* to proceed. Once enabled, you can't disable the settings for this vault.
+
+1. Select **Update** to save the changes.
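For Recovery Services vaults, the soft delete state can also be set with Azure PowerShell. The following is a sketch assuming a recent `Az.RecoveryServices` module; the retention parameter and accepted state values may differ across module versions, so verify them with `Get-Help Set-AzRecoveryServicesVaultProperty` first. The resource group and vault names are placeholders.

```powershell
# Get the target Recovery Services vault (names are placeholders).
$vault = Get-AzRecoveryServicesVault -ResourceGroupName "rg-backup" -Name "rsv-demo"

# Turn on soft delete and, if supported by your module version, set the retention period in days.
Set-AzRecoveryServicesVaultProperty -VaultId $vault.ID `
    -SoftDeleteFeatureState "Enable" `
    -SoftDeleteRetentionPeriodInDays 30
```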
+
+# [Backup vault](#tab/backup-vault)
+
+Follow these steps:
+
+1. Go to **Backup vault** > **Properties**.
+
+1. Under **Soft Delete**, select **Update** to modify the soft delete setting.
+
+ :::image type="content" source="./media/backup-azure-enhanced-soft-delete/open-soft-delete-properties.png" alt-text="Screenshot showing you how to open soft delete blade for Backup vault.":::
+
+ Soft delete is enabled by default with the checkboxes selected.
+
+1. If you've explicitly disabled soft delete for any workload type in the **Soft Delete** blade earlier, select the checkboxes to enable them.
+
+1. Choose the number of days between *14* and *180* to specify the soft delete retention period.
+
+ >[!Note]
+ >There is no cost for enabling soft delete for *14* days. However, you're charged for the soft delete instances if soft delete retention period is *>14* days. Learn about the [pricing details](backup-azure-enhanced-soft-delete-about.md#pricing).
+
+1. Select the **Enable Always-on Soft Delete** checkbox to enable soft delete always-on and make it irreversible.
+
+ :::image type="content" source="./media/backup-azure-enhanced-soft-delete/enable-always-on-soft-delete-backup-vault.png" alt-text="Screenshot showing you how to enable always-on state for Backup vault.":::
+
+ >[!Note]
+ >If you opt for *Enable Always-on Soft Delete*, select the *confirmation checkbox* to proceed. Once enabled, you can't disable the settings for this vault.
+
+1. Select **Update** to save the changes.
+++
+## Next steps
+
+- Learn more about [enhanced soft delete for Azure Backup](backup-azure-enhanced-soft-delete-about.md).
+- Learn more about [soft delete of recovery points](backup-azure-enhanced-soft-delete-about.md#soft-delete-of-recovery-points).
backup Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/whats-new.md
Title: What's new in Azure Backup
-description: Learn about new features in Azure Backup.
+description: Learn about the new features in Azure Backup.
Previously updated : 09/14/2023 Last updated : 09/29/2023
You can learn more about the new releases by bookmarking this page or by [subscr
## Updates summary - September 2023
+ - [Multi-user authorization using Resource Guard for Backup vault is now generally available](#multi-user-authorization-using-resource-guard-for-backup-vault-is-now-generally-available)
+ - [Enhanced soft delete for Azure Backup is now generally available](#enhanced-soft-delete-for-azure-backup-is-now-generally-available)
- [Support for selective disk backup with enhanced policy for Azure VM is now generally available](whats-new.md#support-for-selective-disk-backup-with-enhanced-policy-for-azure-vm-is-now-generally-available) - August 2023 - [Save your MARS backup passphrase securely to Azure Key Vault (preview)](#save-your-mars-backup-passphrase-securely-to-azure-key-vault-preview)
You can learn more about the new releases by bookmarking this page or by [subscr
- February 2021 - [Backup for Azure Blobs (in preview)](#backup-for-azure-blobs-in-preview)
+## Multi-user authorization using Resource Guard for Backup vault is now generally available
+
+Azure Backup now supports multi-user authorization (MUA), which lets you add an additional layer of protection to critical operations on your Backup vaults. For MUA, Azure Backup uses an Azure resource called Resource Guard to ensure that critical operations are performed only with applicable authorization.
+
+For more information, see [MUA for Backup vault](multi-user-authorization-concept.md?tabs=backup-vault).
+
+## Enhanced soft delete for Azure Backup is now generally available
+
+Enhanced soft delete provides improvements to the existing [soft delete](backup-azure-security-feature-cloud.md) feature. With enhanced soft delete, you now get the ability to make soft delete always-on, thus protecting it from being disabled by any malicious actors.
+
+You can also customize the soft delete retention period (how long soft-deleted data is retained). Enhanced soft delete is available for Recovery Services vaults and Backup vaults.
+
+>[!Note]
+>Once you enable the *always-on* state for soft delete, you can't disable it for that vault.
+
+For more information, see [Enhanced soft delete for Azure Backup](backup-azure-enhanced-soft-delete-about.md).
## Save your MARS backup passphrase securely to Azure Key Vault (preview)
chaos-studio Chaos Studio Service Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/chaos-studio/chaos-studio-service-limits.md
Last updated 11/01/2021 -+ # Azure Chaos Studio Preview service limits
-This article provides service limits for Azure Chaos Studio Preview.
+This article provides service limits for Azure Chaos Studio Preview. For more information about Azure-wide service limits and quotas, see [Azure subscription and service limits, quotas, and constraints](../azure-resource-manager/management/azure-subscription-service-limits.md).
## Experiment and target limits
-Chaos Studio applies limits to the number of objects, duration of activities, and retention of data.
+Chaos Studio applies limits to the number of resources, duration of activities, and retention of data.
-| Limit | Value |
-|--|--|
-| Actions per experiment | 9 |
-| Branches per experiment | 9 |
-| Steps per experiment | 4 |
-| Action duration (hours) | 12 |
-| Concurrent experiments executing per region and subscription | 5 |
-| Total experiment duration (hours) | 12 |
-| Number of experiments per region and subscription | 500 |
-| Number of targets per action | 50 |
-| Number of active agents per target | 1,000 |
-| Number of targets per region and subscription | 10,000 |
+| Limit | Value | Description |
+|--|--|--|
+| Actions per experiment | 9 | The maximum number of actions (such as faults or time delays) in an experiment. |
+| Branches per experiment | 9 | The maximum number of parallel tracks that can execute within an experiment. |
+| Steps per experiment | 4 | The maximum number of steps that execute in series within an experiment. |
+| Action duration (hours) | 12 | The maximum time duration of an individual action. |
+| Total experiment duration (hours) | 12 | The maximum duration of an individual experiment, including all actions. |
+| Concurrent experiments executing per region and subscription | 5 | The number of experiments that can run at the same time within a region and subscription. |
+| Experiment history retention time (days) | 120 | The time period after which individual results of experiment executions are automatically removed. |
+| Number of experiment resources per region and subscription | 500 | The maximum number of experiment resources a subscription can store in a given region. |
+| Number of targets per action | 50 | The maximum number of resources an individual action can target for execution. For example, the maximum number of Virtual Machines that can be shut down by a single Virtual Machine Shutdown fault. |
+| Number of agents per target | 1,000 | The maximum number of running agents that can be associated with a single target. For example, the agents running on all instances within a single Virtual Machine Scale Set. |
+| Number of targets per region and subscription | 10,000 | The maximum number of target resources within a single subscription and region. |
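To see how close a subscription is to the experiment-resource limit, you can count existing Chaos Studio experiments, for example with this hedged sketch (grouping by location because the limit applies per region and subscription):

```powershell
# Count Chaos Studio experiment resources in the current subscription, grouped by region.
Get-AzResource -ResourceType "Microsoft.Chaos/experiments" |
    Group-Object -Property Location |
    Select-Object Name, Count
```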
## API throttling limits
-Chaos Studio applies limits to all Azure Resource Manager operations. Requests made over the limit are throttled. All request limits are applied for a five-minute interval unless otherwise specified.
+Chaos Studio applies limits to all Azure Resource Manager operations. Requests made over the limit are throttled. All request limits are applied for a **five-minute interval** unless otherwise specified. For more information about Azure Resource Manager requests, see [Throttling Resource Manager requests](../azure-resource-manager/management/request-limits-and-throttling.md).
| Operation | Requests | |--|--|
Chaos Studio applies limits to all Azure Resource Manager operations. Requests m
| Microsoft.Chaos/targets/capabilities/delete | 600 |
| Microsoft.Chaos/locations/targetTypes/read | 50 |
| Microsoft.Chaos/locations/targetTypes/capabilityTypes/read | 50 |
+
chaos-studio Chaos Studio Set Up App Insights https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/chaos-studio/chaos-studio-set-up-app-insights.md
+
+ Title: Set up App Insights for a Chaos Studio agent-based experiment
+description: Understand the steps to connect App Insights to your Chaos Studio Agent-Based Experiment
+++ Last updated : 09/27/2023++++
+# How-to: Configure your experiment to emit Experiment Fault Events to App Insights
+In this guide, we'll show you the steps needed to configure a Chaos Studio **Agent-based** Experiment to emit telemetry to App Insights. These events show the start and stop of each fault as well as the type of fault executed and the resource the fault was executed against. App Insights is the primary recommended logging solution for **Agent-based** experiments in Chaos Studio.
+
+## Prerequisites
+- An Azure subscription
+- An existing Chaos Studio [**Agent-based** Experiment](chaos-studio-tutorial-agent-based-portal.md)
+- An existing [Log Analytics workspace](../azure-monitor/logs/quick-create-workspace.md) (required for the Application Insights resource)
+- An existing [Application Insights Resource](../azure-monitor/app/create-workspace-resource.md)
+- A [user-assigned managed identity](../active-directory/managed-identities-azure-resources/how-manage-user-assigned-managed-identities.md) (required for agent-based chaos experiments)
+
+## Step 1: Copy the Instrumentation Key from your Application Insights Resource
+Once you have met all the prerequisites, copy the **Instrumentation Key** found on the overview page of your Application Insights resource (see the following screenshot).
+
+<br/>
+
+[![Screenshot that shows Instrumentation Key in App Insights.](images/step-1a-app-insights.png)](images/step-1a-app-insights.png#lightbox)
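If you prefer not to copy the key from the portal, a hedged sketch with the `Az.ApplicationInsights` module can return it; the resource group and resource names are placeholders.

```powershell
# Read the Application Insights resource and print its instrumentation key.
$appInsights = Get-AzApplicationInsights -ResourceGroupName "rg-chaos" -Name "appi-chaos-demo"
$appInsights.InstrumentationKey
```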
+
+## Step 2: Enable the Target Platform for your Agent-Based Fault with Application Insights
+Navigate to the Chaos Studio overview page and click on the **Targets** blade under the "Experiments Management" section. Find the target platform, ensure it's enabled for agent-based faults, and select "Manage Actions" in the right-most column. See screenshot below for an example:
+<br/>
+
+<br/>
+
+[![Screenshot that shows the Chaos Targets Page.](images/step-2a-app-insights.png)](images/step-2a-app-insights.png#lightbox)
+
+## Step 3: Add your Application Insights account and Instrumentation key
+At this point, the resource configuration page shown in the screenshot should appear. After configuring your managed identity, make sure Application Insights is "Enabled", then select your desired Application Insights account and enter the Instrumentation Key you copied in Step 1. Once you have filled out the required information, select "Review+Create" to deploy your resource.
+
+<br/>
+
+[![Screenshot of Targets Deployment Page.](images/step-3a-app-insights.png)](images/step-3a-app-insights.png#lightbox)
+
+## Step 4: Run the chaos experiment
+Your Chaos Target is now configured to emit telemetry to the App Insights resource you configured. If you navigate to your Application Insights resource and open the "Logs" blade under the "Monitoring" section, you should see the Agent health status and any actions the Agent takes on your Target Platform. You can now run your experiment and see the logging in your Application Insights resource. The following screenshot shows an example of an App Insights resource receiving logs from an Agent-based Chaos Target platform.
+
+<br/>
+
+To query your logs, navigate to the "Logs" tab in the Application Insights resource to get your desired logging information in your desired format.
+
+<br/>
+
+[![Screenshot of Logs tab in Application Insights Resource.](images/step-4a-app-insights.png)](images/step-4a-app-insights.png#lightbox)
chaos-studio Chaos Studio Set Up Azure Monitor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/chaos-studio/chaos-studio-set-up-azure-monitor.md
+
+ Title: Set up Azure monitor for a Chaos Studio experiment
+description: Understand the steps to connect Azure Monitor to your Chaos Studio Experiment
+++ Last updated : 09/27/2023++++
+# How-to: Configure your experiment to emit Experiment Fault Events to Azure Monitor
+In this guide, we'll show you the steps needed to integrate an Experiment to emit telemetry to Azure Monitor. These events show the start and stop of each fault as well as the type of fault executed and the resource the fault was executed against. You can overlay this data on top of your existing Azure Monitor or external monitoring dashboards.
+
+## Prerequisites
+- An Azure subscription
+- An existing Chaos Studio Experiment [How to create your first Chaos Experiment](chaos-studio-quickstart-azure-portal.md)
+- An existing Log Analytics Workspace [How to Create a Log Analytics Workspace](../azure-monitor/logs/quick-create-workspace.md)
+
+## Step 1: Navigate to Diagnostic Settings tab in your Chaos Experiment
+Navigate to the Chaos Experiment that you want to configure to emit telemetry to Azure Monitor, and open it. Then go to the "Diagnostic settings" tab under the "Monitoring" section, as shown in the following screenshot:
+
+<br/>
+
+[![Screenshot that shows Diagnostic Settings in Chaos Experiment.](images/step-1a.png)](images/step-1a.png#lightbox)
+
+## Step 2: Connect your Chaos Experiment to your desired Log Analytics Workspace
+Once you are in the "Diagnostic Settings" tab within your Chaos Experiment, select "Add Diagnostic Setting."
+Enter the following details:
+1. **Diagnostic Setting Name**: Any string you want, much like a resource group name.
+2. **Category Groups**: Choose which category of logging you want to output to the Log Analytics workspace.
+3. **Subscription**: The subscription that includes the Log Analytics workspace you would like to use.
+4. **Log Analytics Workspace**: Where you select your desired Log Analytics workspace.
+<br/>
+All the other settings are optional.
+<br/>
+
+<br/>
+
+[![Screenshot that shows the Diagnostic Settings blade and required information.](images/step-2a.png)](images/step-2a.png#lightbox)
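The same diagnostic setting can be created programmatically. The following sketch assumes Az.Monitor 3.x or later (where `New-AzDiagnosticSetting` and `New-AzDiagnosticSettingLogSettingsObject` are available); the experiment resource ID, workspace ID, and category group are placeholders and assumptions, so check the cmdlet help for your module version.

```powershell
# Placeholders: the chaos experiment resource ID and the Log Analytics workspace resource ID.
$experimentId = "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.Chaos/experiments/<experiment-name>"
$workspaceId  = "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.OperationalInsights/workspaces/<workspace-name>"

# Send all log categories emitted by the experiment to the workspace.
$logs = New-AzDiagnosticSettingLogSettingsObject -Enabled $true -CategoryGroup "allLogs"

New-AzDiagnosticSetting -Name "chaos-to-law" `
    -ResourceId $experimentId `
    -WorkspaceId $workspaceId `
    -Log $logs
```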
+
+## Step 3: Run the chaos experiment
+Once you complete Step 2, your experiment is configured to emit telemetry to Azure Monitor on the next experiment execution. It typically takes about 20 minutes for the logs to populate. Once populated, you can view the log events from the logs tab. Events include experiment start, stop, and details about the faults executed. You can even turn the logs into chart visualizations or overlay your existing live-site visualizations with chaos metadata.
+
+<br/>
+
+To query your logs, navigate to the "Logs" tab in your Chaos Experiment Resource to get your desired logging information in your desired format.
+
+<br/>
+
+[![Screenshot of Logs tab in Chaos Experiment Resource.](images/step-3a.png)](images/step-3a.png#lightbox)
cloud-shell Persisting Shell Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-shell/persisting-shell-storage.md
description: Walkthrough of how Azure Cloud Shell persists files. ms.contributor: jahelmic Previously updated : 04/25/2023 Last updated : 09/29/2023 tags: azure-resource-manager
This fileshare is used for both Bash and PowerShell.
## Use existing resources
-Using the advanced option, you can associate existing resources. When selecting a Cloud Shell region,
-you must select a backing storage account co-located in the same region. For example, if your
-assigned region is West US then you must associate a fileshare that resides within West US as well.
-
-When the storage setup prompt appears, select **Show advanced settings** to view more options. The
-populated storage options filter for locally redundant storage (LRS), geo-redundant storage (GRS),
-and zone-redundant storage (ZRS) accounts.
+Using the advanced option, you can associate existing resources. When the storage setup prompt
+appears, select **Show advanced settings** to view more options. The populated storage options
+filter for locally redundant storage (LRS), geo-redundant storage (GRS), and zone-redundant storage
+(ZRS) accounts.
> [!NOTE] > Using GRS or ZRS storage accounts are recommended for additional resiliency for your backing file
Cloud Shell machines exist in the following regions:
| Europe | North Europe, West Europe | | Asia Pacific | India Central, Southeast Asia |
-Customers should choose a primary region, unless they have a requirement that their data at rest be
-stored in a particular region. If they have such a requirement, a secondary storage region should be
-used.
+You should choose a region that meets your requirements.
### Secondary storage regions
of their fileshare.
## Restrict resource creation with an Azure resource policy
-Storage accounts that you create in Cloud Shell are tagged with
-`ms-resource-usage:azure-cloud-shell`. If you want to disallow users from creating storage accounts
-in Cloud Shell, create an [Azure resource policy for tags][02] that is triggered by this specific
-tag.
+Storage accounts that you create in Cloud Shell are tagged with `ms-resource-usage:azure-cloud-shell`.
+If you want to disallow users from creating storage accounts in Cloud Shell, create an
+[Azure resource policy for tags][02] that's triggered by this specific tag.
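As a sketch of what such a policy might look like, the following PowerShell creates a definition that denies storage accounts carrying the Cloud Shell tag and assigns it at subscription scope. The policy name, scope, and exact rule are illustrative assumptions, so adapt them to your governance setup.

```powershell
# Policy rule: deny creation of storage accounts tagged ms-resource-usage = azure-cloud-shell.
$policyRule = @'
{
  "if": {
    "allOf": [
      { "field": "type", "equals": "Microsoft.Storage/storageAccounts" },
      { "field": "tags['ms-resource-usage']", "equals": "azure-cloud-shell" }
    ]
  },
  "then": { "effect": "deny" }
}
'@

# Create the policy definition and assign it at subscription scope (IDs are placeholders).
$definition = New-AzPolicyDefinition -Name "deny-cloud-shell-storage" `
    -DisplayName "Deny Cloud Shell storage accounts" -Policy $policyRule

New-AzPolicyAssignment -Name "deny-cloud-shell-storage" `
    -Scope "/subscriptions/<subscription-id>" `
    -PolicyDefinition $definition
```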
## How Cloud Shell storage works
cloud-shell Quickstart Deploy Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-shell/quickstart-deploy-vnet.md
description: This article provides step-by-step instructions to deploy Azure Cloud Shell in a private virtual network. ms.contributor: jahelmic Previously updated : 06/29/2023 Last updated : 09/29/2023 Title: Deploy Azure Cloud Shell in a VNET with quickstart templates
+ Title: Deploy Azure Cloud Shell in a virtual network with quickstart templates
-# Deploy Azure Cloud Shell in a VNET with quickstart templates
+# Deploy Azure Cloud Shell in a virtual network with quickstart templates
-Before you can deploy Azure Cloud Shell in a virtual network (VNET) configuration using the
+Before you can deploy Azure Cloud Shell in a virtual network (VNet) configuration using the
quickstart templates, there are several prerequisites to complete. This document guides you through the process to complete the configuration.
-## Steps to deploy Azure Cloud Shell in a VNET
+## Steps to deploy Azure Cloud Shell in a virtual network
-This article walks you through the following steps to deploy Azure Cloud Shell in a VNET:
+This article walks you through the following steps to deploy Azure Cloud Shell in a virtual network:
1. Collect the required information
-1. Provision the virtual networks using the **Azure Cloud Shell - VNet** ARM template
-1. Provision the VNET storage account using the **Azure Cloud Shell - VNet storage** ARM template
-1. Configure and use Azure Cloud Shell in a VNET
+1. Create the virtual networks using the **Azure Cloud Shell - VNet** ARM template
+1. Create the virtual network storage account using the **Azure Cloud Shell - VNet storage** ARM template
+1. Configure and use Azure Cloud Shell in a virtual network
## 1. Collect the required information There are several pieces of information that you need to collect before you can deploy Azure Cloud Shell. You can use the default Azure Cloud Shell instance to gather the required information and create the
-necessary resources. You should create dedicated resources for the Azure Cloud Shell VNET
+necessary resources. You should create dedicated resources for the Azure Cloud Shell VNet
deployment. All resources must be in the same Azure region and contained in the same resource group. - **Subscription** - The name of your subscription containing the resource group used for the Azure
- Cloud Shell VNET deployment
-- **Resource Group** - The name of the resource group used for the Azure Cloud Shell VNET deployment
+ Cloud Shell VNet deployment
+- **Resource Group** - The name of the resource group used for the Azure Cloud Shell VNet deployment
- **Region** - The location of the resource group-- **Virtual Network** - The name of the virtual network created for Azure Cloud Shell VNET
+- **Virtual Network** - The name of the virtual network created for Azure Cloud Shell VNet
- **Azure Container Instance OID** - The ID of the Azure Container Instance for your resource group - **Azure Relay Namespace** - The name that you want to assign to the Relay resource created by the template
Register-AzResourceProvider -ProviderNamespace Microsoft.ContainerInstance
### Azure Container Instance ID
-To configure the VNET for Cloud Shell using the quickstarts, retrieve the `Azure Container Instance`
+To configure the virtual network for Cloud Shell using the quickstarts, retrieve the `Azure Container Instance`
ID for your organization. ```powershell
Azure Container Instance Service 8fe7fd25-33fe-4f89-ade3-0e705fcf4370 34fbe509-d
Take note of the **Id** value for the `Azure Container Instance` service principal. It's needed for the **Azure Cloud Shell - VNet storage** template.
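A sketch of that lookup with Azure PowerShell follows; the display name filter matches the service principal shown in the sample output above, but verify it against your tenant.

```powershell
# Sketch: look up the Azure Container Instance service principal and note its Id.
Get-AzADServicePrincipal -DisplayNameBeginsWith 'Azure Container Instance' |
    Select-Object DisplayName, Id
```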
-## 2. Provision the virtual network using the ARM template
+## 2. Create the virtual network using the ARM template
Use the [Azure Cloud Shell - VNet][08] template to create Cloud Shell resources in a virtual network. The template creates three subnets under the virtual network created earlier. You may choose to change the supplied names of the subnets or use the defaults. The virtual network, along
-with the subnets, require valid IP address assignments.
+with the subnets, requires valid IP address assignments. You need at least one IP address for the
+Relay subnet and enough IP addresses in the container subnet to support the number of concurrent
+sessions you expect to use.
The ARM template requires specific information about the resources you created earlier, along with naming information for new resources. This information is filled out along with the prefilled
information in the form.
Information needed for the template: - **Subscription** - The name of your subscription containing the resource group for Azure Cloud
- Shell VNET
+ Shell VNet
- **Resource Group** - The resource group name of either an existing or newly created resource group - **Region** - Location of the resource group-- **Virtual Network** - The name of the virtual network created for Azure Cloud Shell VNET
+- **Virtual Network** - The name of the virtual network created for the Azure Cloud Shell virtual network deployment
- **Azure Container Instance OID** - The ID of the Azure Container Instance for your resource group Fill out the form with the following information:
Fill out the form with the following information:
| Instance details | Value | | - | - | | Region | Prefilled with your default region.<br>For this example, we're using `East US`. |
-| Existing VNET Name | Fill in the value from the prerequisite information you gathered.<br>For this example, we're using `vnet-cloudshell-eastus`. |
+| Existing Virtual Network Name | Fill in the value from the prerequisite information you gathered.<br>For this example, we're using `vnet-cloudshell-eastus`. |
| Relay Namespace Name | Create a name that you want to assign to the Relay resource created by the template.<br>For this example, we're using `arn-cloudshell-eastus`. | | Azure Container Instance OID | Fill in the value from the prerequisite information you gathered.<br>For this example, we're using `8fe7fd25-33fe-4f89-ade3-0e705fcf4370`. | | Container Subnet Name | Defaults to `cloudshellsubnet`. Enter the name of the subnet for your container. |
Fill out the form with the following information:
Once the form is complete, select **Review + Create** and deploy the network ARM template to your subscription.
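The same template can be deployed from the command line instead of the portal form. The following is a sketch: the template URI is a placeholder for the **Azure Cloud Shell - VNet** quickstart template linked above, and the parameter names are assumptions to check against the template you actually deploy.

```powershell
# Sketch: deploy the network quickstart template with the values gathered earlier.
# The template URI and parameter names are placeholders; confirm them against the template.
New-AzResourceGroupDeployment `
  -ResourceGroupName 'rg-cloudshell-eastus' `
  -TemplateUri '<uri-of-the-Azure-Cloud-Shell-VNet-template>' `
  -TemplateParameterObject @{
    existingVNETName          = 'vnet-cloudshell-eastus'
    relayNamespaceName        = 'arn-cloudshell-eastus'
    azureContainerInstanceOID = '8fe7fd25-33fe-4f89-ade3-0e705fcf4370'
  }
```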
-## 3. Provision the VNET storage using the ARM template
+## 3. Create the virtual network storage using the ARM template
Use the [Azure Cloud Shell - VNet storage][09] template to create Cloud Shell resources in a virtual
-network. The template creates the storage account and assigns it to the private VNET.
+network. The template creates the storage account and assigns it to the private virtual network.
The ARM template requires specific information about the resources you created earlier, along with naming information for new resources.
with naming information for new resources.
Information needed for the template: - **Subscription** - The name of the subscription containing the resource group for Azure Cloud
- Shell VNET.
+ Shell virtual network.
- **Resource Group** - The resource group name of either an existing or newly created resource group - **Region** - Location of the resource group-- **Existing VNET name** - The name of the virtual network created earlier
+- **Existing Virtual Network Name** - The name of the virtual network created earlier
- **Existing Storage Subnet Name** - The name of the storage subnet created with the Network quickstart template - **Existing Container Subnet Name** - The name of the container subnet created with the Network
Fill out the form with the following information:
| Instance details | Value | | | | | Region | Prefilled with your default region.<br>For this example, we're using `East US`. |
-| Existing VNET Name | For this example, we're using `vnet-cloudshell-eastus`. |
+| Existing Virtual Network Name | For this example, we're using `vnet-cloudshell-eastus`. |
| Existing Storage Subnet Name | Fill in the name of the resource created by the network template. | | Existing Container Subnet Name | Fill in the name of the resource created by the network template. | | Storage Account Name | Create a name for the new storage account.<br>For this example, we're using `myvnetstorage1138`. |
subscription.
## 4. Configure Cloud Shell to use a virtual network
-After deploying your private Cloud Shell instance, each Cloud Shell user must change their
+After you have deployed your private Cloud Shell instance, each Cloud Shell user must change their
configuration to use the new private instance. If you have used the default Cloud Shell before deploying the private instance, you must reset your
cloud-shell Troubleshooting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-shell/troubleshooting.md
description: This article covers troubleshooting Cloud Shell common scenarios. ms.contributor: jahelmic Previously updated : 05/03/2023 Last updated : 09/29/2023 tags: azure-resource-manager
Azure Cloud Shell has the following known limitations:
### Quota limitations
-Azure Cloud Shell has a limit of 20 concurrent users per tenant per region. Opening more than 20
-simultaneous sessions produces a "Tenant User Over Quota" error. If you have a legitimate need to
-have more than 20 sessions open, such as for training sessions, contact Support to request a quota
-increase before your anticipated usage.
+Azure Cloud Shell has a limit of 20 concurrent users per tenant. Opening more than 20 simultaneous
+sessions produces a "Tenant User Over Quota" error. If you have a legitimate need to have more than
+20 sessions open, such as for training sessions, contact Support to request a quota increase before
+your anticipated usage.
Cloud Shell is provided as a free service for managing your Azure environment. It's not intended as a general-purpose computing platform. Excessive automated usage may be considered in breach of the Azure Terms
considerations include:
- With mounted storage, only modifications within the `clouddrive` directory are persisted. In Bash, your `$HOME` directory is also persisted.-- Azure fileshares can be mounted only from within your [assigned region][05].
- - In Bash, run `env` to find your region set as `ACC_LOCATION`.
- Azure Files supports only locally redundant storage and geo-redundant storage accounts. ### Browser support
Azure Cloud Shell in Azure Government is only accessible through the Azure porta
<!-- link references --> [04]: https://docs.docker.com/desktop/
-[05]: persisting-shell-storage.md#mount-a-new-clouddrive
[06]: /powershell/microsoftgraph/migration-steps
communication-services Calling Chat https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/interop/calling-chat.md
As part of this preview, the Azure Communication Services SDKs can be used to bu
To enable calling and chat between your Communication Services users and Teams tenant, allow your tenant via the [form](https://forms.office.com/r/F3WLqPjw0D) and enable the connection between the tenant and Communication Services resource. --
-## Enable interoperability in your Teams tenant
-Azure AD user with [Teams administrator role](../../../active-directory/roles/permissions-reference.md#teams-administrator) can run PowerShell cmdlet with MicrosoftTeams module to enable the Communication Services resource in the tenant.
-
-### 1. Prepare the Microsoft Teams module
-
-First, open the PowerShell and validate the existence of the Teams module with the following command:
-
-```script
-Get-module *teams*
-```
-
-If you don't see the `MicrosoftTeams` module, install it first. To install the module, you need to run PowerShell as an administrator. Then run the following command:
-
-```script
- Install-Module -Name MicrosoftTeams
-```
-
-You'll be informed about the modules that will be installed, which you can confirm with a `Y` or `A` answer. If the module is installed but is outdated, you can run the following command to update the module:
-
-```script
- Update-Module MicrosoftTeams
-```
-
-### 2. Connect to Microsoft Teams module
-
-When the module is installed and ready, you can connect to the MicrosftTeams module with the following command. You'll be prompted with an interactive window to log in. The user account that you're going to use needs to have Teams administrator permissions. Otherwise, you might get an `access denied` response in the next steps.
-
-```script
-Connect-MicrosoftTeams
-```
-
-### 3. Enable tenant configuration
-
-Interoperability with Communication Services resources is controlled via tenant configuration and assigned policy. Teams tenant has a single tenant configuration, and Teams users have assigned global policy or custom policy. The following table shows possible scenarios and impacts on interoperability.
-
-| Tenant configuration | Global policy | Custom policy | Assigned policy | Interoperability |
-| | | | | |
-| True | True | True | Global | **Enabled** |
-| True | True | True | Custom | **Enabled** |
-| True | True | False | Global | **Enabled** |
-| True | True | False | Custom | Disabled |
-| True | False | True | Global | Disabled |
-| True | False | True | Custom | **Enabled** |
-| True | False | False | Global | Disabled |
-| True | False | False | Custom | Disabled |
-| False | True | True | Global | Disabled |
-| False | True | True | Custom | Disabled |
-| False | True | False | Global | Disabled |
-| False | True | False | Custom | Disabled |
-| False | False | True | Global | Disabled |
-| False | False | True | Custom | Disabled |
-| False | False | False | Global | Disabled |
-| False | False | False | Custom | Disabled |
-
-After successful login, you can run the cmdlet [Set-CsTeamsAcsFederationConfiguration](/powershell/module/teams/set-csteamsacsfederationconfiguration) to enable Communication Services resource in your tenant. Replace the text `IMMUTABLE_RESOURCE_ID` with an immutable resource ID in your communication resource. You can find more details on how to get this information [here](../troubleshooting-info.md#getting-immutable-resource-id).
-
-```script
-$allowlist = @('IMMUTABLE_RESOURCE_ID')
-Set-CsTeamsAcsFederationConfiguration -EnableAcsUsers $True -AllowedAcsResources $allowlist
-```
-
-### 4. Enable tenant policy
-
-Each Teams user has assigned an `External Access Policy` that determines whether Communication Services users can call this Teams user. Use cmdlet
-[Set-CsExternalAccessPolicy](/powershell/module/skype/set-csexternalaccesspolicy) to ensure that the policy assigned to the Teams user has set `EnableAcsFederationAccess` to `$true`
-
-```script
-Set-CsExternalAccessPolicy -Identity Global -EnableAcsFederationAccess $true
-```
-- ## Get Teams user ID To start a call or chat with a Teams user or Teams Voice application, you need an identifier of the target. You have the following options to retrieve the ID:
communication-services Spotlight https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/how-tos/calling-sdk/spotlight.md
Last updated 03/01/2023
+zone_pivot_groups: acs-plat-web-ios-android-windows
# Spotlight states
-In this article, you'll learn how to implement Microsoft Teams spotlight capability with Azure Communication Services Calling SDKs. This capability allows users in the call or meeting to pin and unpin videos for everyone.
-
+In this article, you learn how to implement Microsoft Teams spotlight capability with Azure Communication Services Calling SDKs. This capability allows users in the call or meeting to pin and unpin videos for everyone.
Because the video stream resolution of a participant increases when spotlighted, the settings configured in [Video Constraints](../../concepts/voice-video-calling/video-constraints.md) also apply to spotlight. ++++ ## Prerequisites - An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
Since the video stream resolution of a participant is increased when spotlighted
- A user access token to enable the calling client. For more information, see [Create and manage access tokens](../../quickstarts/identity/access-tokens.md). - Optional: Complete the quickstart to [add voice calling to your application](../../quickstarts/voice-video-calling/getting-started-with-calling.md)
-Communication Services or Microsoft 365 users can call the spotlight APIs based on role type and conversation type
-
-**In a one to one call or group call scenario, the following APIs are supported for both Communication Services and Microsoft 365 users**
-|APIs| Organizer | Presenter | Attendee |
-|-|--|--|--|
-| startSpotlight | ✔️ | ✔️ | ✔️ |
-| stopSpotlight | ✔️ | ✔️ | ✔️ |
-| stopAllSpotlight | ✔️ | ✔️ | ✔️ |
-| getSpotlightedParticipants | ✔️ | ✔️ | ✔️ |
-**For meeting scenario the following APIs are supported for both Communication Services and Microsoft 365 users**
-|APIs| Organizer | Presenter | Attendee |
-|-|--|--|--|
-| startSpotlight | ✔️ | ✔️ | |
-| stopSpotlight | ✔️ | ✔️ | ✔️ |
-| stopAllSpotlight | ✔️ | ✔️ | |
-| getSpotlightedParticipants | ✔️ | ✔️ | ✔️ |
## Next steps - [Learn how to manage calls](./manage-calls.md)
communication-services Get Started Teams Auto Attendant https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/voice-video-calling/get-started-teams-auto-attendant.md
In this quickstart you are going to learn how to start a call from Azure Communi
If you'd like to skip ahead to the end, you can download this quickstart as a sample on [GitHub](https://github.com/Azure-Samples/communication-services-javascript-quickstarts/tree/main/add-1-on-1-cte-video-calling). + ## Create or select Teams Auto Attendant Teams Auto Attendant is system that provides an automated call handling system for incoming calls. It serves as a virtual receptionist, allowing callers to be automatically routed to the appropriate person or department without the need for a human operator. You can select existing or create new Auto Attendant via [Teams Admin Center](https://aka.ms/teamsadmincenter).
In the results, we're able to find the "ID" field
"id": "31a011c2-2672-4dd0-b6f9-9334ef4999db" ``` ## Clean up resources
communication-services Get Started Teams Call Queue https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/voice-video-calling/get-started-teams-call-queue.md
In this quickstart you are going to learn how to start a call from Azure Communi
If you'd like to skip ahead to the end, you can download this quickstart as a sample on [GitHub](https://github.com/Azure-Samples/communication-services-javascript-quickstarts/tree/main/add-1-on-1-cte-video-calling). + ## Create or select Teams Call Queue Teams Call Queue is a feature in Microsoft Teams that efficiently distributes incoming calls among a group of designated users or agents. It's useful for customer support or call center scenarios. Calls are placed in a queue and assigned to the next available agent based on a predetermined routing method. Agents receive notifications and can handle calls using Teams' call controls. The feature offers reporting and analytics for performance tracking. It simplifies call handling, ensures a consistent customer experience, and optimizes agent productivity. You can select existing or create new Call Queue via [Teams Admin Center](https://aka.ms/teamsadmincenter).
communication-services Contact Center https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/tutorials/contact-center.md
+
+ Title: Contact centers with Azure Communication Services
+description: Learn concepts for contact center apps
+++++ Last updated : 09/27/2023+++++
+# Contact center
+
+This tutorial describes concepts for **contact center** applications. After completing this tutorial, you'll understand the common use cases that a contact center application delivers, the Microsoft technologies that can help you build those use cases, and how to build a sample application integrating Microsoft Teams and Azure that you can use to demo and explore further.
+
+Contact center applications are focused on unscheduled communication between **consumers** and **agents**. The **organizational boundary** between consumers and agents, and the **unscheduled** nature of the interaction, are key attributes of contact center applications.
+
+Azure and Teams are interoperable. This interoperability gives organizations choice in how they interact with customers using the Microsoft Cloud. Three examples include:
+
+- **Teams Phone** provides a zero-code suite for customer communication using [Teams Cloud Auto attendants and Call queues](/microsoftteams/plan-auto-attendant-call-queue) and [Click-to-call](https://techcommunity.microsoft.com/t5/microsoft-teams-blog/what-s-new-in-microsoft-teams-at-enterprise-connect-2023/ba-p/3774374).
+- **Teams + Azure hybrid.** Combine Teams with a custom Azure application to manage or route communication, or for a custom consumer or agent experience. This document currently focuses on these scenarios.
+- **Azure custom.** Build the entire customer engagement experience on Azure primitives: the business experience, the consumer experience, the job routing, and the intelligent insights. Azure Communication Services provides several products for custom solutions such as:
+ - [Call Automation](/azure/communication-services/concepts/call-automation/call-automation-teams-interop) - Build AI-assisted programmable calling workflows
+ - [Job Router](/azure/communication-services/concepts/router/concepts) - Match jobs to the most suitable worker
+ - [UI Library](/azure/communication-services/concepts/ui-library/ui-library-overview?pivots=platform-web) - Develop custom web and mobile experiences for end users
+
+Developers interested in scheduled business-to-consumer interactions should read our [Virtual Visits](/azure/communication-services/tutorials/virtual-visits) tutorial. This article focuses on *inbound* engagement, where the consumer initiates communication. Many businesses also have *outbound* communication needs, for which we recommend the outbound customer engagement tutorial.
+
+The term "contact center" captures a large family of applications that are diverse across scale, channels, and organizational approach:
+
+- **Scale**. Small businesses may have a small number of employees operating as agents in a limited role, for example, a restaurant offering a phone number for reservations, while an airline may have thousands of employees and vendors providing a 24/7 contact center.
+- **Channel**. Organizations can reach consumers through the phone system, apps, SMS, or consumer communication platforms such as WhatsApp.
+- **Organizational approach**. Most businesses have employees operate as agents using Teams or licensed contact center as a service (CCaaS) software. Other businesses may outsource the agent role or use specialized service providers who fully operate contact centers as a service.
+
+## User Personas
+
+No matter the industry, there are at least five personas involved in a contact center, each with certain tasks they accomplish:
+
+- **Designer**. The designer defines the consumer experience. What consumer questions, interactions, and needs does the contact center solve for? What channels are used? How is the consumer routed to different agent pools using bots or interactive voice response?
+- **Shift Manager**. The shift manager organizes agents. They monitor consumer satisfaction and other business outcomes.
+- **Agent**. The human being who engages consumers.
+- **Expert**. A human being to whom agents escalate.
+- **Consumer**. The human being, external to the organization, that initiates communication. Some companies operate internal contact centers, for example an IT support organization that receives requests from users (consumers).
+
+The rest of this article provides the high-level architecture and data flows for two different contact center designs:
+
+1. Consumers going to a website (or mobile app), talking to a chat bot, and then starting a voice call answered by a Teams-hosted agent.
+2. Consumers initiating a voice interaction by calling a phone number from an organization's Teams phone system.
+
+These examples build on each other in increasing complexity. GitHub and the Azure Communication Services Sample Builder host sample code that matches these simplified architectures.
+
+## Chat on a website with a bot agent
+
+Communication Services Chat applications can be integrated with an Azure Bot Service. The Bot Service needs to be linked to a Communication Services resource using a channel in the Azure Portal. To learn more about this scenario, see [Add a bot to your chat app - An Azure Communication Services quickstart](/azure/communication-services/quickstarts/chat/quickstart-botframework-integration).
+
+![Data flow diagram for chat with a bot agent](media/contact-center/data-flow-diagram-chat-bot.png)
+
+### Dataflow
+
+1. An Azure Communication Services Chat channel is connected to an Azure Bot Service in Azure Portal by an administrator.
+2. A user clicks a widget in a client application to contact an agent.
+3. The Contact Center Service creates a Chat thread and adds the user ID for the bot to the thread.
+4. A user sends messages to and receives messages from the bot using the Azure Communication Services Chat SDK.
+5. The bot sends messages to and receives messages from the user using the Azure Communication Services Chat channel.
+
+## Chat on a website that escalates to a voice call answered by a Teams agent
+
+A conversation between a user and a bot can be handed off to an agent in Teams. Optionally, a Teams Voice App such as an Auto Attendant or Call Queue can control the transition. To learn more about bot handoff integration models, see [Transition conversations from bot to human - Bot Service](/azure/bot-service/bot-service-design-pattern-handoff-human?view=azure-bot-service-4.0). To learn more about Teams Auto Attendants and Call Queues, see [Plan for Teams Auto attendants and Call queues - Microsoft Teams](/microsoftteams/plan-auto-attendant-call-queue).
+
+![Data flow diagram for chat escalating to a call](media/contact-center/data-flow-diagram-escalate-to-call.png)
+
+### Dataflow
+
+1. A user clicks a widget in the client application to contact an agent.
+2. The Contact Center Service creates a Chat thread and adds an Azure Bot to the thread.
+3. The user interacts with the Azure Bot by sending and receiving Chat messages.
+4. The Contact Center Service hands the user off to a Teams Call Queue or Auto Attendant.
+5. The Teams Voice App hands the user off to an employee acting as an agent using Teams. The user and the employee interact using audio, video, and screen sharing.
+
+### Detailed capabilities
+
+The following table presents the set of features that are currently available for contact centers in Azure Communication Services. For detailed capability information, see [Azure Communication Services Calling SDK overview](/azure/communication-services/concepts/voice-video-calling/calling-sdk-features). Azure Communication Services Calling to Teams, including Teams Auto Attendant and Call Queue, requires setup to be completed as described in [Teams calling and chat interoperability](/azure/communication-services/concepts/interop/calling-chat).
+
+| Group of features | Capability | Public preview | General availability |
+|-|-|-|-|
+| DTMF Support in ACS UI SDK | Allows touch tone entry | ❌ | ✔️ |
+| Calling Capabilities | Audio and video | ✔️ | ✔️ |
+| | Screen sharing | ✔️ | ✔️ |
+| | Record the call | ✔️ | ✔️ |
+| | Park the call | ❌ | ❌ |
+| | Personal voicemail | ❌ | ✔️ |
+| Teams Auto Attendant | Answer call | ✔️ | ✔️ |
+| | Operator routing | ❌ | ✔️ |
+| | Speech recognition of menu options | ✔️1 | ✔️1 |
+| | Speech recognition of directory search | ✔️1 | ✔️1 |
+| | Power BI Reporting | ❌ | ✔️ |
+| Auto Attendant Redirects | Disconnect | ✔️ | ✔️ |
+| | Person in org | ❌ | ✔️2 |
+| | AA or CQ | ✔️ | ✔️ |
+| | External | ❌ | ✔️2 |
+| | Shared voicemail | ❌ | ✔️ |
+| Teams Call Queue | Music on hold | ✔️ | ✔️ |
+| | Answer call | ✔️ | ✔️ |
+| | Power BI Reporting | ❌ | ✔️ |
+| Overflow Redirects | Disconnect | ✔️ | ✔️ |
+| | Person in org | ❌ | ✔️2 |
+| | AA or CQ | ❌ | ✔️ |
+| | External | ❌ | ✔️2 |
+| | Shared voicemail | ❌ | ✔️ |
+| Timeout Redirects | Disconnect | ✔️ | ✔️ |
+| | Person in org | ❌ | ✔️2 |
+| | AA or CQ | ❌ | ✔️ |
+| | External | ❌ | ✔️2 |
+| | Shared voicemail | ❌ | ✔️ |
+| No Agents Redirects | Disconnect | ✔️ | ✔️ |
+| | Person in org | ❌ | ✔️2 |
+| | AA or CQ | ❌ | ✔️ |
+| | External | ❌ | ✔️2 |
+| | Shared voicemail | ❌ | ✔️ |
+
+1. Teams Auto Attendant must be voice enabled
+2. Licensing required
+
+### Additional Resources
+
+- [Teams calling and chat interoperability - An Azure Communication Services concept document](/azure/communication-services/concepts/interop/calling-chat)
+- [Quickstart: Join your calling app to a Teams call queue](/azure/communication-services/quickstarts/voice-video-calling/get-started-teams-call-queue)
+- [Quickstart - Teams Auto Attendant on Azure Communication Services](/azure/communication-services/quickstarts/voice-video-calling/get-started-teams-auto-attendant)
+- [Get started with a click to call experience using Azure Communication Services - An Azure Communication Services tutorial](/azure/communication-services/tutorials/calling-widget/calling-widget-overview)
+
+## Extend your contact center voice solution to Teams users
+
+Improve the efficiency of your contact center operations by inviting subject matter experts into your customer service workflows. With Azure Communication Services Call Automation API, developers can add subject matter experts, who use Microsoft Teams, to existing customer service calls to provide expert advice and help agents improve their first call resolution rate.
+This interoperability is offered over VoIP and makes it easy for developers to implement per-region multi-tenant trunks that maximize value and reduce telephony infrastructure overhead.
+
+![Data flow diagram for adding a Teams user to a call](media/contact-center/data-flow-diagram-add-teams-user-to-call.png)
+To learn more about Call Automation API and how a contact center can leverage this interoperability with Teams, see [Deliver expedient customer service by adding Microsoft Teams users in Call Automation workflows](/azure/communication-services/concepts/call-automation/call-automation-teams-interop).
+
container-apps Networking https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/networking.md
User Defined Routes (UDR) and controlled egress through NAT Gateway are supporte
- Configuring UDR is done outside of the Container Apps environment scope. -- UDR isn't supported for external environments.- :::image type="content" source="media/networking/udr-architecture.png" alt-text="Diagram of how UDR is implemented for Container Apps."::: Azure creates a default route table for your virtual networks upon creation. By implementing a user-defined route table, you can control how traffic is routed within your virtual network. For example, you can create a UDR that routes all traffic to the firewall.
container-apps User Defined Routes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/user-defined-routes.md
For more information on networking concepts in Container Apps, see [Networking E
## Prerequisites
-* **Internal environment**: An internal container app environment on the workload profiles environment that's integrated with a custom virtual network. When you create an internal container app environment, your container app environment has no public IP addresses, and all traffic is routed through the virtual network. For more information, see the [guide for how to create a container app environment on the workload profiles environment](./workload-profiles-manage-cli.md).
+* **Workload profiles environment**: A workload profiles environment that's integrated with a custom virtual network. For more information, see the [guide for how to create a container app environment on the workload profiles environment](./workload-profiles-manage-cli.md?pivots=aca-vnet-custom).
* **`curl` support**: Your container app must have a container that supports `curl` commands. In this how-to, you use `curl` to verify that the container app is deployed correctly. If you don't have a container app with `curl` deployed, you can deploy the following container, which supports `curl`: `mcr.microsoft.com/k8se/quickstart:latest`. A deployment sketch follows this list.
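If you need to stand up such a container quickly, here's a minimal sketch using the Azure CLI; the app, environment, and resource group names are placeholders for resources you already created.

```powershell
# Sketch: deploy a container app that includes curl, using the quickstart image.
# App, environment, and resource group names are placeholders.
az containerapp create `
  --name curl-test-app `
  --resource-group my-container-apps-rg `
  --environment my-workload-profiles-env `
  --image mcr.microsoft.com/k8se/quickstart:latest
```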
container-registry Container Registry Authentication Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-authentication-managed-identity.md
Deploy a default Ubuntu Azure virtual machine with [az vm create][az-vm-create].
az vm create \ --resource-group myResourceGroup \ --name myDockerVM \
- --image UbuntuLTS \
+ --image Ubuntu2204 \
--admin-username azureuser \ --generate-ssh-keys ```
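From here, the managed identity flow continues by enabling an identity on the VM and granting it access to the registry. The following is a sketch of the system-assigned variant only; the registry name is a placeholder, and `AcrPull` is the built-in role commonly used for pull access.

```powershell
# Sketch: enable a system-assigned identity on the VM and grant it AcrPull on the registry.
# Resource names are placeholders.
$principalId = az vm identity assign `
  --resource-group myResourceGroup `
  --name myDockerVM `
  --query systemAssignedIdentity --output tsv

$acrId = az acr show --name myContainerRegistry --query id --output tsv

az role assignment create `
  --assignee $principalId `
  --role AcrPull `
  --scope $acrId
```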
cosmos-db How To Setup Rbac https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/how-to-setup-rbac.md
This article is about role-based access control for data plane operations in Azure Cosmos DB for MongoDB.
-If you are using management plane operations, see [role-based access control](../role-based-access-control.md) applied to your management plane operations article.
+If you're using management plane operations, see [role-based access control](../role-based-access-control.md) applied to your management plane operations article.
Azure Cosmos DB for MongoDB exposes a built-in role-based access control (RBAC) system that lets you authorize your data requests with a fine-grained, role-based permission model. Users and roles reside within a database and are managed using the Azure CLI, Azure PowerShell, or Azure Resource Manager (ARM). ## Concepts ### Resource
-A resource is a collection or database to which we are applying access control rules.
+A resource is a collection or database to which we're applying access control rules.
### Privileges Privileges are actions that can be performed on a specific resource. For example, "read access to collection xyz". Privileges are assigned to a specific role.
Privileges are actions that can be performed on a specific resource. For example
A role has one or more privileges. Roles are assigned to users (zero or more) to enable them to perform the actions defined in those privileges. Roles are stored within a single database. ### Diagnostic log auditing
-An additional column called `userId` has been added to the `MongoRequests` table in the Azure Portal Diagnostics feature. This column will identify which user performed which data plan operation. The value in this column is empty when RBAC is not enabled.
+An additional column called `userId` has been added to the `MongoRequests` table in the Azure portal Diagnostics feature. This column identifies which user performed which data plane operation. The value in this column is empty when RBAC isn't enabled.
## Available Privileges #### Query and Write
An additional column called `userId` has been added to the `MongoRequests` table
* listIndexes ## Built-in Roles
-These roles already exist on every database and do not need to be created.
+These roles already exist on every database and don't need to be created.
### read Has the following privileges: changeStream, collStats, find, killCursors, listIndexes, listCollections
az cloud set -n AzureCloud
az login az account set --subscription <your subscription ID> ```
-3. Enable the RBAC capability on your existing API for MongoDB database account. You'll need to [add the capability](how-to-configure-capabilities.md) "EnableMongoRoleBasedAccessControl" to your database account. RBAC can also be enabled via the features tab in the Azure portal instead.
+3. Enable the RBAC capability on your existing API for MongoDB database account. You need to [add the capability](how-to-configure-capabilities.md) "EnableMongoRoleBasedAccessControl" to your database account. RBAC can also be enabled via the features tab in the Azure portal instead.
If you prefer a new database account instead, create a new database account with the RBAC capability set to true. ```powershell az cosmosdb create -n <account_name> -g <azure_resource_group> --kind MongoDB --capabilities EnableMongoRoleBasedAccessControl
az cosmosdb mongodb user definition delete --account-name <account-name> --resou
- The number of users and roles you can create must be fewer than 10,000. - The commands listCollections, listDatabases, killCursors, and currentOp are excluded from RBAC.-- Users and Roles across databases are not supported.
+- Users and Roles across databases aren't supported.
- A user's password can only be set/reset through the Azure CLI / Azure PowerShell. - Configuring Users and Roles is only supported through Azure CLI / PowerShell (a sketch follows this list). -- Disabling primary/secondary key authentication is not supported. We recommend rotating your keys to prevent access when enabling RBAC.
+- Disabling primary/secondary key authentication isn't supported. We recommend rotating your keys to prevent access when enabling RBAC.
+- RBAC policies for Cosmos DB for Mongo DB RU won't be automatically reinstated following a restore operation. You'll be required to reconfigure these policies after the restoration process is complete.
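Because users and roles are configured only through the Azure CLI or Azure PowerShell, here's a minimal sketch of creating a user definition with the CLI; the account, database, and body values are placeholders, and the body schema should be confirmed against the `az cosmosdb mongodb user definition create` reference.

```powershell
# Sketch: create a Mongo user definition with the Azure CLI.
# Account, resource group, and body values are placeholders; adjust quoting for your shell if needed.
$accountName   = "<account-name>"
$resourceGroup = "<azure_resource_group>"

$body = @'
{
  "Id": "testdb.myuser",
  "UserName": "myuser",
  "Password": "<secure-password>",
  "DatabaseName": "testdb",
  "Mechanisms": "SCRAM-SHA-256",
  "Roles": [ { "Role": "read", "Db": "testdb" } ]
}
'@

az cosmosdb mongodb user definition create `
  --account-name $accountName `
  --resource-group $resourceGroup `
  --body $body
```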
## Frequently asked questions (FAQs) ### Is it possible to manage role definitions and role assignments from the Azure portal?
-Azure portal support for role management is not available. However, RBAC can be enabled via the features tab in the Azure portal.
+Azure portal support for role management isn't available. However, RBAC can be enabled via the features tab in the Azure portal.
### How do I change a user's password?
cosmos-db Product Updates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/product-updates.md
Previously updated : 09/25/2023 Last updated : 09/27/2023 # Product updates for Azure Cosmos DB for PostgreSQL
Updates that change cluster internals, such as installing a [new minor PostgreSQ
### September 2023
+* General availability: [PostgreSQL 16](https://www.postgresql.org/docs/release/16.0/) support.
+ * See all supported PostgreSQL versions [here](./reference-versions.md#postgresql-versions).
+ * [Upgrade to PostgreSQL 16](./howto-upgrade.md)
+* General availability: [Citus 12.1 with new features and PostgreSQL 16 support](https://www.citusdata.com/updates/v12-1).
* General availability: Data Encryption at rest using [Customer Managed Keys](./concepts-customer-managed-keys.md) is now supported for all available regions. * See [this guide](./how-to-customer-managed-keys.md) for the steps to enable data encryption using customer managed keys. * Preview: Geo-redundant backup and restore
Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/)
* [Azure Active Directory (Azure AD) authentication](./concepts-authentication.md#azure-active-directory-authentication-preview) * [Azure CLI support for Azure Cosmos DB for PostgreSQL](/cli/azure/cosmosdb/postgres) * Azure SDKs: [.NET](https://www.nuget.org/packages/Azure.ResourceManager.CosmosDBForPostgreSql/1.0.0-beta.1), [Go](https://pkg.go.dev/github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/cosmosforpostgresql/armcosmosforpostgresql@v0.1.0), [Java](https://central.sonatype.com/artifact/com.azure.resourcemanager/azure-resourcemanager-cosmosdbforpostgresql/1.0.0-beta.1/overview), [JavaScript](https://www.npmjs.com/package/@azure/arm-cosmosdbforpostgresql/v/1.0.0-beta.1), and [Python](https://pypi.org/project/azure-mgmt-cosmosdbforpostgresql/1.0.0b1/)
-* [Data encryption at rest using customer managed keys](./concepts-customer-managed-keys.md)
* [Database audit with pgAudit](./how-to-enable-audit.md) ## Contact us
cosmos-db Reference Extensions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/reference-extensions.md
Previously updated : 08/24/2023 Last updated : 09/27/2023 # PostgreSQL extensions in Azure Cosmos DB for PostgreSQL
Azure Cosmos DB for PostgreSQL currently supports a subset of key extensions as
The following tables list the standard PostgreSQL extensions that are supported on Azure Cosmos DB for PostgreSQL. This information is also available by running `SELECT * FROM pg_available_extensions;`.
-The versions of each extension installed in a cluster sometimes differ based on the version of PostgreSQL (11, 12, or 13). The tables list extension versions per database version.
+The versions of each extension installed in a cluster sometimes differ based on the version of PostgreSQL (11, 12, 13, 14, 15, or 16). The tables list extension versions per database version.
### Citus extension > [!div class="mx-tableFixed"]
-> | **Extension** | **Description** | **PG 11** | **PG 12** | **PG 13** | **PG 14** | **PG 15** |
-> ||||||
-> | [citus](https://github.com/citusdata/citus) | Citus distributed database. | 9.5.12 | 10.2.9 | 11.3.0 | 12.0.0 | 12.0.0 |
+> | **Extension** | **Description** | **PG 11** | **PG 12** | **PG 13** | **PG 14** | **PG 15** | **PG 16** |
+> |||||||
+> | [citus](https://github.com/citusdata/citus) | Citus distributed database. | 9.5 | 10.2 | 11.3 | 12.1 | 12.1 | 12.1 |
### Data types extensions > [!div class="mx-tableFixed"]
-> | **Extension** | **Description** | **PG 11** | **PG 12** | **PG 13** | **PG 14** | **PG 15** |
-> ||||||
-> | [citext](https://www.postgresql.org/docs/current/static/citext.html) | Provides a case-insensitive character string type. | 1.5 | 1.6 | 1.6 | 1.6 | 1.6 |
-> | [cube](https://www.postgresql.org/docs/current/static/cube.html) | Provides a data type for multidimensional cubes. | 1.4 | 1.4 | 1.4 | 1.5 | 1.5 |
-> | [hll](https://github.com/citusdata/postgresql-hll) | Provides a HyperLogLog data structure. | 2.16 | 2.16 | 2.16 | 2.16 | 2.16 |
-> | [hstore](https://www.postgresql.org/docs/current/static/hstore.html) | Provides a data type for storing sets of key-value pairs. | 1.5 | 1.6 | 1.7 | 1.8 | 1.8 |
-> | [isn](https://www.postgresql.org/docs/current/static/isn.html) | Provides data types for international product numbering standards. | 1.2 | 1.2 | 1.2 | 1.2 | 1.2 |
-> | [lo](https://www.postgresql.org/docs/current/lo.html) | Large Object maintenance. | 1.1 | 1.1 | 1.1 | 1.1 | 1.1 |
-> | [ltree](https://www.postgresql.org/docs/current/static/ltree.html) | Provides a data type for hierarchical tree-like structures. | 1.1 | 1.1 | 1.2 | 1.2 | 1.2 |
-> | [seg](https://www.postgresql.org/docs/current/seg.html) | Data type for representing line segments or floating-point intervals. | 1.3 | 1.3 | 1.3 | 1.4 | 1.4 |
-> | [tdigest](https://github.com/tvondra/tdigest) | Data type for on-line accumulation of rank-based statistics such as quantiles and trimmed means. | 1.4.0 | 1.4.0 | 1.4.0 | 1.4.0 | 1.4.0 |
-> | [topn](https://github.com/citusdata/postgresql-topn/) | Type for top-n JSONB. | 2.5.0 | 2.5.0 | 2.5.0 | 2.5.0 | 2.5.0 |
+> | **Extension** | **Description** | **PG 11** | **PG 12** | **PG 13** | **PG 14** | **PG 15** | **PG 16** |
+> |||||||
+> | [citext](https://www.postgresql.org/docs/current/static/citext.html) | Provides a case-insensitive character string type. | 1.5 | 1.6 | 1.6 | 1.6 | 1.6 | 1.6 |
+> | [cube](https://www.postgresql.org/docs/current/static/cube.html) | Provides a data type for multidimensional cubes. | 1.4 | 1.4 | 1.4 | 1.5 | 1.5 | 1.5 |
+> | [hll](https://github.com/citusdata/postgresql-hll) | Provides a HyperLogLog data structure. | 2.18 | 2.18 | 2.18 | 2.18 | 2.18 | 2.18 |
+> | [hstore](https://www.postgresql.org/docs/current/static/hstore.html) | Provides a data type for storing sets of key-value pairs. | 1.5 | 1.6 | 1.7 | 1.8 | 1.8 | 1.8 |
+> | [isn](https://www.postgresql.org/docs/current/static/isn.html) | Provides data types for international product numbering standards. | 1.2 | 1.2 | 1.2 | 1.2 | 1.2 | 1.2 |
+> | [lo](https://www.postgresql.org/docs/current/lo.html) | Large Object maintenance. | 1.1 | 1.1 | 1.1 | 1.1 | 1.1 | 1.1 |
+> | [ltree](https://www.postgresql.org/docs/current/static/ltree.html) | Provides a data type for hierarchical tree-like structures. | 1.1 | 1.1 | 1.2 | 1.2 | 1.2 | 1.2 |
+> | [seg](https://www.postgresql.org/docs/current/seg.html) | Data type for representing line segments or floating-point intervals. | 1.3 | 1.3 | 1.3 | 1.4 | 1.4 | 1.4 |
+> | [tdigest](https://github.com/tvondra/tdigest) | Data type for on-line accumulation of rank-based statistics such as quantiles and trimmed means. | 1.4.0 | 1.4.0 | 1.4.0 | 1.4.0 | 1.4.0 | 1.4.0 |
+> | [topn](https://github.com/citusdata/postgresql-topn/) | Type for top-n JSONB. | 2.6.0 | 2.6.0 | 2.6.0 | 2.6.0 | 2.6.0 | 2.6.0 |
### Full-text search extensions > [!div class="mx-tableFixed"]
-> | **Extension** | **Description** | **PG 11** | **PG 12** | **PG 13** | **PG 14** | **PG 15** |
-> ||||||
-> | [dict\_int](https://www.postgresql.org/docs/current/static/dict-int.html) | Provides a text search dictionary template for integers. | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 |
-> | [dict\_xsyn](https://www.postgresql.org/docs/current/dict-xsyn.html) | Text search dictionary template for extended synonym processing. | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 |
-> | [unaccent](https://www.postgresql.org/docs/current/static/unaccent.html) | A text search dictionary that removes accents (diacritic signs) from lexemes. | 1.1 | 1.1 | 1.1 | 1.1 | 1.1 |
+> | **Extension** | **Description** | **PG 11** | **PG 12** | **PG 13** | **PG 14** | **PG 15** | **PG 16** |
+> |||||||
+> | [dict\_int](https://www.postgresql.org/docs/current/static/dict-int.html) | Provides a text search dictionary template for integers. | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 |
+> | [dict\_xsyn](https://www.postgresql.org/docs/current/dict-xsyn.html) | Text search dictionary template for extended synonym processing. | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 |
+> | [unaccent](https://www.postgresql.org/docs/current/static/unaccent.html) | A text search dictionary that removes accents (diacritic signs) from lexemes. | 1.1 | 1.1 | 1.1 | 1.1 | 1.1 | 1.1 |
### Functions extensions > [!div class="mx-tableFixed"]
-> | **Extension** | **Description** | **PG 11** | **PG 12** | **PG 13** | **PG 14** | **PG 15** |
-> ||||||
-> | [autoinc](https://www.postgresql.org/docs/current/contrib-spi.html#id-1.11.7.45.7) | Functions for autoincrementing fields. | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 |
-> | [earthdistance](https://www.postgresql.org/docs/current/static/earthdistance.html) | Provides a means to calculate great-circle distances on the surface of the Earth. | 1.1 | 1.1 | 1.1 | 1.1 | 1.1 |
-> | [fuzzystrmatch](https://www.postgresql.org/docs/current/static/fuzzystrmatch.html) | Provides several functions to determine similarities and distance between strings. | 1.1 | 1.1 | 1.1 | 1.1 | 1.1 |
-> | [insert\_username](https://www.postgresql.org/docs/current/contrib-spi.html#id-1.11.7.45.8) | Functions for tracking who changed a table. | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 |
-> | [intagg](https://www.postgresql.org/docs/current/intagg.html) | Integer aggregator and enumerator (obsolete). | 1.1 | 1.1 | 1.1 | 1.1 | 1.1 |
-> | [intarray](https://www.postgresql.org/docs/current/static/intarray.html) | Provides functions and operators for manipulating null-free arrays of integers. | 1.2 | 1.2 | 1.3 | 1.5 | 1.5 |
-> | [moddatetime](https://www.postgresql.org/docs/current/contrib-spi.html#id-1.11.7.45.9) | Functions for tracking last modification time. | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 |
-> | [pg\_partman](https://pgxn.org/dist/pg_partman/doc/pg_partman.html) | Manages partitioned tables by time or ID. | 4.7.3 | 4.7.3 | 4.7.3 | 4.7.3 | 4.7.3 |
-> | [pg\_surgery](https://www.postgresql.org/docs/current/pgsurgery.html) | Functions to perform surgery on a damaged relation. | | | | 1.0 | 1.0 |
-> | [pg\_trgm](https://www.postgresql.org/docs/current/static/pgtrgm.html) | Provides functions and operators for determining the similarity of alphanumeric text based on trigram matching. | 1.4 | 1.4 | 1.5 | 1.6 | 1.6 |
-> | [pgcrypto](https://www.postgresql.org/docs/current/static/pgcrypto.html) | Provides cryptographic functions. | 1.3 | 1.3 | 1.3 | 1.3 | 1.3 |
-> | [refint](https://www.postgresql.org/docs/current/contrib-spi.html#id-1.11.7.45.5) | Functions for implementing referential integrity (obsolete). | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 |
-> | [tablefunc](https://www.postgresql.org/docs/current/static/tablefunc.html) | Provides functions that manipulate whole tables, including crosstab. | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 |
-> | [tcn](https://www.postgresql.org/docs/current/tcn.html) | Triggered change notifications. | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 |
> | **Extension** | **Description** | **PG 11** | **PG 12** | **PG 13** | **PG 14** | **PG 15** | **PG 16** |
+> |||||||
+> | [autoinc](https://www.postgresql.org/docs/current/contrib-spi.html#id-1.11.7.45.7) | Functions for autoincrementing fields. | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 |
+> | [earthdistance](https://www.postgresql.org/docs/current/static/earthdistance.html) | Provides a means to calculate great-circle distances on the surface of the Earth. | 1.1 | 1.1 | 1.1 | 1.1 | 1.1 | 1.1 |
+> | [fuzzystrmatch](https://www.postgresql.org/docs/current/static/fuzzystrmatch.html) | Provides several functions to determine similarities and distance between strings. | 1.1 | 1.1 | 1.1 | 1.1 | 1.1 | 1.2 |
+> | [insert\_username](https://www.postgresql.org/docs/current/contrib-spi.html#id-1.11.7.45.8) | Functions for tracking who changed a table. | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 |
+> | [intagg](https://www.postgresql.org/docs/current/intagg.html) | Integer aggregator and enumerator (obsolete). | 1.1 | 1.1 | 1.1 | 1.1 | 1.1 | 1.1 |
+> | [intarray](https://www.postgresql.org/docs/current/static/intarray.html) | Provides functions and operators for manipulating null-free arrays of integers. | 1.2 | 1.2 | 1.3 | 1.5 | 1.5 | 1.5 |
+> | [moddatetime](https://www.postgresql.org/docs/current/contrib-spi.html#id-1.11.7.45.9) | Functions for tracking last modification time. | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 |
+> | [pg\_partman](https://pgxn.org/dist/pg_partman/doc/pg_partman.html) | Manages partitioned tables by time or ID. | 4.7.4 | 4.7.4 | 4.7.4 | 4.7.4 | 4.7.4 | 4.7.4 |
+> | [pg\_surgery](https://www.postgresql.org/docs/current/pgsurgery.html) | Functions to perform surgery on a damaged relation. | | | | 1.0 | 1.0 | 1.0 |
+> | [pg\_trgm](https://www.postgresql.org/docs/current/static/pgtrgm.html) | Provides functions and operators for determining the similarity of alphanumeric text based on trigram matching. | 1.4 | 1.4 | 1.5 | 1.6 | 1.6 | 1.6 |
+> | [pgcrypto](https://www.postgresql.org/docs/current/static/pgcrypto.html) | Provides cryptographic functions. | 1.3 | 1.3 | 1.3 | 1.3 | 1.3 | 1.3 |
+> | [refint](https://www.postgresql.org/docs/current/contrib-spi.html#id-1.11.7.45.5) | Functions for implementing referential integrity (obsolete). | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 |
+> | [tablefunc](https://www.postgresql.org/docs/current/static/tablefunc.html) | Provides functions that manipulate whole tables, including crosstab. | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 |
+> | [tcn](https://www.postgresql.org/docs/current/tcn.html) | Triggered change notifications. | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 |
> | [timetravel](https://www.postgresql.org/docs/current/contrib-spi.html#id-1.11.7.45.6) | Functions for implementing time travel. | 1.0 | | | | |
-> | [uuid-ossp](https://www.postgresql.org/docs/current/static/uuid-ossp.html) | Generates universally unique identifiers (UUIDs). | 1.1 | 1.1 | 1.1 | 1.1 | 1.1 |
+> | [uuid-ossp](https://www.postgresql.org/docs/current/static/uuid-ossp.html) | Generates universally unique identifiers (UUIDs). | 1.1 | 1.1 | 1.1 | 1.1 | 1.1 | 1.1 |
### Index types extensions > [!div class="mx-tableFixed"]
-> | **Extension** | **Description** | **PG 11** | **PG 12** | **PG 13** | **PG 14** | **PG 15** |
-> ||||||
-> | [bloom](https://www.postgresql.org/docs/current/bloom.html) | Bloom access method - signature file-based index. | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 |
-> | [btree\_gin](https://www.postgresql.org/docs/current/static/btree-gin.html) | Provides sample GIN operator classes that implement B-tree-like behavior for certain data types. | 1.3 | 1.3 | 1.3 | 1.3 | 1.3 |
-> | [btree\_gist](https://www.postgresql.org/docs/current/static/btree-gist.html) | Provides GiST index operator classes that implement B-tree. | 1.5 | 1.5 | 1.5 | 1.6 | 1.7 |
+> | **Extension** | **Description** | **PG 11** | **PG 12** | **PG 13** | **PG 14** | **PG 15** | **PG 16** |
+> |||||||
+> | [bloom](https://www.postgresql.org/docs/current/bloom.html) | Bloom access method - signature file-based index. | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 |
+> | [btree\_gin](https://www.postgresql.org/docs/current/static/btree-gin.html) | Provides sample GIN operator classes that implement B-tree-like behavior for certain data types. | 1.3 | 1.3 | 1.3 | 1.3 | 1.3 | 1.3 |
+> | [btree\_gist](https://www.postgresql.org/docs/current/static/btree-gist.html) | Provides GiST index operator classes that implement B-tree. | 1.5 | 1.5 | 1.5 | 1.6 | 1.7 | 1.7 |
### Language extensions > [!div class="mx-tableFixed"]
-> | **Extension** | **Description** | **PG 11** | **PG 12** | **PG 13** | **PG 14** | **PG 15** |
-> ||||||
-> | [plpgsql](https://www.postgresql.org/docs/current/static/plpgsql.html) | PL/pgSQL loadable procedural language. | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 |
+> | **Extension** | **Description** | **PG 11** | **PG 12** | **PG 13** | **PG 14** | **PG 15** | **PG 16** |
+> |||||||
+> | [plpgsql](https://www.postgresql.org/docs/current/static/plpgsql.html) | PL/pgSQL loadable procedural language. | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 |
### Miscellaneous extensions > [!div class="mx-tableFixed"]
-> | **Extension** | **Description** | **PG 11** | **PG 12** | **PG 13** | **PG 14** | **PG 15** |
-> ||||||
-> | [amcheck](https://www.postgresql.org/docs/current/amcheck.html) | Functions for verifying relation integrity. | 1.1 | 1.2 | 1.2 | 1.3 | 1.3 |
-> | [dblink](https://www.postgresql.org/docs/current/dblink.html) | A module that supports connections to other PostgreSQL databases from within a database session. See the "dblink and postgres_fdw" section for information about this extension. | 1.2 | 1.2 | 1.2 | 1.2 | 1.2 |
-> | [old\_snapshot](https://www.postgresql.org/docs/current/oldsnapshot.html) | Allows inspection of the server state that is used to implement old_snapshot_threshold. | | | | 1.0 | 1.0 |
-> | [pageinspect](https://www.postgresql.org/docs/current/pageinspect.html) | Inspect the contents of database pages at a low level. | 1.7 | 1.7 | 1.8 | 1.9 | 1.10 |
-> | [pg\_azure\_storage](howto-ingest-azure-blob-storage.md) | Azure integration for PostgreSQL. | | | 1.2 | 1.2 | 1.2 |
-> | [pg\_buffercache](https://www.postgresql.org/docs/current/static/pgbuffercache.html) | Provides a means for examining what's happening in the shared buffer cache in real time. | 1.3 | 1.3 | 1.3 | 1.3 | 1.3 |
-> | [pg\_cron](https://github.com/citusdata/pg_cron) | Job scheduler for PostgreSQL. | 1.5 | 1.5 | 1.5 | 1.5 | 1.5 |
-> | [pg\_freespacemap](https://www.postgresql.org/docs/current/pgfreespacemap.html) | Examine the free space map (FSM). | 1.2 | 1.2 | 1.2 | 1.2 | 1.2 |
-> | [pg\_prewarm](https://www.postgresql.org/docs/current/static/pgprewarm.html) | Provides a way to load relation data into the buffer cache. | 1.2 | 1.2 | 1.2 | 1.2 | 1.2 |
-> | [pg\_stat\_statements](https://www.postgresql.org/docs/current/static/pgstatstatements.html) | Provides a means for tracking execution statistics of all SQL statements executed by a server. See the "pg_stat_statements" section for information about this extension. | 1.6 | 1.7 | 1.8 | 1.9 | 1.10 |
-> | [pg\_visibility](https://www.postgresql.org/docs/current/pgvisibility.html) | Examine the visibility map (VM) and page-level visibility information. | 1.2 | 1.2 | 1.2 | 1.2 | 1.2 |
-> | [pgrowlocks](https://www.postgresql.org/docs/current/static/pgrowlocks.html) | Provides a means for showing row-level locking information. | 1.2 | 1.2 | 1.2 | 1.2 | 1.2 |
-> | [pgstattuple](https://www.postgresql.org/docs/current/static/pgstattuple.html) | Provides a means for showing tuple-level statistics. | 1.5 | 1.5 | 1.5 | 1.5 | 1.5 |
-> | [postgres\_fdw](https://www.postgresql.org/docs/current/static/postgres-fdw.html) | Foreign-data wrapper used to access data stored in external PostgreSQL servers. See the "dblink and postgres_fdw" section for information about this extension.| 1.0 | 1.0 | 1.0 | 1.1 | 1.1 |
-> | [sslinfo](https://www.postgresql.org/docs/current/sslinfo.html) | Information about TLS/SSL certificates. | 1.2 | 1.2 | 1.2 | 1.2 | 1.2 |
-> | [tsm\_system\_rows](https://www.postgresql.org/docs/current/tsm-system-rows.html) | TABLESAMPLE method, which accepts number of rows as a limit. | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 |
-> | [tsm\_system\_time](https://www.postgresql.org/docs/current/tsm-system-time.html) | TABLESAMPLE method, which accepts time in milliseconds as a limit. | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 |
-> | [xml2](https://www.postgresql.org/docs/current/xml2.html) | XPath querying and XSLT. | 1.1 | 1.1 | 1.1 | 1.1 | 1.1 |
+> | **Extension** | **Description** | **PG 11** | **PG 12** | **PG 13** | **PG 14** | **PG 15** | **PG 16** |
+> |||||||
+> | [amcheck](https://www.postgresql.org/docs/current/amcheck.html) | Functions for verifying relation integrity. | 1.1 | 1.2 | 1.2 | 1.3 | 1.3 | 1.3 |
+> | [dblink](https://www.postgresql.org/docs/current/dblink.html) | A module that supports connections to other PostgreSQL databases from within a database session. See the "dblink and postgres_fdw" section for information about this extension. | 1.2 | 1.2 | 1.2 | 1.2 | 1.2 | 1.2 |
+> | [old\_snapshot](https://www.postgresql.org/docs/current/oldsnapshot.html) | Allows inspection of the server state that is used to implement old_snapshot_threshold. | | | | 1.0 | 1.0 | 1.0 |
+> | [pageinspect](https://www.postgresql.org/docs/current/pageinspect.html) | Inspect the contents of database pages at a low level. | 1.7 | 1.7 | 1.8 | 1.9 | 1.11 | 1.12 |
+> | [pg\_azure\_storage](howto-ingest-azure-blob-storage.md) | Azure integration for PostgreSQL. | | | 1.3 | 1.3 | 1.3 | 1.3 |
+> | [pg\_buffercache](https://www.postgresql.org/docs/current/static/pgbuffercache.html) | Provides a means for examining what's happening in the shared buffer cache in real time. | 1.3 | 1.3 | 1.3 | 1.3 | 1.3 | 1.4 |
+> | [pg\_cron](https://github.com/citusdata/pg_cron) | Job scheduler for PostgreSQL. | 1.5 | 1.5 | 1.5 | 1.5 | 1.5 | 1.5 |
+> | [pg\_freespacemap](https://www.postgresql.org/docs/current/pgfreespacemap.html) | Examine the free space map (FSM). | 1.2 | 1.2 | 1.2 | 1.2 | 1.2 | 1.2 |
+> | [pg\_prewarm](https://www.postgresql.org/docs/current/static/pgprewarm.html) | Provides a way to load relation data into the buffer cache. | 1.2 | 1.2 | 1.2 | 1.2 | 1.2 | 1.2 |
+> | [pg\_stat\_statements](https://www.postgresql.org/docs/current/static/pgstatstatements.html) | Provides a means for tracking execution statistics of all SQL statements executed by a server. See the "pg_stat_statements" section for information about this extension. | 1.6 | 1.7 | 1.8 | 1.9 | 1.10 | 1.10 |
+> | [pg\_visibility](https://www.postgresql.org/docs/current/pgvisibility.html) | Examine the visibility map (VM) and page-level visibility information. | 1.2 | 1.2 | 1.2 | 1.2 | 1.2 | 1.2 |
+> | [pgrowlocks](https://www.postgresql.org/docs/current/static/pgrowlocks.html) | Provides a means for showing row-level locking information. | 1.2 | 1.2 | 1.2 | 1.2 | 1.2 | 1.2 |
+> | [pgstattuple](https://www.postgresql.org/docs/current/static/pgstattuple.html) | Provides a means for showing tuple-level statistics. | 1.5 | 1.5 | 1.5 | 1.5 | 1.5 | 1.5 |
+> | [postgres\_fdw](https://www.postgresql.org/docs/current/static/postgres-fdw.html) | Foreign-data wrapper used to access data stored in external PostgreSQL servers. See the "dblink and postgres_fdw" section for information about this extension.| 1.0 | 1.0 | 1.0 | 1.1 | 1.1 | 1.1 |
+> | [sslinfo](https://www.postgresql.org/docs/current/sslinfo.html) | Information about TLS/SSL certificates. | 1.2 | 1.2 | 1.2 | 1.2 | 1.2 | 1.2 |
+> | [tsm\_system\_rows](https://www.postgresql.org/docs/current/tsm-system-rows.html) | TABLESAMPLE method, which accepts number of rows as a limit. | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 |
+> | [tsm\_system\_time](https://www.postgresql.org/docs/current/tsm-system-time.html) | TABLESAMPLE method, which accepts time in milliseconds as a limit. | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 |
+> | [xml2](https://www.postgresql.org/docs/current/xml2.html) | XPath querying and XSLT. | 1.1 | 1.1 | 1.1 | 1.1 | 1.1 | 1.1 |
### Pgvector extension > [!div class="mx-tableFixed"]
-> | **Extension** | **Description** | **PG 11** | **PG 12** | **PG 13** | **PG 14** | **PG 15** |
-> ||||||
-> [pgvector](https://github.com/pgvector/pgvector#installation-notes) | Open-source vector similarity search for Postgres | 0.4.4 | 0.4.4 | 0.4.4 | 0.4.4 | 0.4.4 |
+> | **Extension** | **Description** | **PG 11** | **PG 12** | **PG 13** | **PG 14** | **PG 15** | **PG 16** |
+> |||||||
+> | [pgvector](https://github.com/pgvector/pgvector#installation-notes) | Open-source vector similarity search for Postgres. | 0.5.0 | 0.5.0 | 0.5.0 | 0.5.0 | 0.5.0 | 0.5.0 |
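To get a feel for what the extension provides, the following sketch creates a small table with a vector column and runs a nearest-neighbor query. It assumes psql is installed; the connection string values (cluster name, database, user) are hypothetical placeholders for your own cluster.

```powershell
# Minimal pgvector walkthrough; connection values below are hypothetical placeholders.
$connString = "host=c-mycluster.postgres.cosmos.azure.com port=5432 dbname=citus user=citus sslmode=require"
psql $connString -c "CREATE EXTENSION IF NOT EXISTS vector;"
psql $connString -c "CREATE TABLE IF NOT EXISTS items (id bigserial PRIMARY KEY, embedding vector(3));"
psql $connString -c "INSERT INTO items (embedding) VALUES ('[1,2,3]'), ('[4,5,6]');"
# <-> is the Euclidean distance operator; the query returns the closest stored vector.
psql $connString -c "SELECT id, embedding FROM items ORDER BY embedding <-> '[2,3,4]' LIMIT 1;"
```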
### PostGIS extensions > [!div class="mx-tableFixed"]
-> | **Extension** | **Description** | **PG 11** | **PG 12** | **PG 13** | **PG 14** | **PG 15** |
-> ||||||
-> | [PostGIS](https://www.postgis.net/) | Spatial and geographic objects for PostgreSQL. | 3.3.4 | 3.4.0 | 3.4.0 | 3.4.0 | 3.4.0 |
-> | address\_standardizer | Used to parse an address into constituent elements. Used to support geocoding address normalization step. | 3.3.4 | 3.4.0 | 3.4.0 | 3.4.0 | 3.4.0 |
-> | postgis\_sfcgal | PostGIS SFCGAL functions. | 3.3.4 | 3.4.0 | 3.4.0 | 3.4.0 | 3.4.0 |
-> | postgis\_topology | PostGIS topology spatial types and functions. | 3.3.4 | 3.4.0 | 3.4.0 | 3.4.0 | 3.4.0 |
+> | **Extension** | **Description** | **PG 11** | **PG 12** | **PG 13** | **PG 14** | **PG 15** | **PG 16** |
+> |||||||
+> | [PostGIS](https://www.postgis.net/) | Spatial and geographic objects for PostgreSQL. | 3.3.4 | 3.4.0 | 3.4.0 | 3.4.0 | 3.4.0 | 3.4.0 |
+> | address\_standardizer | Used to parse an address into constituent elements. Used to support geocoding address normalization step. | 3.3.4 | 3.4.0 | 3.4.0 | 3.4.0 | 3.4.0 | 3.4.0 |
+> | postgis\_sfcgal | PostGIS SFCGAL functions. | 3.3.4 | 3.4.0 | 3.4.0 | 3.4.0 | 3.4.0 | 3.4.0 |
+> | postgis\_topology | PostGIS topology spatial types and functions. | 3.3.4 | 3.4.0 | 3.4.0 | 3.4.0 | 3.4.0 | 3.4.0 |
## pg_stat_statements
-The [pg\_stat\_statements extension](https://www.postgresql.org/docs/current/pgstatstatements.html) is preloaded on every Azure Cosmos DB for PostgreSQL server to provide you with a means of tracking execution statistics of SQL statements.
+The [pg\_stat\_statements extension](https://www.postgresql.org/docs/current/pgstatstatements.html) is preloaded on every Azure Cosmos DB for PostgreSQL cluster to provide you with a means of tracking execution statistics of SQL statements.
The setting `pg_stat_statements.track` controls what statements are counted by the extension. It defaults to `top`, which means that all statements issued directly by clients are tracked. The two other tracking levels are `none` and `all`.
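As a quick illustration, the following sketch checks the current tracking level and lists the most expensive statements. It assumes psql is installed and uses hypothetical placeholder connection values; on PostgreSQL 11 and 12 the timing column is named `total_time` rather than `total_exec_time`.

```powershell
# Inspect pg_stat_statements; connection values below are hypothetical placeholders.
$connString = "host=c-mycluster.postgres.cosmos.azure.com port=5432 dbname=citus user=citus sslmode=require"
psql $connString -c "SHOW pg_stat_statements.track;"
# Top five statements by total execution time (column is total_time on PostgreSQL 11 and 12).
psql $connString -c "SELECT calls, total_exec_time, query FROM pg_stat_statements ORDER BY total_exec_time DESC LIMIT 5;"
```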
cosmos-db Reference Versions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/reference-versions.md
Previously updated : 08/24/2023 Last updated : 09/27/2023 # Supported database versions in Azure Cosmos DB for PostgreSQL
customizable during creation. Azure Cosmos DB for PostgreSQL currently supports
following major [PostgreSQL versions](https://www.postgresql.org/docs/release/):
+### PostgreSQL version 16
+
+The current minor release is 16.0. Refer to the [PostgreSQL
+documentation](https://www.postgresql.org/docs/release/16.0/) to
+learn more about improvements and fixes in this minor release.
+ ### PostgreSQL version 15 The current minor release is 15.4. Refer to the [PostgreSQL
policy](https://www.postgresql.org/support/versioning/).
| [PostgreSQL 13](https://www.postgresql.org/about/news/postgresql-13-released-2077/) | [Features](https://www.postgresql.org/docs/13/release-13.html) | Apr 6, 2021 | Nov 13, 2025 | | [PostgreSQL 14](https://www.postgresql.org/about/news/postgresql-14-released-2318/) | [Features](https://www.postgresql.org/docs/14/release-14.html) | Oct 1, 2021 | Nov 12, 2026 | | [PostgreSQL 15](https://www.postgresql.org/about/news/postgresql-15-released-2526/) | [Features](https://www.postgresql.org/docs/15/release-15.html) | Oct 20, 2022 | Nov 11, 2027 |
+| [PostgreSQL 16](https://www.postgresql.org/about/news/postgresql-16-released-2715/) | [Features](https://www.postgresql.org/docs/16/release-16.html) | Sep 28, 2023 | Nov 9, 2028 |
### Retired PostgreSQL engine versions not supported in Azure Cosmos DB for PostgreSQL
PostgreSQL database version:
Depending on which version of PostgreSQL is running in a cluster, different [versions of PostgreSQL extensions](reference-extensions.md)
-will be installed as well. In particular, PostgreSQL 14 and PostgreSQL 15 come with Citus 12, PostgreSQL 13 comes with Citus 11, PostgreSQL 12 comes with Citus 10, and earlier PostgreSQL versions come with Citus 9.5.
+will be installed as well. In particular, PostgreSQL 14, PostgreSQL 15, and PostgreSQL 16 come with Citus 12, PostgreSQL 13 comes with Citus 11, PostgreSQL 12 comes with Citus 10, and earlier PostgreSQL versions come with Citus 9.5.
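If you want to confirm which Citus version a cluster is actually running, you can query it from the coordinator. A minimal sketch, assuming psql connectivity and hypothetical placeholder connection values:

```powershell
# Show the Citus extension version on the coordinator; connection values below are hypothetical placeholders.
psql "host=c-mycluster.postgres.cosmos.azure.com port=5432 dbname=citus user=citus sslmode=require" -c "SELECT citus_version();"
```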
## Next steps
cosmos-db Tutorial Private Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/tutorial-private-access.md
az vm create \
--subnet link-demo-subnet \ --nsg link-demo-nsg \ --public-ip-address link-demo-net-ip \
- --image debian \
+ --image Debian11 \
--admin-username azureuser \ --generate-ssh-keys
data-factory Connector Azure Blob Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-azure-blob-storage.md
Previously updated : 10/23/2022 Last updated : 09/29/2023 # Copy and transform data in Azure Blob Storage by using Azure Data Factory or Azure Synapse Analytics
Last updated 10/23/2022
[!INCLUDE[appliesto-adf-asa-md](includes/appliesto-adf-asa-md.md)]
-This article outlines how to use the Copy activity in Azure Data Factory and Azure Synapse pipelines to copy data from and to Azure Blob Storage. It also describes how to use the Data Flow activity to transform data in Azure Blob Storage. To learn more read the [Azure Data Factory](introduction.md) and the [Azure Synapse Analytics](..\synapse-analytics\overview-what-is.md) introduction articles.
+This article outlines how to use the Copy activity in Azure Data Factory and Azure Synapse pipelines to copy data from and to Azure Blob Storage. It also describes how to use the Data Flow activity to transform data in Azure Blob Storage. To learn more, read the [Azure Data Factory](introduction.md) and the [Azure Synapse Analytics](..\synapse-analytics\overview-what-is.md) introduction articles.
>[!TIP] >To learn about a migration scenario for a data lake or a data warehouse, see the article [Migrate data from your data lake or data warehouse to Azure](data-migration-guidance-overview.md).
For the Copy activity, this Blob storage connector supports:
Use the following steps to create an Azure Blob Storage linked service in the Azure portal UI.
-1. Browse to the Manage tab in your Azure Data Factory or Synapse workspace and select Linked Services, then click New:
+1. Browse to the Manage tab in your Azure Data Factory or Synapse workspace and select Linked Services, then select New:
# [Azure Data Factory](#tab/data-factory)
The following properties are supported for storage account key authentication in
| Property | Description | Required | |: |: |: | | type | The `type` property must be set to `AzureBlobStorage` (suggested) or `AzureStorage` (see the following notes). | Yes |
-| containerUri | Specify the Azure Blob container URI which has enabled Anonymous read access by taking this format `https://<AccountName>.blob.core.windows.net/<ContainerName>` and [Configure anonymous public read access for containers and blobs](../storage/blobs/anonymous-read-access-configure.md#set-the-public-access-level-for-a-container) | Yes |
+| containerUri | Specify the URI of the Azure Blob container that has anonymous read access enabled, in the format `https://<AccountName>.blob.core.windows.net/<ContainerName>`. See [Configure anonymous public read access for containers and blobs](../storage/blobs/anonymous-read-access-configure.md#set-the-anonymous-access-level-for-a-container). | Yes |
| connectVia | The [integration runtime](concepts-integration-runtime.md) to be used to connect to the data store. You can use the Azure integration runtime or the self-hosted integration runtime (if your data store is in a private network). If this property isn't specified, the service uses the default Azure integration runtime. | No | **Example:**
The following properties are supported for storage account key authentication in
**Examples UI**:
-The UI experience will be like below. This sample will use the Azure open dataset as the source. If you want to get the open [dataset bing_covid-19_data.csv](https://pandemicdatalake.blob.core.windows.net/public/curated/covid-19/bing_covid-19_data/latest/bing_covid-19_data.csv), you just need to choose **Authentication type** as **Anonymous** and fill in Container URI with `https://pandemicdatalake.blob.core.windows.net/public`.
+The UI experience is shown in the following image. This sample uses the Azure open dataset as the source. If you want to get the open [dataset bing_covid-19_data.csv](https://pandemicdatalake.blob.core.windows.net/public/curated/covid-19/bing_covid-19_data/latest/bing_covid-19_data.csv), choose **Authentication type** as **Anonymous** and fill in **Container URI** with `https://pandemicdatalake.blob.core.windows.net/public`.
:::image type="content" source="media/connector-azure-blob-storage/anonymous-ui.png" alt-text="Screenshot of configuration for Anonymous examples UI.":::
The following properties are supported for storage account key authentication in
| connectVia | The [integration runtime](concepts-integration-runtime.md) to be used to connect to the data store. You can use the Azure integration runtime or the self-hosted integration runtime (if your data store is in a private network). If this property isn't specified, the service uses the default Azure integration runtime. | No | >[!NOTE]
->A secondary Blob service endpoint is not supported when you're using account key authentication. You can use other authentication types.
+>A secondary Blob service endpoint isn't supported when you're using account key authentication. You can use other authentication types.
>[!NOTE] >If you're using the `AzureStorage` type linked service, it's still supported as is. But we suggest that you use the new `AzureBlobStorage` linked service type going forward.
These properties are supported for an Azure Blob Storage linked service:
|: |: |: | | type | The **type** property must be set to **AzureBlobStorage**. | Yes | | serviceEndpoint | Specify the Azure Blob Storage service endpoint with the pattern of `https://<accountName>.blob.core.windows.net/`. | Yes |
-| accountKind | Specify the kind of your storage account. Allowed values are: **Storage** (general purpose v1), **StorageV2** (general purpose v2), **BlobStorage**, or **BlockBlobStorage**. <br/><br/>When using Azure Blob linked service in data flow, managed identity or service principal authentication is not supported when account kind as empty or "Storage". Specify the proper account kind, choose a different authentication, or upgrade your storage account to general purpose v2. | No |
+| accountKind | Specify the kind of your storage account. Allowed values are: **Storage** (general purpose v1), **StorageV2** (general purpose v2), **BlobStorage**, or **BlockBlobStorage**. <br/><br/>When using Azure Blob linked service in data flow, managed identity or service principal authentication isn't supported when account kind is empty or "Storage". Specify the proper account kind, choose a different authentication, or upgrade your storage account to general purpose v2. | No |
| servicePrincipalId | Specify the application's client ID. | Yes | | servicePrincipalCredentialType | The credential type to use for service principal authentication. Allowed values are **ServicePrincipalKey** and **ServicePrincipalCert**. | Yes | | servicePrincipalCredential | The service principal credential. <br/> When you use **ServicePrincipalKey** as the credential type, specify the application's key. Mark this field as **SecureString** to store it securely, or [reference a secret stored in Azure Key Vault](store-credentials-in-key-vault.md). <br/> When you use **ServicePrincipalCert** as the credential, reference a certificate in Azure Key Vault, and ensure the certificate content type is **PKCS #12**.| Yes |
These properties are supported for an Azure Blob Storage linked service:
>[!NOTE] >
->- If your blob account enables [soft delete](../storage/blobs/soft-delete-blob-overview.md), service principal authentication is not supported in Data Flow.
+>- If your blob account enables [soft delete](../storage/blobs/soft-delete-blob-overview.md), service principal authentication isn't supported in Data Flow.
>- If you access the blob storage through a private endpoint using Data Flow, note that when service principal authentication is used, Data Flow connects to the ADLS Gen2 endpoint instead of the Blob endpoint. Make sure you create the corresponding private endpoint in your data factory or Synapse workspace to enable access. >[!NOTE]
These properties are supported for an Azure Blob Storage linked service:
|: |: |: | | type | The **type** property must be set to **AzureBlobStorage**. | Yes | | serviceEndpoint | Specify the Azure Blob Storage service endpoint with the pattern of `https://<accountName>.blob.core.windows.net/`. | Yes |
-| accountKind | Specify the kind of your storage account. Allowed values are: **Storage** (general purpose v1), **StorageV2** (general purpose v2), **BlobStorage**, or **BlockBlobStorage**. <br/><br/>When using Azure Blob linked service in data flow, managed identity or service principal authentication is not supported when account kind as empty or "Storage". Specify the proper account kind, choose a different authentication, or upgrade your storage account to general purpose v2. | No |
+| accountKind | Specify the kind of your storage account. Allowed values are: **Storage** (general purpose v1), **StorageV2** (general purpose v2), **BlobStorage**, or **BlockBlobStorage**. <br/><br/>When using Azure Blob linked service in data flow, managed identity or service principal authentication isn't supported when account kind is empty or "Storage". Specify the proper account kind, choose a different authentication, or upgrade your storage account to general purpose v2. | No |
| connectVia | The [integration runtime](concepts-integration-runtime.md) to be used to connect to the data store. You can use the Azure integration runtime or the self-hosted integration runtime (if your data store is in a private network). If this property isn't specified, the service uses the default Azure integration runtime. | No | **Example:**
These properties are supported for an Azure Blob Storage linked service:
|: |: |: | | type | The **type** property must be set to **AzureBlobStorage**. | Yes | | serviceEndpoint | Specify the Azure Blob Storage service endpoint with the pattern of `https://<accountName>.blob.core.windows.net/`. | Yes |
-| accountKind | Specify the kind of your storage account. Allowed values are: **Storage** (general purpose v1), **StorageV2** (general purpose v2), **BlobStorage**, or **BlockBlobStorage**. <br/><br/>When using Azure Blob linked service in data flow, managed identity or service principal authentication is not supported when account kind as empty or "Storage". Specify the proper account kind, choose a different authentication, or upgrade your storage account to general purpose v2. | No |
+| accountKind | Specify the kind of your storage account. Allowed values are: **Storage** (general purpose v1), **StorageV2** (general purpose v2), **BlobStorage**, or **BlockBlobStorage**. <br/><br/>When using Azure Blob linked service in data flow, managed identity or service principal authentication isn't supported when account kind is empty or "Storage". Specify the proper account kind, choose a different authentication, or upgrade your storage account to general purpose v2. | No |
| credentials | Specify the user-assigned managed identity as the credential object. | Yes | | connectVia | The [integration runtime](concepts-integration-runtime.md) to be used to connect to the data store. You can use the Azure integration runtime or the self-hosted integration runtime (if your data store is in a private network). If this property isn't specified, the service uses the default Azure integration runtime. | No |
These properties are supported for an Azure Blob Storage linked service:
> [!NOTE] >
-> - If your blob account enables [soft delete](../storage/blobs/soft-delete-blob-overview.md), system-assigned/user-assigned managed identity authentication is not supported in Data Flow.
+> - If your blob account enables [soft delete](../storage/blobs/soft-delete-blob-overview.md), system-assigned/user-assigned managed identity authentication isn't supported in Data Flow.
> - If you access the blob storage through a private endpoint using Data Flow, note that when system-assigned/user-assigned managed identity authentication is used, Data Flow connects to the ADLS Gen2 endpoint instead of the Blob endpoint. Make sure you create the corresponding private endpoint in ADF to enable access. > [!NOTE]
The following properties are supported for Azure Blob Storage under `storeSettin
| type | The **type** property under `storeSettings` must be set to **AzureBlobStorageReadSettings**. | Yes | | ***Locate the files to copy:*** | | | | OPTION 1: static path<br> | Copy from the given container or folder/file path specified in the dataset. If you want to copy all blobs from a container or folder, additionally specify `wildcardFileName` as `*`. | |
-| OPTION 2: blob prefix<br>- prefix | Prefix for the blob name under the given container configured in a dataset to filter source blobs. Blobs whose names start with `container_in_dataset/this_prefix` are selected. It utilizes the service-side filter for Blob storage, which provides better performance than a wildcard filter.<br><br>When you use prefix and choose to copy to file-based sink with preserving hierarchy, note the sub-path after the last "/" in prefix will be preserved. For example, you have source `container/folder/subfolder/file.txt`, and configure prefix as `folder/sub`, then the preserved file path is `subfolder/file.txt`. | No |
+| OPTION 2: blob prefix<br>- prefix | Prefix for the blob name under the given container configured in a dataset to filter source blobs. Blobs whose names start with `container_in_dataset/this_prefix` are selected. It utilizes the service-side filter for Blob storage, which provides better performance than a wildcard filter.<br><br>When you use prefix and choose to copy to file-based sink with preserving hierarchy, note the sub-path after the last "/" in prefix is preserved. For example, you have source `container/folder/subfolder/file.txt`, and configure prefix as `folder/sub`, then the preserved file path is `subfolder/file.txt`. | No |
| OPTION 3: wildcard<br>- wildcardFolderPath | The folder path with wildcard characters under the given container configured in a dataset to filter source folders. <br>Allowed wildcards are: `*` (matches zero or more characters) and `?` (matches zero or single character). Use `^` to escape if your folder name has wildcard or this escape character inside. <br>See more examples in [Folder and file filter examples](#folder-and-file-filter-examples). | No | | OPTION 3: wildcard<br>- wildcardFileName | The file name with wildcard characters under the given container and folder path (or wildcard folder path) to filter source files. <br>Allowed wildcards are: `*` (matches zero or more characters) and `?` (matches zero or single character). Use `^` to escape if your file name has a wildcard or this escape character inside. See more examples in [Folder and file filter examples](#folder-and-file-filter-examples). | Yes |
-| OPTION 4: a list of files<br>- fileListPath | Indicates to copy a given file set. Point to a text file that includes a list of files you want to copy, one file per line, which is the relative path to the path configured in the dataset.<br/>When you're using this option, do not specify a file name in the dataset. See more examples in [File list examples](#file-list-examples). | No |
+| OPTION 4: a list of files<br>- fileListPath | Indicates to copy a given file set. Point to a text file that includes a list of files you want to copy, one file per line, which is the relative path to the path configured in the dataset.<br/>When you're using this option, don't specify a file name in the dataset. See more examples in [File list examples](#file-list-examples). | No |
| ***Additional settings:*** | | | | recursive | Indicates whether the data is read recursively from the subfolders or only from the specified folder. Note that when **recursive** is set to **true** and the sink is a file-based store, an empty folder or subfolder isn't copied or created at the sink. <br>Allowed values are **true** (default) and **false**.<br>This property doesn't apply when you configure `fileListPath`. | No |
-| deleteFilesAfterCompletion | Indicates whether the binary files will be deleted from source store after successfully moving to the destination store. The file deletion is per file, so when copy activity fails, you will see some files have already been copied to the destination and deleted from source, while others are still remaining on source store. <br/>This property is only valid in binary files copy scenario. The default value: false. | No |
+| deleteFilesAfterCompletion | Indicates whether the binary files are deleted from the source store after successfully moving to the destination store. The file deletion is per file. Therefore, when the copy activity fails, you'll see that some files have already been copied to the destination and deleted from the source, while others remain in the source store. <br/>This property is only valid in the binary files copy scenario. The default value is false. | No |
| modifiedDatetimeStart | Files are filtered based on the attribute: last modified. <br>The files will be selected if their last modified time is greater than or equal to `modifiedDatetimeStart` and less than `modifiedDatetimeEnd`. The time is applied to a UTC time zone in the format of "2018-12-01T05:00:00Z". <br> The properties can be **NULL**, which means no file attribute filter will be applied to the dataset. When `modifiedDatetimeStart` has a datetime value but `modifiedDatetimeEnd` is **NULL**, the files whose last modified attribute is greater than or equal to the datetime value will be selected. When `modifiedDatetimeEnd` has a datetime value but `modifiedDatetimeStart` is **NULL**, the files whose last modified attribute is less than the datetime value will be selected.<br/>This property doesn't apply when you configure `fileListPath`. | No |
-| modifiedDatetimeEnd | Same as above. | No |
-| enablePartitionDiscovery | For files that are partitioned, specify whether to parse the partitions from the file path and add them as additional source columns.<br/>Allowed values are **false** (default) and **true**. | No |
-| partitionRootPath | When partition discovery is enabled, specify the absolute root path in order to read partitioned folders as data columns.<br/><br/>If it is not specified, by default,<br/>- When you use file path in dataset or list of files on source, partition root path is the path configured in dataset.<br/>- When you use wildcard folder filter, partition root path is the sub-path before the first wildcard.<br/>- When you use prefix, partition root path is sub-path before the last "/". <br/><br/>For example, assuming you configure the path in dataset as "root/folder/year=2020/month=08/day=27":<br/>- If you specify partition root path as "root/folder/year=2020", copy activity will generate two more columns `month` and `day` with value "08" and "27" respectively, in addition to the columns inside the files.<br/>- If partition root path is not specified, no extra column will be generated. | No |
+| modifiedDatetimeEnd | Same as the previous property. | No |
+| enablePartitionDiscovery | For files that are partitioned, specify whether to parse the partitions from the file path and add them as extra source columns.<br/>Allowed values are **false** (default) and **true**. | No |
+| partitionRootPath | When partition discovery is enabled, specify the absolute root path in order to read partitioned folders as data columns.<br/><br/>If it isn't specified, by default,<br/>- When you use file path in dataset or list of files on source, partition root path is the path configured in dataset.<br/>- When you use wildcard folder filter, partition root path is the sub-path before the first wildcard.<br/>- When you use prefix, partition root path is sub-path before the last "/". <br/><br/>For example, assuming you configure the path in dataset as "root/folder/year=2020/month=08/day=27":<br/>- If you specify partition root path as "root/folder/year=2020", copy activity will generate two more columns `month` and `day` with value "08" and "27" respectively, in addition to the columns inside the files.<br/>- If partition root path isn't specified, no extra column will be generated. | No |
| maxConcurrentConnections |The upper limit of concurrent connections established to the data store during the activity run. Specify a value only when you want to limit concurrent connections.| No | > [!NOTE]
The following properties are supported for Azure Blob Storage under `storeSettin
| | | -- | | type | The `type` property under `storeSettings` must be set to `AzureBlobStorageWriteSettings`. | Yes | | copyBehavior | Defines the copy behavior when the source is files from a file-based data store.<br/><br/>Allowed values are:<br/><b>- PreserveHierarchy (default)</b>: Preserves the file hierarchy in the target folder. The relative path of the source file to the source folder is identical to the relative path of the target file to the target folder.<br/><b>- FlattenHierarchy</b>: All files from the source folder are in the first level of the target folder. The target files have autogenerated names. <br/><b>- MergeFiles</b>: Merges all files from the source folder to one file. If the file or blob name is specified, the merged file name is the specified name. Otherwise, it's an autogenerated file name. | No |
-| blockSizeInMB | Specify the block size, in megabytes, used to write data to block blobs. Learn more [about Block Blobs](/rest/api/storageservices/understanding-block-blobs--append-blobs--and-page-blobs#about-block-blobs). <br/>Allowed value is *between 4 MB and 100 MB*. <br/>By default, the service automatically determines the block size based on your source store type and data. For nonbinary copy into Blob storage, the default block size is 100 MB so it can fit in (at most) 4.95 TB of data. It might be not optimal when your data is not large, especially when you use the self-hosted integration runtime with poor network connections that result in operation timeout or performance issues. You can explicitly specify a block size, while ensuring that `blockSizeInMB*50000` is big enough to store the data. Otherwise, the Copy activity run will fail. | No |
+| blockSizeInMB | Specify the block size, in megabytes, used to write data to block blobs. Learn more [about Block Blobs](/rest/api/storageservices/understanding-block-blobs--append-blobs--and-page-blobs#about-block-blobs). <br/>Allowed value is *between 4 MB and 100 MB*. <br/>By default, the service automatically determines the block size based on your source store type and data. For nonbinary copy into Blob storage, the default block size is 100 MB so it can fit in (at most) 4.95 TB of data. It might not be optimal when your data isn't large, especially when you use the self-hosted integration runtime with poor network connections that result in operation timeout or performance issues. You can explicitly specify a block size, while ensuring that `blockSizeInMB*50000` is large enough to store the data. Otherwise, the Copy activity run will fail. | No |
| maxConcurrentConnections |The upper limit of concurrent connections established to the data store during the activity run. Specify a value only when you want to limit concurrent connections.| No | | metadata |Set custom metadata when copy to sink. Each object under the `metadata` array represents an extra column. The `name` defines the metadata key name, and the `value` indicates the data value of that key. If [preserve attributes feature](./copy-activity-preserve-metadata.md#preserve-metadata) is used, the specified metadata will union/overwrite with the source file metadata.<br/><br/>Allowed data values are:<br/>- `$$LASTMODIFIED`: a reserved variable indicates to store the source files' last modified time. Apply to file-based source with binary format only.<br/><b>- Expression<b><br/>- <b>Static value<b>| No |
Assume that you have the following source folder structure and want to copy the
| Sample source structure | Content in FileListToCopy.txt | Configuration | | | |
-| container<br/>&nbsp;&nbsp;&nbsp;&nbsp;FolderA<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;**File1.csv**<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;File2.json<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Subfolder1<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;**File3.csv**<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;File4.json<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;**File5.csv**<br/>&nbsp;&nbsp;&nbsp;&nbsp;Metadata<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;FileListToCopy.txt | File1.csv<br>Subfolder1/File3.csv<br>Subfolder1/File5.csv | **In dataset:**<br>- Container: `container`<br>- Folder path: `FolderA`<br><br>**In Copy activity source:**<br>- File list path: `container/Metadata/FileListToCopy.txt` <br><br>The file list path points to a text file in the same data store that includes a list of files you want to copy, one file per line, with the relative path to the path configured in the dataset. |
+| container<br/>&nbsp;&nbsp;&nbsp;&nbsp;FolderA<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;**File1.csv**<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;File2.json<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Subfolder1<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;**File3.csv**<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;File4.json<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;**File5.csv**<br/>&nbsp;&nbsp;&nbsp;&nbsp;Metadata<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;FileListToCopy.txt | File1.csv<br>Subfolder1/File3.csv<br>Subfolder1/File5.csv | **In dataset:**<br>- Container: `container`<br>- Folder path: `FolderA`<br><br>**In Copy activity source:**<br>- File list path: `container/Metadata/FileListToCopy.txt` <br><br>The file list path points to a text file in the same data store that includes a list of files you want to copy. It includes one file per line, with the relative path to the path configured in the dataset. |
### Some recursive and copyBehavior examples
This section describes the resulting behavior of the Copy operation for differen
| true |preserveHierarchy | Folder1<br/>&nbsp;&nbsp;&nbsp;&nbsp;File1<br/>&nbsp;&nbsp;&nbsp;&nbsp;File2<br/>&nbsp;&nbsp;&nbsp;&nbsp;Subfolder1<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;File3<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;File4<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;File5 | The target folder, Folder1, is created with the same structure as the source:<br/><br/>Folder1<br/>&nbsp;&nbsp;&nbsp;&nbsp;File1<br/>&nbsp;&nbsp;&nbsp;&nbsp;File2<br/>&nbsp;&nbsp;&nbsp;&nbsp;Subfolder1<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;File3<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;File4<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;File5 | | true |flattenHierarchy | Folder1<br/>&nbsp;&nbsp;&nbsp;&nbsp;File1<br/>&nbsp;&nbsp;&nbsp;&nbsp;File2<br/>&nbsp;&nbsp;&nbsp;&nbsp;Subfolder1<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;File3<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;File4<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;File5 | The target folder, Folder1, is created with the following structure: <br/><br/>Folder1<br/>&nbsp;&nbsp;&nbsp;&nbsp;autogenerated name for File1<br/>&nbsp;&nbsp;&nbsp;&nbsp;autogenerated name for File2<br/>&nbsp;&nbsp;&nbsp;&nbsp;autogenerated name for File3<br/>&nbsp;&nbsp;&nbsp;&nbsp;autogenerated name for File4<br/>&nbsp;&nbsp;&nbsp;&nbsp;autogenerated name for File5 | | true |mergeFiles | Folder1<br/>&nbsp;&nbsp;&nbsp;&nbsp;File1<br/>&nbsp;&nbsp;&nbsp;&nbsp;File2<br/>&nbsp;&nbsp;&nbsp;&nbsp;Subfolder1<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;File3<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;File4<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;File5 | The target folder, Folder1, is created with the following structure: <br/><br/>Folder1<br/>&nbsp;&nbsp;&nbsp;&nbsp;File1 + File2 + File3 + File4 + File5 contents are merged into one file with an autogenerated file name. |
-| false |preserveHierarchy | Folder1<br/>&nbsp;&nbsp;&nbsp;&nbsp;File1<br/>&nbsp;&nbsp;&nbsp;&nbsp;File2<br/>&nbsp;&nbsp;&nbsp;&nbsp;Subfolder1<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;File3<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;File4<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;File5 | The target folder, Folder1, is created with the following structure: <br/><br/>Folder1<br/>&nbsp;&nbsp;&nbsp;&nbsp;File1<br/>&nbsp;&nbsp;&nbsp;&nbsp;File2<br/><br/>Subfolder1 with File3, File4, and File5 is not picked up. |
-| false |flattenHierarchy | Folder1<br/>&nbsp;&nbsp;&nbsp;&nbsp;File1<br/>&nbsp;&nbsp;&nbsp;&nbsp;File2<br/>&nbsp;&nbsp;&nbsp;&nbsp;Subfolder1<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;File3<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;File4<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;File5 | The target folder, Folder1, is created with the following structure: <br/><br/>Folder1<br/>&nbsp;&nbsp;&nbsp;&nbsp;autogenerated name for File1<br/>&nbsp;&nbsp;&nbsp;&nbsp;autogenerated name for File2<br/><br/>Subfolder1 with File3, File4, and File5 is not picked up. |
-| false |mergeFiles | Folder1<br/>&nbsp;&nbsp;&nbsp;&nbsp;File1<br/>&nbsp;&nbsp;&nbsp;&nbsp;File2<br/>&nbsp;&nbsp;&nbsp;&nbsp;Subfolder1<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;File3<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;File4<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;File5 | The target folder, Folder1, is created with the following structure:<br/><br/>Folder1<br/>&nbsp;&nbsp;&nbsp;&nbsp;File1 + File2 contents are merged into one file with an autogenerated file name. autogenerated name for File1<br/><br/>Subfolder1 with File3, File4, and File5 is not picked up. |
+| false |preserveHierarchy | Folder1<br/>&nbsp;&nbsp;&nbsp;&nbsp;File1<br/>&nbsp;&nbsp;&nbsp;&nbsp;File2<br/>&nbsp;&nbsp;&nbsp;&nbsp;Subfolder1<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;File3<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;File4<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;File5 | The target folder, Folder1, is created with the following structure: <br/><br/>Folder1<br/>&nbsp;&nbsp;&nbsp;&nbsp;File1<br/>&nbsp;&nbsp;&nbsp;&nbsp;File2<br/><br/>Subfolder1 with File3, File4, and File5 isn't picked up. |
+| false |flattenHierarchy | Folder1<br/>&nbsp;&nbsp;&nbsp;&nbsp;File1<br/>&nbsp;&nbsp;&nbsp;&nbsp;File2<br/>&nbsp;&nbsp;&nbsp;&nbsp;Subfolder1<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;File3<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;File4<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;File5 | The target folder, Folder1, is created with the following structure: <br/><br/>Folder1<br/>&nbsp;&nbsp;&nbsp;&nbsp;autogenerated name for File1<br/>&nbsp;&nbsp;&nbsp;&nbsp;autogenerated name for File2<br/><br/>Subfolder1 with File3, File4, and File5 isn't picked up. |
+| false |mergeFiles | Folder1<br/>&nbsp;&nbsp;&nbsp;&nbsp;File1<br/>&nbsp;&nbsp;&nbsp;&nbsp;File2<br/>&nbsp;&nbsp;&nbsp;&nbsp;Subfolder1<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;File3<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;File4<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;File5 | The target folder, Folder1, is created with the following structure:<br/><br/>Folder1<br/>&nbsp;&nbsp;&nbsp;&nbsp;File1 + File2 contents are merged into one file with an autogenerated file name. autogenerated name for File1<br/><br/>Subfolder1 with File3, File4, and File5 isn't picked up. |
## Preserving metadata during copy
First, set a wildcard to include all paths that are the partitioned folders plus
:::image type="content" source="media/data-flow/part-file-2.png" alt-text="Screenshot of partition source file settings in mapping data flow source transformation.":::
-Use the **Partition root path** setting to define what the top level of the folder structure is. When you view the contents of your data via a data preview, you'll see that the service will add the resolved partitions found in each of your folder levels.
+Use the **Partition root path** setting to define what the top level of the folder structure is. When you view the contents of your data via a data preview, you'll see that the service adds the resolved partitions found in each of your folder levels.
:::image type="content" source="media/data-flow/partfile1.png" alt-text="Partition root path":::
Use the **Partition root path** setting to define what the top level of the fold
To move source files to another location post-processing, first select "Move" for file operation. Then, set the "from" directory. If you're not using any wildcards for your path, then the "from" setting will be the same folder as your source folder.
-If you have a source path with wildcard, your syntax will look like this:
+If you have a source path with wildcard, your syntax is as follows:
`/data/sales/20??/**/*.csv`
And you can specify "to" as:
In this case, all files that were sourced under `/data/sales` are moved to `/backup/priorSales`. > [!NOTE]
-> File operations run only when you start the data flow from a pipeline run (a pipeline debug or execution run) that uses the Execute Data Flow activity in a pipeline. File operations *do not* run in Data Flow debug mode.
+> File operations run only when you start the data flow from a pipeline run (a pipeline debug or execution run) that uses the Execute Data Flow activity in a pipeline. File operations *don't* run in Data Flow debug mode.
-**Filter by last modified:** You can filter which files you process by specifying a date range of when they were last modified. All datetimes are in UTC.
+**Filter by last modified:** You can filter the files to be processed by specifying a date range of when they were last modified. All datetimes are in UTC.
-**Enable change data capture:** If true, you will get new or changed files only from the last run. Initial load of full snapshot data will always be gotten in the first run, followed by capturing new or changed files only in next runs.
+**Enable change data capture:** If true, you'll get only new or changed files from the last run. A full snapshot of the data is always loaded in the first run, followed by capturing only new or changed files in subsequent runs.
:::image type="content" source="media/data-flow/enable-change-data-capture.png" alt-text="Screenshot showing Enable change data capture.":::
In the sink transformation, you can write to either a container or a folder in A
**File name option:** Determines how the destination files are named in the destination folder. The file name options are: - **Default**: Allow Spark to name files based on PART defaults.
- - **Pattern**: Enter a pattern that enumerates your output files per partition. For example, `loans[n].csv` will create `loans1.csv`, `loans2.csv`, and so on.
+ - **Pattern**: Enter a pattern that enumerates your output files per partition. For example, `loans[n].csv` creates `loans1.csv`, `loans2.csv`, and so on.
- **Per partition**: Enter one file name per partition.
- - **As data in column**: Set the output file to the value of a column. The path is relative to the dataset container, not the destination folder. If you have a folder path in your dataset, it will be overridden.
+ - **As data in column**: Set the output file to the value of a column. The path is relative to the dataset container, not the destination folder. If you have a folder path in your dataset, it is overridden.
- **Output to a single file**: Combine the partitioned output files into a single named file. The path is relative to the dataset folder. Be aware that the merge operation can possibly fail based on node size. We don't recommend this option for large datasets. **Quote all:** Determines whether to enclose all values in quotation marks.
To learn details about the properties, check [Delete activity](delete-activity.m
| type | The `type` property of the dataset must be set to `AzureBlob`. | Yes | | folderPath | Path to the container and folder in Blob storage. <br/><br/>A wildcard filter is supported for the path, excluding container name. Allowed wildcards are: `*` (matches zero or more characters) and `?` (matches zero or single character). Use `^` to escape if your folder name has a wildcard or this escape character inside. <br/><br/>An example is: `myblobcontainer/myblobfolder/`. See more examples in [Folder and file filter examples](#folder-and-file-filter-examples). | Yes for the Copy or Lookup activity, No for the GetMetadata activity | | fileName | Name or wildcard filter for the blobs under the specified `folderPath` value. If you don't specify a value for this property, the dataset points to all blobs in the folder. <br/><br/>For the filter, allowed wildcards are: `*` (matches zero or more characters) and `?` (matches zero or single character).<br/>- Example 1: `"fileName": "*.csv"`<br/>- Example 2: `"fileName": "???20180427.txt"`<br/>Use `^` to escape if your file name has a wildcard or this escape character inside.<br/><br/>When `fileName` isn't specified for an output dataset and `preserveHierarchy` isn't specified in the activity sink, the Copy activity automatically generates the blob name with the following pattern: "*Data.[activity run ID GUID].[GUID if FlattenHierarchy].[format if configured].[compression if configured]*". For example: "Data.0a405f8a-93ff-4c6f-b3be-f69616f1df7a.txt.gz". <br/><br/>If you copy from a tabular source by using a table name instead of a query, the name pattern is `[table name].[format].[compression if configured]`. For example: "MyTable.csv". | No |
-| modifiedDatetimeStart | Files are filtered based on the attribute: last modified. The files will be selected if their last modified time is greater than or equal to `modifiedDatetimeStart` and less than `modifiedDatetimeEnd`. The time is applied to the UTC time zone in the format of "2018-12-01T05:00:00Z". <br/><br/> Be aware that enabling this setting will affect the overall performance of data movement when you want to filter huge amounts of files. <br/><br/> The properties can be `NULL`, which means no file attribute filter will be applied to the dataset. When `modifiedDatetimeStart` has a datetime value but `modifiedDatetimeEnd` is `NULL`, the files whose last modified attribute is greater than or equal to the datetime value will be selected. When `modifiedDatetimeEnd` has a datetime value but `modifiedDatetimeStart` is `NULL`, the files whose last modified attribute is less than the datetime value will be selected.| No |
-| modifiedDatetimeEnd | Files are filtered based on the attribute: last modified. The files will be selected if their last modified time is greater than or equal to `modifiedDatetimeStart` and less than `modifiedDatetimeEnd`. The time is applied to the UTC time zone in the format of "2018-12-01T05:00:00Z". <br/><br/> Be aware that enabling this setting will affect the overall performance of data movement when you want to filter huge amounts of files. <br/><br/> The properties can be `NULL`, which means no file attribute filter will be applied to the dataset. When `modifiedDatetimeStart` has a datetime value but `modifiedDatetimeEnd` is `NULL`, the files whose last modified attribute is greater than or equal to the datetime value will be selected. When `modifiedDatetimeEnd` has a datetime value but `modifiedDatetimeStart` is `NULL`, the files whose last modified attribute is less than the datetime value will be selected.| No |
+| modifiedDatetimeStart | Files are filtered based on the attribute: last modified. The files will be selected if their last modified time is greater than or equal to `modifiedDatetimeStart` and less than `modifiedDatetimeEnd`. The time is applied to the UTC time zone in the format of "2018-12-01T05:00:00Z". <br/><br/> Be aware that enabling this setting affects the overall performance of data movement when you want to filter huge amounts of files. <br/><br/> The properties can be `NULL`, which means no file attribute filter will be applied to the dataset. When `modifiedDatetimeStart` has a datetime value but `modifiedDatetimeEnd` is `NULL`, the files whose last modified attribute is greater than or equal to the datetime value will be selected. When `modifiedDatetimeEnd` has a datetime value but `modifiedDatetimeStart` is `NULL`, the files whose last modified attribute is less than the datetime value will be selected.| No |
+| modifiedDatetimeEnd | Files are filtered based on the attribute: last modified. The files will be selected if their last modified time is greater than or equal to `modifiedDatetimeStart` and less than `modifiedDatetimeEnd`. The time is applied to the UTC time zone in the format of "2018-12-01T05:00:00Z". <br/><br/> Be aware that enabling this setting affects the overall performance of data movement when you want to filter huge amounts of files. <br/><br/> The properties can be `NULL`, which means no file attribute filter will be applied to the dataset. When `modifiedDatetimeStart` has a datetime value but `modifiedDatetimeEnd` is `NULL`, the files whose last modified attribute is greater than or equal to the datetime value will be selected. When `modifiedDatetimeEnd` has a datetime value but `modifiedDatetimeStart` is `NULL`, the files whose last modified attribute is less than the datetime value will be selected.| No |
| format | If you want to copy files as is between file-based stores (binary copy), skip the format section in both the input and output dataset definitions.<br/><br/>If you want to parse or generate files with a specific format, the following file format types are supported: **TextFormat**, **JsonFormat**, **AvroFormat**, **OrcFormat**, and **ParquetFormat**. Set the **type** property under **format** to one of these values. For more information, see the [Text format](supported-file-formats-and-compression-codecs-legacy.md#text-format), [JSON format](supported-file-formats-and-compression-codecs-legacy.md#json-format), [Avro format](supported-file-formats-and-compression-codecs-legacy.md#avro-format), [Orc format](supported-file-formats-and-compression-codecs-legacy.md#orc-format), and [Parquet format](supported-file-formats-and-compression-codecs-legacy.md#parquet-format) sections. | No (only for binary copy scenario) | | compression | Specify the type and level of compression for the data. For more information, see [Supported file formats and compression codecs](supported-file-formats-and-compression-codecs-legacy.md#compression-support).<br/>Supported types are **GZip**, **Deflate**, **BZip2**, and **ZipDeflate**.<br/>Supported levels are **Optimal** and **Fastest**. | No |
To learn details about the properties, check [Delete activity](delete-activity.m
| Property | Description | Required | |: |: |: | | type | The `type` property of the Copy activity source must be set to `BlobSource`. | Yes |
-| recursive | Indicates whether the data is read recursively from the subfolders or only from the specified folder. Note that when `recursive` is set to `true` and the sink is a file-based store, an empty folder or subfolder isn't copied or created at the sink.<br/>Allowed values are `true` (default) and `false`. | No |
+| recursive | Indicates whether the data is read recursively from the subfolders or only from the specified folder. When `recursive` is set to `true` and the sink is a file-based store, an empty folder or subfolder isn't copied or created at the sink.<br/>Allowed values are `true` (default) and `false`. | No |
| maxConcurrentConnections |The upper limit of concurrent connections established to the data store during the activity run. Specify a value only when you want to limit concurrent connections.| No | **Example:**
To learn details about the properties, check [Delete activity](delete-activity.m
## Change data capture
-Azure Data Factory can get new or changed files only from Azure Blob Storage by enabling **Enable change data capture ** in the mapping data flow source transformation. With this connector option, you can read new or updated files only and apply transformations before loading transformed data into destination datasets of your choice. Pleaser refer to [Change Data Capture](concepts-change-data-capture.md) for details.
+Azure Data Factory can get new or changed files only from Azure Blob Storage by enabling **Enable change data capture** in the mapping data flow source transformation. With this connector option, you can read new or updated files only and apply transformations before loading transformed data into destination datasets of your choice. For details, see [Change Data Capture](concepts-change-data-capture.md).
## Next steps
-For a list of data stores that the Copy activity supports as sources and sinks, see [Supported data stores](copy-activity-overview.md#supported-data-stores-and-formats).
+For a list of data stores that the Copy activity supports as sources and sinks, see [Supported data stores](copy-activity-overview.md#supported-data-stores-and-formats).
databox-online Azure Stack Edge Deploy Aks On Azure Stack Edge https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-deploy-aks-on-azure-stack-edge.md
Previously updated : 09/26/2023 Last updated : 09/28/2023 # Customer intent: As an IT admin, I need to understand how to deploy and configure Azure Kubernetes service on Azure Stack Edge.
Depending on the workloads you intend to deploy, you may need to ensure the foll
For more information, see [Create and manage custom locations in Arc-enabled Kubernetes](../azure-arc/kubernetes/custom-locations.md). -- If deploying Kubernetes or PMEC workloads, you may need virtual networks that youΓÇÖve added using the instructions in [Create virtual networks](azure-stack-edge-gpu-deploy-configure-network-compute-web-proxy.md?pivots=single-node#configure-virtual-network).
+- If deploying Kubernetes or PMEC workloads:
+ - You may have selected a specific workload profile by using the local UI or PowerShell. Detailed steps are documented for the local UI in [Configure compute IPs](azure-stack-edge-gpu-deploy-configure-network-compute-web-proxy.md?pivots=two-node#configure-compute-ips-1) and for PowerShell in [Change Kubernetes workload profiles](azure-stack-edge-gpu-connect-powershell-interface.md#change-kubernetes-workload-profiles).
+ - You may need virtual networks that you've added using the instructions in [Create virtual networks](azure-stack-edge-gpu-deploy-configure-network-compute-web-proxy.md?pivots=single-node#configure-virtual-network).
- If you're using HPN VMs as your infrastructure VMs, the vCPUs should be automatically reserved. Run the following command to verify the reservation:
databox-online Azure Stack Edge Gpu Connect Powershell Interface https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-connect-powershell-interface.md
Previously updated : 04/14/2022 Last updated : 09/28/2023 # Manage an Azure Stack Edge Pro GPU device via Windows PowerShell
If the compute role is configured on your device, you can also get the compute l
- `Credential`: Provide the username for the network share. When you run this cmdlet, you will need to provide the share password. - `FullLogCollection`: This parameter ensures that the log package will contain all the compute logs. By default, the log package contains only a subset of logs.
+## Change Kubernetes workload profiles
+
+After you have formed and configured a cluster and you have created new virtual switches, you can add or delete virtual networks associated with your virtual switches. For detailed steps, see [Configure virtual switches](azure-stack-edge-gpu-deploy-configure-network-compute-web-proxy.md?pivots=two-node#configure-virtual-switches-1).
+
+After virtual switches are created, you can enable the switches for Kubernetes compute traffic to specify a Kubernetes workload profile. To do so using the local UI, use the steps in [Configure compute IPs](azure-stack-edge-gpu-deploy-configure-network-compute-web-proxy.md?pivots=two-node#configure-compute-ips-1). To do so using PowerShell, use the following steps:
+
+1. [Connect to the PowerShell interface](#connect-to-the-powershell-interface).
+2. Use the `Get-HcsApplianceInfo` cmdlet to get current `KubernetesPlatform` and `KubernetesWorkloadProfile` settings for your device.
+
+ The following example shows the usage of this cmdlet:
+
+ ```powershell
+ Get-HcsApplianceInfo
+ ```
+
+3. Use the `Set-HcsKubernetesWorkloadProfile` cmdlet to set the workload profile for AP5GC, an Azure Private MEC solution.
+
+ The following example shows the usage of this cmdlet:
+
+ ```powershell
+ Set-HcsKubernetesWorkloadProfile -Type "AP5GC"
+ ```
+
+ Here is sample output for this cmdlet:
+
+ ```powershell
+ [10.100.10.10]: PS>KubernetesPlatform : AKS
+ [10.100.10.10]: PS>KubernetesWorkloadProfile : AP5GC
+ [10.100.10.10]: PS>
+ ```
## Change Kubernetes pod and service subnets
databox-online Azure Stack Edge Gpu Deploy Configure Network Compute Web Proxy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-deploy-configure-network-compute-web-proxy.md
Previously updated : 09/22/2023 Last updated : 09/28/2023 zone_pivot_groups: azure-stack-edge-device-deployment # Customer intent: As an IT admin, I need to understand how to connect and activate Azure Stack Edge Pro so I can use it to transfer data to Azure.
Select **Next: Kubernetes >** to next configure your compute IPs for Kubernetes.
After the virtual switches are created, you can enable the switches for Kubernetes compute traffic. 1. In the local UI, go to the **Kubernetes** page.
-1. Specify a workload from the options provided. If prompted, confirm the option you selected and then select **Apply**.
+1. Specify a workload from the options provided.
+ - If you are working with an Azure Private MEC solution, select the option for **an Azure Private MEC solution in your environment**.
+ - If you are working with an SAP Digital Manufacturing solution or another Microsoft partner solution, select the option for **a SAP Digital Manufacturing for Edge Computing or another Microsoft partner solution in your environment**.
+ - For other workloads, select the option for **other workloads in your environment**.
+
+ If prompted, confirm the option you specified and then select **Apply**.
+
+ To use PowerShell to specify the workload, see detailed steps in [Change Kubernetes workload profiles](azure-stack-edge-gpu-connect-powershell-interface.md#change-kubernetes-workload-profiles).
![Screenshot of the Workload selection options on the Kubernetes page of the local UI for two node.](./media/azure-stack-edge-gpu-deploy-configure-network-compute-web-proxy/azure-stack-edge-kubernetes-workload-selection.png)
databox Data Box Deploy Export Ordered https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox/data-box-deploy-export-ordered.md
Previously updated : 06/16/2022 Last updated : 09/29/2023 #Customer intent: As an IT admin, I need to be able to export data from Azure to another location, such as, another cloud provider or my location.
To use an XML file to export your data:
![Select Export option, Containers](media/data-box-deploy-export-ordered/azure-data-box-export-sms-use-xml-file-containers-option.png)
-3. In **New Container** tab that pops out from the right side of the Azure portal, add a name for the container. The name must be lower-case and you may include numbers and dashes '-'. Then select the **Public access level** from the drop-down list box. We recommend that you choose **Private (non anonymous access)** to prevent others from accessing your data. For more information regarding container access levels, see [Container access permissions](../storage/blobs/anonymous-read-access-configure.md#set-the-public-access-level-for-a-container).
+3. In the **New Container** tab that pops out from the right side of the Azure portal, add a name for the container. The name must be lowercase and can include numbers and dashes ('-'). Then select the **Public access level** from the drop-down list. We recommend that you choose **Private (non anonymous access)** to prevent others from accessing your data. For more information about container access levels, see [Container access permissions](../storage/blobs/anonymous-read-access-configure.md#set-the-anonymous-access-level-for-a-container).
![Select Export option, New container settings](media/data-box-deploy-export-ordered/azure-data-box-export-sms-use-xml-file-container-settings.png)
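If you prefer to create the container from the command line instead of the portal, a minimal Azure CLI sketch (the container and storage account names below are placeholders, not values from this article) might look like this:

```azurecli
# Create a private container (no anonymous access) to receive the export data
az storage container create \
    --name my-export-container \
    --account-name <storage-account-name> \
    --public-access off \
    --auth-mode login
```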
defender-for-cloud Defender For Apis Deploy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-apis-deploy.md
Title: Protect your APIs with Defender for APIs (Preview)
+ Title: Protect your APIs with Defender for APIs
description: Learn about deploying the Defender for APIs plan in Defender for Cloud
defender-for-cloud Defender For Apis Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-apis-manage.md
There are three types of APIs you can query:
- **API Endpoints** - A group of all types of API endpoints. -- **API Management** services - API management services are platforms that provide tools and infrastructure for managing APIs, typically through a web-based interface. They often include features such as: API gateway, API portal, API analytics and API security.
+- **API Management services** - API management services are platforms that provide tools and infrastructure for managing APIs, typically through a web-based interface. They often include features such as an API gateway, an API portal, API analytics, and API security.
**To query APIs in the cloud security graph**:
defender-for-cloud Defender For Apis Posture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-apis-posture.md
Last updated 05/08/2023
# Investigate API findings, recommendations, and alerts
-This article describes how to investigate API security findings, alerts, and security posture recommendations for APIs protected by [Microsoft Defender for APIs](defender-for-apis-introduction.md). Defender for APIs is currently in preview.
+This article describes how to investigate API security findings, alerts, and security posture recommendations for APIs protected by [Microsoft Defender for APIs](defender-for-apis-introduction.md).
## Before you start
When the Defender CSPM plan is enabled together with Defender for APIs, you can
1. In the Defender for Cloud portal, select **Cloud Security Explorer**. 1. In **What would you like to search?** select the **APIs** category. 1. Review the search results so that you can review, prioritize, and fix any API issues.
+1. Alternatively, you can select one of the templated API queries to see high-risk issues like **Internet exposed API endpoints with sensitive data** or **APIs communicating over unencrypted protocols with unauthenticated API endpoints**.
## Next steps
defender-for-cloud Defender For Apis Prepare https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-apis-prepare.md
Review the latest cloud support information for Defender for Cloud plans and fea
Availability | This feature is available in the Premium, Standard, Basic, and Developer tiers of Azure API Management. API gateways | Azure API Management<br/><br/> Defender for APIs currently doesn't onboard APIs that are exposed using the API Management [self-hosted gateway](../api-management/self-hosted-gateway-overview.md), or managed using API Management [workspaces](../api-management/workspaces-overview.md). API types | Currently, Defender for APIs discovers and analyzes REST APIs.
-Multi-region support | In multi-region Azure API Management instances, some ML-based detections and security insights (data classification, authentication check, unused and external APIs) aren't supported in secondary regions. In such cases, data residency requirements are still met.ΓÇ»
+Multi-region support | In multi-regional managed and self-hosted Azure API Management deployments, security insights (data classification, authentication check, unused and external APIs) aren't supported in secondary regions. In such cases, data residency requirements are still met.
## Defender CSPM integration
defender-for-cloud Defender For Apis Validation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-apis-validation.md
This page will walk you through the steps to trigger an alert for one of your AP
1. In the key field enter **User-Agent**.
-1. In the value field enter **jvascript:**.
+1. In the value field enter **javascript:**.
:::image type="content" source="media/defender-for-apis-validation/postman-keys.png" alt-text="Screenshot that shows where to enter the keys and their values in Postman.":::
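If you'd rather not use Postman, you can send the same suspicious header from a shell; this is only an illustrative sketch, and the URL is a placeholder for one of your onboarded API endpoints:

```bash
# Send a request with the suspicious User-Agent value to trigger the detection
curl -H "User-Agent: javascript:" "https://<your-api-management-endpoint>/<api-path>"
```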
deployment-environments How To Create Access Environments https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/deployment-environments/how-to-create-access-environments.md
Code: EnvironmentNotFound
Message: The environment resource was not found. ```
-To resolve the issue, assign the correct permissions: [Give project access to the development team](quickstart-create-and-configure-projects.md#give-project-access-to-the-development-team).
+To resolve the issue, assign the correct permissions: [Give access to the development team](quickstart-create-and-configure-projects.md#give-access-to-the-development-team).
## Access an environment
deployment-environments How To Request Quota Increase https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/deployment-environments/how-to-request-quota-increase.md
+
+ Title: Request a quota limit increase for Azure Deployment Environments resources
+description: Learn how to request a quota increase to extend the number of Deployment Environments resources you can use in your subscription.
+++++ Last updated : 09/27/2023++
+# Request a quota limit increase for Azure Deployment Environments resources
+
+This article describes how to submit a support request for increasing the number of resources available to Azure Deployment Environments in your Azure subscription.
+
+If your organization uses Deployment Environments extensively, you may encounter a quota limit during deployment. When you reach the limit for a resource in your subscription, you can request a limit increase (sometimes called a capacity increase, or a quota increase) to extend the number of resources available. The request process allows the Azure Deployment Environments team to ensure that your subscription isn't involved in any cases of fraud or unintentional, sudden large-scale deployments.
+
+Learn more about the general [process for creating Azure support requests](/azure/azure-portal/supportability/how-to-create-azure-support-request).
+
+## Prerequisites
+
+- To create a support request, your Azure account needs the [Owner](/azure/role-based-access-control/built-in-roles#owner), [Contributor](/azure/role-based-access-control/built-in-roles#contributor), or [Support Request Contributor](/azure/role-based-access-control/built-in-roles#support-request-contributor) role at the subscription level.
+- Before you create a support request for a limit increase, you need to gather additional information.
+
+## Gather information for your request
+
+Submitting a support request for additional quota is quicker if you gather the required information before you begin the request process.
+
+- **Identify the quota type**
+
+ If you reach the quota limit for a Deployment Environments resource, you see a notification indicating which quota type is affected during deployment. Take note of it and submit a request for that quota type.
+
+ The following resources are limited by subscription.
+
+ - Runtime limit per month (mins)
+ - Runtime limit per deployment (mins)
+ - Storage limit per environment (GBs)
++
+- **Determine the region for the additional quota**
+
+ Deployment Environments resources can exist in many regions. You should choose the region where your Deployment Environments Project exists for best performance.
+
+ For more information about Azure regions, how they relate to global geographies, and which services are available in each region, see [Azure global infrastructure](https://azure.microsoft.com/explore/global-infrastructure/products-by-region/).
+
+## Submit a new support request
+
+Follow these steps to request a limit increase:
+
+1. On the Azure portal home page, select **Support & troubleshooting**, and then select **Help + support**.
+
+ :::image type="content" source="./media/how-to-request-capacity-increase/submit-new-request.png" alt-text="Screenshot of the Azure portal home page, highlighting the Request core limit increase button." lightbox="./media/how-to-request-capacity-increase/submit-new-request.png":::
+
+1. On the **Help + support** page, select **Create a support request**.
+
+ :::image type="content" source="./media/how-to-request-capacity-increase/create-support-request.png" alt-text="Screenshot of the Help + support page, highlighting Create a support request." lightbox="./media/how-to-request-capacity-increase/create-support-request.png":::
+
+1. On the **New support request** page, enter the following information, and then select **Next**.
+
+ | Name | Value |
+ | -- | - |
+ | **Issue type** | *Service and subscription limits (quotas)* |
+ | **Subscription** | Select the subscription to which the request applies. |
+ | **Quota type** | *Azure Deployment Environments* |
+
+1. On the **Additional details** tab, in the **Problem details** section, select **Enter details**.
+
+ :::image type="content" source="media/how-to-request-capacity-increase/enter-details.png" alt-text="Screenshot of the New support request page, highlighting Enter details." lightbox="media/how-to-request-capacity-increase/enter-details.png":::
+
+1. In **Quota details**, enter the following information, and then select **Next**.
+
+ | Name | Value |
+ | -- | - |
+ | **Quota type** | Select the **Quota type** that you want to increase. |
+ | **Region** | Select the **Region** in which you want to increase your quota. |
+ | **Additional quota** | Enter the additional number of minutes that you need, or GBs per environment for Storage limit increases. |
+ | **Additional info** | Enter any extra information about your request. |
+
+ :::image type="content" source="media/how-to-request-capacity-increase/quota-details.png" alt-text="Screenshot of the Quota details pane." lightbox="media/how-to-request-capacity-increase/quota-details.png":::
+
+1. Select **Save and continue**.
+## Complete the support request
+
+To complete the support request, enter the following information:
+
+1. Complete the remainder of the support request **Additional details** tab using the following information:
+
+ ### Advanced diagnostic information
+
+ |Name |Value |
+ |||
+ |**Allow collection of advanced diagnostic information**|Select yes or no.|
+
+ ### Support method
+
+ |Name |Value |
+ |||
+ |**Support plan**|Select your support plan.|
+ |**Severity**|Select the severity of the issue.|
+ |**Preferred contact method**|Select email or phone.|
+ |**Your availability**|Enter your availability.|
+ |**Support language**|Select your language preference.|
+
+ ### Contact information
+
+ |Name |Value |
+ |||
+ |**First name**|Enter your first name.|
+ |**Last name**|Enter your last name.|
+ |**Email**|Enter your contact email.|
+ |**Additional email for notification**|Enter an email for notifications.|
+ |**Phone**|Enter your contact phone number.|
+ |**Country/region**|Enter your location.|
+ |**Save contact changes for future support requests.**|Select the check box to save changes.|
+
+1. Select **Next**.
+
+1. On the **Review + create** tab, review the information, and then select **Create**.
+
+## Related content
+
+- Check the default quota for each resource type by subscription type: [Azure Deployment Environments limits](/azure/azure-resource-manager/management/azure-subscription-service-limits#azure-deployment-environments-limits)
deployment-environments Quickstart Create And Configure Devcenter https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/deployment-environments/quickstart-create-and-configure-devcenter.md
Previously updated : 04/25/2023 Last updated : 09/06/2023 # Quickstart: Create and configure a dev center for Azure Deployment Environments
A platform engineering team typically sets up a dev center, attaches external ca
The following diagram shows the steps you perform in this quickstart to configure a dev center for Azure Deployment Environments in the Azure portal. -
-First, you create a dev center to organize your deployment environments resources. Next, you create a key vault to store the GitHub personal access token (PAT) that is used to grant Azure access to your GitHub repository. Then, you attach an identity to the dev center and assign that identity access to the key vault. Then, you add a catalog that stores your IaC templates to the dev center. Finally, you create environment types to define the types of environments that development teams can create.
--
-The following diagram shows the steps you perform in the [Create and configure a project quickstart](quickstart-create-and-configure-projects.md) to configure a project associated with a dev center for Deployment Environments.
- You need to perform the steps in both quickstarts before you can create a deployment environment.
To create and configure a Dev center in Azure Deployment Environments by using t
:::image type="content" source="media/quickstart-create-and-configure-devcenter/deployment-environments-devcenter-created.png" alt-text="Screenshot that shows the Dev centers overview, to confirm that the dev center is created.":::
-## Create a Key Vault
-You need an Azure Key Vault to store the GitHub personal access token (PAT) that is used to grant Azure access to your GitHub repository. Key Vaults can control access with either access policies or role-based access control (RBAC). If you have an existing key vault, you can use it, but you should check whether it uses access policies or RBAC assignments to control access. In this quickstart, you create an RBAC Key Vault. For help with configuring an access policy for a key vault, see [Assign a Key Vault access policy](/azure/key-vault/general/assign-access-policy?branch=main&tabs=azure-portal).
-
-If you don't have an existing key vault, use the following steps to create one:
+### Create a Key Vault
+When you are using a GitHub repository or an Azure DevOps repository to store your [catalog](./concept-environments-key-concepts.md#catalogs), you need an Azure Key Vault to store a personal access token (PAT) that is used to grant Azure access to your repository. Key Vaults can control access with either access policies or role-based access control (RBAC). If you have an existing key vault, you can use it, but you should check whether it uses access policies or RBAC assignments to control access. This quickstart assumes you're using an RBAC Key Vault and a GitHub repository.
-1. Sign in to the [Azure portal](https://portal.azure.com).
-1. In the Search box, enter *Key Vault*.
-1. From the results list, select **Key Vault**.
-1. On the Key Vault page, select **Create**.
-1. On the Create key vault tab, provide the following information:
+If you don't have an existing key vault, follow the steps in [Quickstart: Create a key vault using the Azure portal](/azure/key-vault/general/quick-create-portal) to create one.
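As an alternative to the portal quickstart, a minimal Azure CLI sketch for creating an RBAC-enabled key vault (the vault name, resource group, and region below are placeholders) could be:

```azurecli
# Create a key vault that uses Azure RBAC instead of access policies
az keyvault create \
    --name contoso-kv \
    --resource-group <resource-group> \
    --location eastus \
    --enable-rbac-authorization true
```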
- |Name |Value |
- |-|--|
- |**Name**|Enter a name for the key vault.|
- |**Subscription**|Select the subscription in which you want to create the key vault.|
- |**Resource group**|Either use an existing resource group or select **Create new** and enter a name for the resource group.|
- |**Location**|Select the location or region where you want to create the key vault.|
-
- Leave the other options at their defaults.
-
-1. On the Access configuration tab, select **Azure role-based access control**, and then select **Review + create**.
-
-1. On the Review + create tab, select **Create**.
-
-## Create a personal access token
+### Configure a personal access token
Using an authentication token like a GitHub PAT enables you to share your repository securely. GitHub offers classic PATs, and fine-grained PATs. Fine-grained and classic PATs work with Azure Deployment Environments, but fine-grained tokens give you more granular control over the repositories to which you're allowing access. > [!TIP]
Using an authentication token like a GitHub PAT enables you to share your reposi
- Select **Create**. 1. Leave this tab open, you need to come back to the Key Vault later.
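If you prefer to store the PAT from the Azure CLI rather than the portal, a minimal sketch follows; the vault and secret names match the examples later in this article, and the token value is a placeholder:

```azurecli
# Store the GitHub PAT as a key vault secret
az keyvault secret set \
    --vault-name contoso-kv \
    --name GitHub-repo-pat \
    --value "<your-github-pat>"
```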
-## Attach an identity to the dev center
+## Configure a managed identity for the dev center
After you create a dev center, attach an [identity](concept-environments-key-concepts.md#identities) to the dev center. You can attach either a system-assigned managed identity or a user-assigned managed identity. Learn about the two [types of identities](how-to-configure-managed-identity.md#add-a-managed-identity).
-In this quickstart, you configure a system-assigned managed identity for your dev center.
+In this quickstart, you configure a system-assigned managed identity for your dev center. You then assign roles to the managed identity to allow the dev center to create environment types in your subscription and read the key vault secret that contains the GitHub PAT.
### Attach a system-assigned managed identity
To attach a system-assigned managed identity to your dev center:
1. In the **Enable system assigned managed identity** dialog, select **Yes**.
-### Assign the system-assigned managed identity access to the key vault secret
-Make sure that the identity has access to the key vault secret that contains the personal access token to access your repository. Key Vaults support two methods of access; Azure role-based access control (RBAC) or Vault access policy. In this quickstart, you use an RBAC key vault.
+### Assign roles for the dev center managed identity
-Configure vault access:
-1. In the Azure portal, go to the key vault that contains the secret with the personal access token.
+The managed identity that represents your dev center requires access to the subscriptions where you configure the [project environment types](concept-environments-key-concepts.md#project-environment-types), and to the key vault secret that stores your GitHub PAT.
-1. In the left menu, select **Access control (IAM)**.
+1. Navigate to your dev center.
+1. On the left menu under Settings, select **Identity**.
+1. Under System assigned > Permissions, select **Azure role assignments**.
-1. Select **Add** > **Add role assignment**.
+ :::image type="content" source="media/quickstart-create-configure-projects/system-assigned-managed-identity.png" alt-text="Screenshot that shows a system-assigned managed identity with Role assignments highlighted.":::
-1. Assign the following role. For detailed steps, see [Assign Azure roles using the Azure portal](../role-based-access-control/role-assignments-portal.md).
+1. To give access to the subscription, select **Add role assignment (Preview)**, enter or select the following information, and then select **Save**:
- | Setting | Value |
- | | |
- | **Role** | Select **Key Vault Secrets User**. |
- | **Assign access to** | Select **Managed identity**. |
- | **Members** | Select the dev center managed identity that you created in [Attach a system-assigned managed identity](#attach-a-system-assigned-managed-identity). |
+ |Name |Value |
+ ||-|
+ |**Scope**|Subscription|
+ |**Subscription**|Select the subscription in which to use the managed identity.|
+ |**Role**|Owner|
+
+1. To give access to the key vault, select **Add role assignment (Preview)**, enter or select the following information, and then select **Save**:
+
+ |Name |Value |
+ ||-|
+ |**Scope**|Key Vault|
+ |**Subscription**|Select the subscription in which to use the managed identity.|
+ |**Resource**|Select the key vault that you created earlier.|
+ |**Role**|Key Vault Secrets User|
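If you prefer scripting these assignments, a minimal Azure CLI sketch (assuming you've copied the identity's principal ID from the dev center's Identity page; all other values are placeholders) could be:

```azurecli
# principalId: object ID of the dev center's system-assigned managed identity
principalId="<dev-center-principal-id>"

# Owner on the subscription used for project environment types
az role assignment create --assignee-object-id "$principalId" --assignee-principal-type ServicePrincipal \
    --role "Owner" --scope "/subscriptions/<subscription-id>"

# Key Vault Secrets User on the vault that stores the GitHub PAT
az role assignment create --assignee-object-id "$principalId" --assignee-principal-type ServicePrincipal \
    --role "Key Vault Secrets User" \
    --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.KeyVault/vaults/contoso-kv"
```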
## Add a catalog to the dev center Azure Deployment Environments supports attaching Azure DevOps repositories and GitHub repositories. You can store a set of curated IaC templates in a repository. Attaching the repository to a dev center as a catalog gives your development teams access to the templates and enables them to quickly create consistent environments.
You also need the path to the secret you created in the key vault.
| **Git clone URI** | Enter or paste the clone URL for either your GitHub repository or your Azure DevOps repository.<br />*Sample catalog example:* `https://github.com/Azure/deployment-environments.git` | | **Branch** | Enter the repository branch to connect to.<br />*Sample catalog example:* `main`| | **Folder path** | Enter the folder path relative to the clone URI that contains subfolders that hold your environment definitions. <br /> The folder path is for the folder with subfolders containing environment definition manifests, not for the folder with the environment definition manifest itself. The following image shows the sample catalog folder structure.<br />*Sample catalog example:* `/Environments`<br /> :::image type="content" source="media/how-to-configure-catalog/github-folders.png" alt-text="Screenshot showing Environments sample folder in GitHub."::: The folder path can begin with or without a forward slash (`/`).|
- | **Secret identifier**| Enter the [secret identifier](#create-a-personal-access-token) that contains your personal access token for the repository.<br /> When you copy a secret identifier, the connection string includes a version identifier at the end, like in this example: `https://contoso-kv.vault.azure.net/secrets/GitHub-repo-pat/9376b432b72441a1b9e795695708ea5a`.<br />Removing the version identifier ensures that Deployment Environments fetches the latest version of the secret from the key vault. If your personal access token expires, only the key vault needs to be updated. <br />*Example secret identifier:* `https://contoso-kv.vault.azure.net/secrets/GitHub-repo-pat`|
+ | **Secret identifier**| Enter the [secret identifier](#configure-a-personal-access-token) that contains your personal access token for the repository.<br /> When you copy a secret identifier, the connection string includes a version identifier at the end, like in this example: `https://contoso-kv.vault.azure.net/secrets/GitHub-repo-pat/9376b432b72441a1b9e795695708ea5a`.<br />Removing the version identifier ensures that Deployment Environments fetches the latest version of the secret from the key vault. If your personal access token expires, only the key vault needs to be updated. <br />*Example secret identifier:* `https://contoso-kv.vault.azure.net/secrets/GitHub-repo-pat`|
:::image type="content" source="media/how-to-configure-catalog/add-catalog-form-inline.png" alt-text="Screenshot that shows how to add a catalog to a dev center." lightbox="media/how-to-configure-catalog/add-catalog-form-expanded.png":::
deployment-environments Quickstart Create And Configure Projects https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/deployment-environments/quickstart-create-and-configure-projects.md
Previously updated : 04/25/2023 Last updated : 09/06/2023 # Quickstart: Create and configure a project
-This quickstart shows you how to create a project in Azure Deployment Environments. Then, you associate the project with the dev center you created in [Quickstart: Create and configure a dev center](./quickstart-create-and-configure-devcenter.md).
+This quickstart shows you how to create a project in Azure Deployment Environments, and associate the project with the dev center you created in [Quickstart: Create and configure a dev center](./quickstart-create-and-configure-devcenter.md).
-A platform engineering team typically creates projects and provides project access to development teams. Development teams then create [environments](concept-environments-key-concepts.md#environments) by using [environment definitions](concept-environments-key-concepts.md#environment-definitions), connect to individual resources, and deploy applications.
-
-The following diagram shows the steps you perform in the [Create and configure a dev center for Azure Deployment Environments](quickstart-create-and-configure-devcenter.md) quickstart to configure a dev center for Azure Deployment Environments in the Azure portal. You must perform these steps before you can create a project.
-
-
The following diagram shows the steps you perform in this quickstart to configure a project associated with a dev center for Deployment Environments in the Azure portal. First, you create a project. Then, assign the dev center managed identity the Owner role to the subscription. Then, you configure the project by creating a project environment type. Finally, you give the development team access to the project by assigning the [Deployment Environments User](how-to-configure-deployment-environments-user.md) role to the project. You need to perform the steps in both quickstarts before you can create a deployment environment.
-For more information on how to create an environment, see [Quickstart: Create and access Azure Deployment Environments by using the developer portal](quickstart-create-access-environments.md).
- ## Prerequisites - An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). - Azure role-based access control role with permissions to create and manage resources in the subscription, such as [Contributor](../role-based-access-control/built-in-roles.md#contributor) or [Owner](../role-based-access-control/built-in-roles.md#owner).
+- An Azure Deployment Environments dev center with a catalog attached. If you don't have a dev center with a catalog, see [Quickstart: Create and configure a dev center](./quickstart-create-and-configure-devcenter.md).
## Create a project
To create a project in your dev center:
1. On the **Review + Create** tab, wait for deployment validation, and then select **Create**.
- :::image type="content" source="media/quickstart-create-configure-projects/create-project-page-review-create.png" alt-text="Screenshot that shows selecting the Review + Create button to validate and create a project.":::
+ :::image type="content" source="media/quickstart-create-configure-projects/create-project.png" alt-text="Screenshot that shows selecting the create project basics tab.":::
1. Confirm that the project was successfully created by checking your Azure portal notifications. Then, select **Go to resource**.
To create a project in your dev center:
:::image type="content" source="media/quickstart-create-configure-projects/created-project.png" alt-text="Screenshot that shows the project overview pane.":::
-### Assign a managed identity the owner role to the subscription
-Before you can create environment types, you must give the managed identity that represents your dev center access to the subscriptions where you configure the [project environment types](concept-environments-key-concepts.md#project-environment-types).
-
-In this quickstart you assign the Owner role to the system-assigned managed identity that you configured previously: [Attach a system-assigned managed identity](quickstart-create-and-configure-devcenter.md#attach-a-system-assigned-managed-identity).
-
-1. Navigate to your dev center.
-1. On the left menu under Settings, select **Identity**.
-1. Under System assigned > Permissions, select **Azure role assignments**.
-
- :::image type="content" source="media/quickstart-create-configure-projects/system-assigned-managed-identity.png" alt-text="Screenshot that shows a system-assigned managed identity with Role assignments highlighted.":::
-
-1. In Azure role assignments, select **Add role assignment (Preview)**, enter or select the following information, and then select **Save**:
-
- |Name |Value |
- ||-|
- |**Scope**|Subscription|
- |**Subscription**|Select the subscription in which to use the managed identity.|
- |**Role**|Owner|
-
-## Configure a project
+## Create a project environment type
To configure a project, add a [project environment type](how-to-configure-project-environment-types.md):
To configure a project, add a [project environment type](how-to-configure-projec
> [!NOTE] > At least one identity (system-assigned or user-assigned) must be enabled for deployment identity. The identity is used to perform the environment deployment on behalf of the developer. Additionally, the identity attached to the dev center should be [assigned the Owner role](how-to-configure-managed-identity.md) for access to the deployment subscription for each environment type.
-## Give project access to the development team
+## Give access to the development team
1. In the Azure portal, go to your project.
dev-box How To Determine Your Quota Usage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/how-to-determine-your-quota-usage.md
Last updated 08/21/2023
-# Determine resource usage and quota
+# Determine resource usage and quota for Microsoft Dev Box
To ensure that resources are available for customers, Microsoft Dev Box has a limit on the number of each type of resource that can be used in a subscription. This limit is called a quota. Keeping track of how your quota of VM cores is being used across your subscriptions can be difficult. You may want to know what your current usage is, how much you have left, and in what regions you have capacity. To help you understand where and how you're using your quota, Azure provides the Usage + Quotas page.
-## Determine your usage and quota
+## Determine your Dev Box usage and quota by subscription
1. In the [Azure portal](https://portal.azure.com), go to the subscription you want to examine.
dev-box How To Request Quota Increase https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/how-to-request-quota-increase.md
Last updated 08/22/2023
-# Request a quota limit increase
+# Request a quota limit increase for Microsoft Dev Box resources
This article describes how to submit a support request for increasing the number of resources for Microsoft Dev Box in your Azure subscription. When you reach the limit for a resource in your subscription, you can request a limit increase (sometimes called a capacity increase, or a quota increase) to extend the number of resources available. The request process allows the Microsoft Dev Box team to ensure that your subscription isn't involved in any cases of fraud or unintentional, sudden large-scale deployments.
-The time it takes to increase your quota varies depending on the VM size, region, and number of resources requested. You won't have to go through the process of requesting extra capacity often, but to ensure you have the resources you require when you need them, you should:
+The time it takes to increase your quota varies depending on the VM size, region, and number of resources requested. You won't have to go through the process of requesting extra capacity often. To ensure you have the resources you require when you need them, you should:
- Request capacity as far in advance as possible. - If possible, be flexible on the region where you're requesting capacity.
event-hubs Event Hubs Dedicated Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/event-hubs-dedicated-overview.md
With self-serve scalable clusters, you can purchase up to 10 CUs for a cluster i
If you need a cluster larger than 10 CU, you can [submit a support request](event-hubs-dedicated-cluster-create-portal.md#submit-a-support-request) to scale up your cluster after its creation. > [!IMPORTANT]
-> Self-serve scalable Dedicated can be deployed with [availability zones](../availability-zones/az-overview.md#azure-regions-with-availability-zones) enabled with 3 CUs but you won't be able to use the self-serve scaling capability to scale the cluster. You must instead [submit a support request](event-hubs-dedicated-cluster-create-portal.md#submit-a-support-request) to scale the AZ enabled cluster.
+> Self-serve scalable Dedicated clusters can be deployed with [availability zones](../availability-zones/az-overview.md#azure-regions-with-availability-zones) enabled with 3 CUs, but you won't be able to use the self-serve scaling capability to scale the cluster. To create or scale an AZ-enabled self-serve cluster, you must [submit a support request](event-hubs-dedicated-cluster-create-portal.md#submit-a-support-request).
### Legacy clusters Event Hubs Dedicated clusters created prior to the availability of self-serve scalable clusters are referred to as legacy clusters.
event-hubs Event Hubs Dotnet Standard Getstarted Send https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/event-hubs-dotnet-standard-getstarted-send.md
This section shows you how to create a .NET Core console application to send eve
{ // if it is too large for the batch throw new Exception($"Event {i} is too large for the batch and cannot be sent.");
- Console.ReadLine();
} }
hdinsight Hbase Troubleshoot Hbase Hbck Inconsistencies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hbase/hbase-troubleshoot-hbase-hbck-inconsistencies.md
Title: hbase hbck returns inconsistencies in Azure HDInsight
description: hbase hbck returns inconsistencies in Azure HDInsight Previously updated : 08/28/2022 Last updated : 09/19/2023 # Scenario: `hbase hbck` command returns inconsistencies in Azure HDInsight
Varies.
## Issue: Region is offline
-Region xxx not deployed on any RegionServer. This means the region is in `hbase:meta`, but offline.
+Region xxx not deployed on any RegionServer. It means the region is in `hbase:meta`, but offline.
### Cause
Bring regions online by running:
hbase hbck -ignorePreCheckPermission ΓÇôfixAssignment ```
-Alternatively, run `assign <region-hash>` on hbase-shell to force to assign this region
+Alternatively, run `assign <region-hash>` in the HBase shell to force-assign this region.
Varies.
### Resolution
-Manually merge those overlapped regions. Go to HBase HMaster Web UI table section, select the table link, which has the issue. You will see start key/end key of each region belonging to that table. Then merge those overlapped regions. In HBase shell, do `merge_region 'xxxxxxxx','yyyyyyy', true`. For example:
+Manually merge the overlapped regions. Go to the table section of the HBase HMaster web UI and select the link for the table that has the issue. You see the start key and end key of each region belonging to that table. Then merge the overlapped regions. In the HBase shell, run `merge_region 'xxxxxxxx','yyyyyyy', true`. For example:
``` RegionA, startkey:001, endkey:010,
Can't load `.regioninfo` for region `/hbase/data/default/tablex/regiony`.
### Cause
-This is most likely due to region partial deletion when RegionServer crashes or VM reboots. Currently, the Azure Storage is a flat blob file system and some file operations are not atomic.
+This issue is most likely due to partial deletion of a region when a RegionServer crashes or a VM reboots. Currently, Azure Storage is a flat blob file system, and some file operations aren't atomic.
### Resolution
hdinsight Hdinsight Custom Ambari Db https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-custom-ambari-db.md
description: Learn how to create HDInsight clusters with your own custom Apache
Previously updated : 08/16/2022 Last updated : 09/29/2023 # Set up HDInsight clusters with a custom Ambari DB
The custom Ambari DB feature allows you to deploy a new cluster and setup Ambari
The remainder of this article discusses the following points: - requirements to use the custom Ambari DB feature-- the steps necessary to provision HDInsight clusters using your own external database for Apache Ambari
+- the steps necessary to provision an HDInsight cluster using your own external database for Apache Ambari
## Custom Ambari DB requirements
The custom Ambari DB has the following other requirements:
- You must have an existing Azure SQL DB server and database. - The database that you provide for Ambari setup must be empty. There should be no tables in the default dbo schema. - The user used to connect to the database should have SELECT, CREATE TABLE, and INSERT permissions on the database.-- Turn on the option to [Allow access to Azure services](/azure/azure-sql/database/vnet-service-endpoint-rule-overview#azure-portal-steps) on the server where you will host Ambari.
+- Turn on the option to [Allow access to Azure services](/azure/azure-sql/database/vnet-service-endpoint-rule-overview#azure-portal-steps) on the server where you host Ambari.
- Management IP addresses from HDInsight service need to be allowed in the firewall rule. See [HDInsight management IP addresses](hdinsight-management-ip-addresses.md) for a list of the IP addresses that must be added to the server-level firewall rule. When you host your Apache Ambari DB in an external database, remember the following points: -- You're responsible for the additional costs of the Azure SQL DB that holds Ambari.
+- You're responsible for the extra costs of the Azure SQL DB that holds Ambari.
- Back up your custom Ambari DB periodically. Azure SQL Database generates backups automatically, but the backup retention time-frame varies. For more information, see [Learn about automatic SQL Database backups](/azure/azure-sql/database/automated-backups-overview). - Don't change the custom Ambari DB password after the HDInsight cluster reaches the **Running** state. It is not supported.
When you host your Apache Ambari DB in an external database, remember the follow
To create an HDInsight cluster that uses your own external Ambari database, use the [custom Ambari DB Quickstart template](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.hdinsight/hdinsight-custom-ambari-db).
-Edit the parameters in the `azuredeploy.parameters.json` to specify information about your new cluster and the database that will hold Ambari.
+Edit the parameters in the `azuredeploy.parameters.json` to specify information about your new cluster and the database that holds Ambari.
You can begin the deployment using the Azure CLI. Replace `<RESOURCEGROUPNAME>` with the resource group where you want to deploy your cluster.
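For example, a minimal sketch of the deployment command (this assumes the quickstart template is saved locally as `azuredeploy.json` next to your edited parameters file) might look like this:

```azurecli
# Deploy the custom Ambari DB quickstart template with your edited parameters
az deployment group create \
    --resource-group <RESOURCEGROUPNAME> \
    --template-file azuredeploy.json \
    --parameters azuredeploy.parameters.json
```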
hdinsight Hdinsight Hadoop Collect Debug Heap Dump Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-hadoop-collect-debug-heap-dump-linux.md
description: Enable heap dumps for Apache Hadoop services from Linux-based HDIns
Previously updated : 07/19/2022 Last updated : 09/19/2023 # Enable heap dumps for Apache Hadoop services on Linux-based HDInsight
hdinsight Hdinsight Migrate Granular Access Cluster Configurations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-migrate-granular-access-cluster-configurations.md
Title: Granular role-based access Azure HDInsight cluster configurations
description: Learn about the changes required as part of the migration to granular role-based access for HDInsight cluster configurations. Previously updated : 06/29/2022 Last updated : 09/19/2023 # Migrate to granular role-based access for cluster configurations
Previously, secrets could be obtained via the HDInsight API by cluster users
possessing the Owner, Contributor, or Reader [Azure roles](../role-based-access-control/rbac-and-directory-admin-roles.md), as they were available to anyone with the `*/read` permission. Secrets are defined as values that could be used to obtain more elevated access than a user's role should allow. These include values such as cluster gateway HTTP credentials, storage account keys, and database credentials.
-Beginning on September 3, 2019, accessing these secrets will require the `Microsoft.HDInsight/clusters/configurations/action` permission, meaning they can no longer be accessed by users with the Reader role. The roles that have this permission are Contributor, Owner, and the new HDInsight Cluster Operator role (more on that below).
+Beginning on September 3, 2019, accessing these secrets requires the `Microsoft.HDInsight/clusters/configurations/action` permission, so users with only the Reader role can no longer access them. The roles that have this permission are Contributor, Owner, and the new HDInsight Cluster Operator role.
-We are also introducing a new [HDInsight Cluster Operator](../role-based-access-control/built-in-roles.md#hdinsight-cluster-operator) role
-that will be able to retrieve secrets without being granted the administrative
-permissions of Contributor or Owner. To summarize:
+We are also introducing a new [HDInsight Cluster Operator](../role-based-access-control/built-in-roles.md#hdinsight-cluster-operator) role that can retrieve secrets without being granted the administrative permissions of Contributor or Owner. To summarize:
| Role | Previously | Going Forward | ||--|--|
The following entities and scenarios are affected:
- [API](#api): Users using the `/configurations` or `/configurations/{configurationName}` endpoints. - [Azure HDInsight Tools for Visual Studio Code](#azure-hdinsight-tools-for-visual-studio-code) version 1.1.1 or below. - [Azure Toolkit for IntelliJ](#azure-toolkit-for-intellij) version 3.20.0 or below.-- [Azure Data Lake and Stream Analytics Tools for Visual Studio](#azure-data-lake-and-stream-analytics-tools-for-visual-studio) below version 2.3.9000.1.
+- [Azure Data Lake and Stream Analytics Tools for Visual Studio](#azure-data-lake-and-stream-analytics-tools-for-visual-studio) versions earlier than 2.3.9000.1.
- [Azure Toolkit for Eclipse](#azure-toolkit-for-eclipse) version 3.15.0 or below. - [SDK for .NET](#sdk-for-net) - [versions 1.x or 2.x](#versions-1x-and-2x): Users using the `GetClusterConfigurations`, `GetConnectivitySettings`, `ConfigureHttpSettings`, `EnableHttp` or `DisableHttp` methods from the ConfigurationsOperationsExtensions class.
The following entities and scenarios are affected:
- [SDK for Python](#sdk-for-python): Users using the `get` or `update` methods from the `ConfigurationsOperations` class. - [SDK for Java](#sdk-for-java): Users using the `update` or `get` methods from the `ConfigurationsInner` class. - [SDK for Go](#sdk-for-go): Users using the `Get` or `Update` methods from the `ConfigurationsClient` struct.-- [Az.HDInsight PowerShell](#azhdinsight-powershell) below version 2.0.0.
+- [Az.HDInsight PowerShell](#azhdinsight-powershell) versions earlier than 2.0.0.
See the below sections (or use the above links) to see the migration steps for your scenario. ### API
-The following APIs will be changed or deprecated:
+The following APIs are changed or deprecated:
- [**GET /configurations/{configurationName}**](/rest/api/hdinsight/hdinsight-cluster#get-configuration) (sensitive information removed) - Previously used to obtain individual configuration types (including secrets).
If you are using version 3.15.0 or below, update to the [latest version of the A
Update to [version 2.1.0](https://www.nuget.org/packages/Microsoft.Azure.Management.HDInsight/2.1.0) of the HDInsight SDK for .NET. Minimal code modifications may be required if you are using a method affected by these changes: - `ClusterOperationsExtensions.GetClusterConfigurations` will **no longer return sensitive parameters** like storage keys (core-site) or HTTP credentials (gateway).
- - To retrieve all configurations, including sensitive parameters, use `ClusterOperationsExtensions.ListConfigurations` going forward. Note that users with the 'Reader' role will not be able to use this method. This allows for granular control over which users can access sensitive information for a cluster.
+ - To retrieve all configurations, including sensitive parameters, use `ClusterOperationsExtensions.ListConfigurations` going forward. Users with the 'Reader' role are not able to use this method. This restriction allows for granular control over which users can access sensitive information for a cluster.
- To retrieve just HTTP gateway credentials, use `ClusterOperationsExtensions.GetGatewaySettings`. - `ClusterOperationsExtensions.GetConnectivitySettings` is now deprecated and has been replaced by `ClusterOperationsExtensions.GetGatewaySettings`.
Update to [version 2.1.0](https://www.nuget.org/packages/Microsoft.Azure.Managem
Update to [version 5.0.0](https://www.nuget.org/packages/Microsoft.Azure.Management.HDInsight/5.0.0) or later of the HDInsight SDK for .NET. Minimal code modifications may be required if you are using a method affected by these changes: - [`ConfigurationOperationsExtensions.Get`](/dotnet/api/microsoft.azure.management.hdinsight.configurationsoperationsextensions.get) will **no longer return sensitive parameters** like storage keys (core-site) or HTTP credentials (gateway).
- - To retrieve all configurations, including sensitive parameters, use [`ConfigurationOperationsExtensions.List`](/dotnet/api/microsoft.azure.management.hdinsight.configurationsoperationsextensions.list) going forward.ΓÇ» Note that users with the 'Reader' role will not be able to use this method. This allows for granular control over which users can access sensitive information for a cluster.
+ - To retrieve all configurations, including sensitive parameters, use [`ConfigurationOperationsExtensions.List`](/dotnet/api/microsoft.azure.management.hdinsight.configurationsoperationsextensions.list) going forward. Users with the 'Reader' role are not able to use this method. This restriction allows for granular control over which users can access sensitive information for a cluster.
- To retrieve just HTTP gateway credentials, use [`ClusterOperationsExtensions.GetGatewaySettings`](/dotnet/api/microsoft.azure.management.hdinsight.clustersoperationsextensions.getgatewaysettings). - [`ConfigurationsOperationsExtensions.Update`](/dotnet/api/microsoft.azure.management.hdinsight.configurationsoperationsextensions.update) is now deprecated and has been replaced by [`ClusterOperationsExtensions.UpdateGatewaySettings`](/dotnet/api/microsoft.azure.management.hdinsight.clustersoperationsextensions.updategatewaysettings). - [`ConfigurationsOperationsExtensions.EnableHttp`](/dotnet/api/microsoft.azure.management.hdinsight.configurationsoperationsextensions.enablehttp) and [`DisableHttp`](/dotnet/api/microsoft.azure.management.hdinsight.configurationsoperationsextensions.disablehttp) are now deprecated. HTTP is now always enabled, so these methods are no longer needed.
Update to [version 5.0.0](https://www.nuget.org/packages/Microsoft.Azure.Managem
Update to [version 1.0.0](https://pypi.org/project/azure-mgmt-hdinsight/1.0.0/) or later of the HDInsight SDK for Python. Minimal code modifications may be required if you are using a method affected by these changes: - [`ConfigurationsOperations.get`](/python/api/azure-mgmt-hdinsight/azure.mgmt.hdinsight.operations.configurationsoperations#get-resource-group-name--cluster-name--configuration-name--custom-headers-none--raw-false-operation-config-) will **no longer return sensitive parameters** like storage keys (core-site) or HTTP credentials (gateway).
- - To retrieve all configurations, including sensitive parameters, use [`ConfigurationsOperations.list`](/python/api/azure-mgmt-hdinsight/azure.mgmt.hdinsight.operations.configurationsoperations#list-resource-group-name--cluster-name--custom-headers-none--raw-false-operation-config-) going forward.ΓÇ» Note that users with the 'Reader' role will not be able to use this method. This allows for granular control over which users can access sensitive information for a cluster.
+ - To retrieve all configurations, including sensitive parameters, use [`ConfigurationsOperations.list`](/python/api/azure-mgmt-hdinsight/azure.mgmt.hdinsight.operations.configurationsoperations#list-resource-group-name--cluster-name--custom-headers-none--raw-false-operation-config-) going forward. Users with the 'Reader' role are not able to use this method. This restriction allows for granular control over which users can access sensitive information for a cluster.
- To retrieve just HTTP gateway credentials, use [`ClusterOperations.get_gateway_settings`](/python/api/azure-mgmt-hdinsight/azure.mgmt.hdinsight.operations.clustersoperations#get-gateway-settings-resource-group-name--cluster-name--custom-headers-none--raw-false-operation-config-). - [`ConfigurationsOperations.update`](/python/api/azure-mgmt-hdinsight/azure.mgmt.hdinsight.operations.configurationsoperations#update-resource-group-name--cluster-name--configuration-name--parameters--custom-headers-none--raw-false--polling-true-operation-config-) is now deprecated and has been replaced by [`ClusterOperations.update_gateway_settings`](/python/api/azure-mgmt-hdinsight/azure.mgmt.hdinsight.operations.clustersoperations#update-gateway-settings-resource-group-name--cluster-name--parameters--custom-headers-none--raw-false--polling-true-operation-config-).
Update to [version 1.0.0](https://search.maven.org/artifact/com.microsoft.azure.
Update to [version 27.1.0](https://github.com/Azure/azure-sdk-for-go/tree/main/sdk/resourcemanager/hdinsight) or later of the HDInsight SDK for Go. Minimal code modifications may be required if you are using a method affected by these changes: - [`ConfigurationsClient.get`](https://godoc.org/github.com/Azure/azure-sdk-for-go/services/preview/hdinsight/mgmt/2015-03-01-preview/hdinsight#ConfigurationsClient.Get) will **no longer return sensitive parameters** like storage keys (core-site) or HTTP credentials (gateway).
- - To retrieve all configurations, including sensitive parameters, use [`ConfigurationsClient.list`](https://godoc.org/github.com/Azure/azure-sdk-for-go/services/preview/hdinsight/mgmt/2015-03-01-preview/hdinsight#ConfigurationsClient.List) going forward.ΓÇ» Note that users with the 'Reader' role will not be able to use this method. This allows for granular control over which users can access sensitive information for a cluster.
+ - To retrieve all configurations, including sensitive parameters, use [`ConfigurationsClient.list`](https://godoc.org/github.com/Azure/azure-sdk-for-go/services/preview/hdinsight/mgmt/2015-03-01-preview/hdinsight#ConfigurationsClient.List) going forward. Users with the 'Reader' role are not able to use this method. This restriction allows for granular control over which users can access sensitive information for a cluster.
- To retrieve just HTTP gateway credentials, use [`ClustersClient.get_gateway_settings`](https://godoc.org/github.com/Azure/azure-sdk-for-go/services/preview/hdinsight/mgmt/2015-03-01-preview/hdinsight#ClustersClient.GetGatewaySettings). - [`ConfigurationsClient.update`](https://godoc.org/github.com/Azure/azure-sdk-for-go/services/preview/hdinsight/mgmt/2015-03-01-preview/hdinsight#ConfigurationsClient.Update) is now deprecated and has been replaced by [`ClustersClient.update_gateway_settings`](https://godoc.org/github.com/Azure/azure-sdk-for-go/services/preview/hdinsight/mgmt/2015-03-01-preview/hdinsight#ClustersClient.UpdateGatewaySettings).
Update to [version 27.1.0](https://github.com/Azure/azure-sdk-for-go/tree/main/s
Update to [Az PowerShell version 2.0.0](https://www.powershellgallery.com/packages/Az) or later to avoid interruptions. Minimal code modifications may be required if you are using a method affected by these changes. - `Grant-AzHDInsightHttpServicesAccess` is now deprecated and has been replaced by the new `Set-AzHDInsightGatewayCredential` cmdlet. - `Get-AzHDInsightJobOutput` has been updated to support granular role-based access to the storage key.
- - Users with HDInsight Cluster Operator, Contributor, or Owner roles will not be affected.
- - Users with only the Reader role will need to specify the `DefaultStorageAccountKey` parameter explicitly.
+ - Users with HDInsight Cluster Operator, Contributor, or Owner roles are not affected.
+ - Users with only the Reader role need to specify the `DefaultStorageAccountKey` parameter explicitly.
- `Revoke-AzHDInsightHttpServicesAccess` is now deprecated. HTTP is now always enabled, so this cmdlet is no longer needed. See the [az.HDInsight migration guide](https://github.com/Azure/azure-powershell/blob/master/documentation/migration-guides/Az.2.0.0-migration-guide.md#azhdinsight) for more details.
A user with the [Owner](../role-based-access-control/built-in-roles.md#owner) ro
The simplest way to add this role assignment is by using the `az role assignment create` command in Azure CLI. > [!NOTE]
-> This command must be run by a user with the Owner role, as only they can grant these permissions. The `--assignee` is the name of the service principal or email address of the user to whom you want to assign the HDInsight Cluster Operator role. If you receive an insufficient permissions error, see the FAQ below.
+> This command must be run by a user with the Owner role, as only they can grant these permissions. The `--assignee` is the name of the service principal or email address of the user to whom you want to assign the HDInsight Cluster Operator role. If you receive an insufficient permissions error, see the FAQ.
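
For illustration, here's a hedged sketch of that command; the assignee, subscription, resource group, and cluster name are placeholders, and the `Microsoft.HDInsight/clusters` scope path is an assumption about the cluster's resource ID format:

```bash
# Assign the HDInsight Cluster Operator role scoped to a single cluster (all values are placeholders)
az role assignment create \
  --role "HDInsight Cluster Operator" \
  --assignee "user@contoso.com" \
  --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.HDInsight/clusters/<cluster-name>"
```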
#### Grant role at the resource (cluster) level
Cluster configurations are now behind granular role-based access control and req
In addition to having the Owner role, the user or service principal executing the command needs to have sufficient Azure AD permissions to look up the object IDs of the assignee. This message indicates insufficient Azure AD permissions. Try replacing the `--assignee` argument with `--assignee-object-id` and provide the object ID of the assignee as the parameter instead of the name (or the principal ID in the case of a managed identity). See the optional parameters section of the [az role assignment create documentation](/cli/azure/role/assignment#az-role-assignment-create) for more info.
-If this still doesn't work, contact your Azure AD admin to acquire the correct permissions.
+If it still does not work, contact your Azure AD admin to acquire the correct permissions.
### What will happen if I take no action? Beginning on September 3, 2019, `GET /configurations` and `POST /configurations/gateway` calls will no longer return any information and the `GET /configurations/{configurationName}` call will no longer return sensitive parameters, such as storage account keys or the cluster password. The same is true of corresponding SDK methods and PowerShell cmdlets.
-If you are using an older version of one of the tools for Visual Studio, VSCode, IntelliJ or Eclipse mentioned above, they will no longer function until you update.
+If you're using an older version of one of the tools for Visual Studio, VSCode, IntelliJ, or Eclipse mentioned earlier, it no longer functions until you update.
For more detailed information, see the corresponding section of this document for your scenario.
hdinsight Network Virtual Appliance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/network-virtual-appliance.md
Title: Configure network virtual appliance in Azure HDInsight
-description: Learn how to configure a number of additional features for your network virtual appliance in Azure HDInsight.
+description: Learn how to configure extra features for your network virtual appliance in Azure HDInsight.
Previously updated : 08/30/2022 Last updated : 09/20/2023 # Configure network virtual appliance in Azure HDInsight
Last updated 08/30/2022
> [!Important] > The following information is **only** required if you wish to configure a network virtual appliance (NVA) other than [Azure Firewall](./hdinsight-restrict-outbound-traffic.md).
-Azure Firewall FQDN tag is automatically configured to allow traffic for many of the common important FQDNs. Using another network virtual appliance will require you to configure a number of additional features. Keep the following factors in mind as you configure your network virtual appliance:
+Azure Firewall FQDN tag is automatically configured to allow traffic for many of the common important FQDNs. Using another network virtual appliance requires you to configure extra features. Keep the following factors in mind as you configure your network virtual appliance:
* Service Endpoint capable services can be configured with service endpoints that result in bypassing the NVA, usually for cost or performance considerations. * If ResourceProviderConnection is set to *outbound*, you can use private endpoints for the storage and SQL servers for metastores, and there's no need to add them to the NVA.
Azure Firewall FQDN tag is automatically configured to allow traffic for many of
## Service endpoint capable dependencies
-You can optionally enable one or more of the following service endpoints which will result in bypassing the NVA. This option can be useful for large amounts of data transfers to save on cost and also for performance optimizations.
+You can optionally enable one or more of the following service endpoints, which result in bypassing the NVA. This option can be useful for large amounts of data transfers to save on cost and also for performance optimizations.
| **Endpoint** | ||
You can optionally enable one or more of the following service endpoints which w
| **Endpoint** | **Details** | |||
-| IPs published [here](hdinsight-management-ip-addresses.md) | These IPs are for HDInsight resource provider and should be included in the UDR to avoid asymmetric routing. This rule is only needed if the ResourceProviderConnection is set to *Inbound*. If the ResourceProviderConnection is set to *Outbound* then these IPs are not needed in the UDR. |
-| AAD-DS private IPs | Only needed for ESP clusters, if the VNETs are not peered.|
+| IPs published [here](hdinsight-management-ip-addresses.md) | These IPs are for HDInsight resource provider and should be included in the UDR to avoid asymmetric routing. This rule is only needed if the ResourceProviderConnection is set to *Inbound*. If the ResourceProviderConnection is set to *Outbound*, then these IPs are not needed in the UDR. |
+| AAD-DS private IPs | Only needed for ESP clusters, if the VNETs are not peered.|
### FQDN HTTP/HTTPS dependencies
-You can get the list of dependent FQDNs (mostly Azure Storage and Azure Service Bus) for configuring your network virtual appliance [in this repo](https://github.com/Azure-Samples/hdinsight-fqdn-lists/). For the regional list see [here](https://github.com/Azure-Samples/hdinsight-fqdn-lists/tree/main/Public). These dependencies are used by HDInsight resource provider(RP) to create and monitor/manage clusters successfully. These include telemetry/diagnostic logs, provisioning metadata, cluster-related configurations, scripts, etc. This FQDN dependency list might change with releasing future HDInsight updates.
+You can get the list of dependent FQDNs (mostly Azure Storage and Azure Service Bus) for configuring your network virtual appliance [in this repo](https://github.com/Azure-Samples/hdinsight-fqdn-lists/). For the regional list, see [here](https://github.com/Azure-Samples/hdinsight-fqdn-lists/tree/main/Public). These dependencies are used by the HDInsight resource provider (RP) to create and monitor/manage clusters successfully. These include telemetry/diagnostic logs, provisioning metadata, cluster-related configurations, scripts, etc. This FQDN dependency list might change in future HDInsight updates.
-The list below only gives a few FQDNs that may be needed for OS and security patching or certificate validations during the cluster create process and during the lifetime of cluster operations:
+The following list gives a few FQDNs that may be needed for OS and security patching or certificate validations during the cluster create process and during the lifetime of cluster operations:
| **Runtime Dependencies FQDNs** | ||
hdinsight Troubleshoot Debug Wasb https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/spark/troubleshoot-debug-wasb.md
Title: Debug WASB file operations in Azure HDInsight
description: Describes troubleshooting steps and possible resolutions for issues when interacting with Azure HDInsight clusters. Previously updated : 07/19/2022 Last updated : 09/19/2023 # Debug WASB file operations in Azure HDInsight There are times when you may want to understand what operations the WASB driver started with Azure Storage. For the client side, the WASB driver produces logs for each file system operation at **DEBUG** level. WASB driver uses log4j to control logging level and the default is **INFO** level. For Azure Storage server-side analytics logs, see [Azure Storage analytics logging](../../storage/common/storage-analytics-logging.md).
-A produced log will look similar to:
+A produced log looks similar to:
```log 18/05/13 04:15:55 DEBUG NativeAzureFileSystem: Moving wasb://xxx@yyy.blob.core.windows.net/user/livy/ulysses.txt/_temporary/0/_temporary/attempt_20180513041552_0000_m_000000_0/part-00000 to wasb://xxx@yyy.blob.core.windows.net/user/livy/ulysses.txt/part-00000
A produced log will look similar to:
## Additional logging
-The above logs should provide high-level understanding of the file system operations. If the above logs are still not providing useful information, or if you want to investigate blob storage api calls, add `fs.azure.storage.client.logging=true` to the `core-site`. This setting will enable the Java sdk logs for wasb storage driver and will print each call to blob storage server. Remove the setting after investigations because it could fill up the disk quickly and could slow down the process.
+The preceding logs should provide a high-level understanding of the file system operations. If they still don't provide useful information, or if you want to investigate Blob storage API calls, add `fs.azure.storage.client.logging=true` to the `core-site`. This setting enables the Java SDK logs for the WASB storage driver and prints each call to the blob storage server. Remove the setting after the investigation because it could fill up the disk quickly and slow down the process.
If the backend is Azure Data Lake based, then use the following log4j setting for the component (for example, spark/tez/hdfs):
healthcare-apis Purge History https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/purge-history.md
Title: Purge history operation for Azure API for FHIR
+ Title: History Management in Azure API for FHIR
description: This article describes the $purge-history operation for Azure API for FHIR.
Last updated 09/27/2023
-# Purge history operation for Azure API for FHIR
+# History management for Azure API for FHIR
[!INCLUDE [retirement banner](../includes/healthcare-apis-azure-api-fhir-retirement.md)]
-`$purge-history` is an operation that allows you to delete the history of a single FHIR resource. This operation isn't defined in the FHIR specification.
+History in FHIR gives you the ability to see all previous versions of a resource. History in FHIR can be queried at the resource level, type level, or system level. The HL7 FHIR documentation has more information about the [history interaction](https://www.hl7.org/fhir/http.html#history). History is useful in scenarios where you want to see the evolution of a resource in FHIR or if you want to see the information of a resource at a specific point in time.
+
+All past versions of a resource are considered obsolete and the current version of a resource should be used for normal business workflow operations. However, it can be useful to see the state of a resource as a point in time when a past decision was made.
+
+Azure API for FHIR allows you to manage history in the following ways:
+1. Disabling history
+   To disable history, create a one-time support ticket. After the disable history configuration is set, history isn't created for resources on the FHIR server, but the resource version is still incremented.
+   Disabling history won't remove the existing history for any resources in your FHIR service. If you want to delete the existing history data in your FHIR service, you must use the `$purge-history` operation.
+
+1. Purge History: `$purge-history` is an operation that allows you to delete the history of a single FHIR resource. This operation isn't defined in the FHIR specification.
## Overview of purge history
For example:
```http DELETE https://workspace-fhir.fhir.azurehealthcareapis.com/Observation/123/$purge-history ```- ## Next steps In this article, you learned how to purge the history for resources in Azure API for FHIR. For more information about Azure API for FHIR, see
In this article, you learned how to purge the history for resources in Azure API
>[!div class="nextstepaction"] >[FHIR REST API capabilities for Azure API for FHIR](fhir-rest-api-capabilities.md)
-FHIR&#174; is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7.
+FHIR&#174; is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7.
healthcare-apis Troubleshoot Errors Deployment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/troubleshoot-errors-deployment.md
Here's a list of errors that can be found in the Azure Resource Manager (ARM) AP
|--| |[The maximum number of resource type iotconnectors/fhirdestinations has been reached.](#the-maximum-number-of-resource-type-iotconnectorsdestinations-has-been-reached)| |[The fhirServiceResourceId provided is invalid.](#the-fhirserviceresourceid-provided-is-invalid)|
-|[Ancestor resources must be fully provisioned before a child resource can be provisioned.](#ancestor-resources-must-be-fully-provisioned-before-a-child-resource-can-be-provisioned-1)
-|[The location property of child resources must match the location property of parent resources.](#the-location-property-of-child-resources-must-match-the-location-property-of-parent-resources-1)
+|[Ancestor resources must be fully provisioned before a child resource can be provisioned.](#ancestor-resources-must-be-fully-provisioned-before-a-child-resource-can-be-provisioned-1)|
+|[The location property of child resources must match the location property of parent resources.](#the-location-property-of-child-resources-must-match-the-location-property-of-parent-resources-1)|
### The maximum number of resource type iotconnectors/destinations has been reached
healthcare-apis Troubleshoot Errors Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/troubleshoot-errors-logs.md
The errors' names are listed in the following table, and the fixes for them are
|[InvalidFhirServiceException](#invalidfhirserviceexception)| |[InvalidQuantityFhirValueException](#invalidquantityfhirvalueexception)| |[InvalidTemplateException](#invalidtemplateexception)|
-|[ManagedIdentityCredentialNotFound](#managedidentitycredentialnotfound)
+|[ManagedIdentityCredentialNotFound](#managedidentitycredentialnotfound)|
|[MultipleResourceFoundException](#multipleresourcefoundexception)| |[NormalizationDataMappingException](#normalizationdatamappingexception)| |[PatientDeviceMismatchException](#patientdevicemismatchexception)|
To learn about the MedTech service frequently asked questions (FAQs), see
> [!div class="nextstepaction"] > [Frequently asked questions about the MedTech service](frequently-asked-questions.md)
-FHIR&#174; is a registered trademark of Health Level Seven International, registered in the U.S. Trademark Office and is used with their permission.
+FHIR&#174; is a registered trademark of Health Level Seven International, registered in the U.S. Trademark Office and is used with their permission.
iot-develop Iot Device Selection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-develop/iot-device-selection.md
Title: Azure IoT prototyping device selection list description: This document provides guidance on choosing a hardware device for prototyping Azure IoT solutions.--++ Previously updated : 08/03/2022 Last updated : 09/29/2023 # IoT device selection list
All boards listed support users of all experience levels.
[^1]: *If you're new to hardware programming, for MCU dev work we recommend using VS Code Arduino Extension or VS Code Platform IO Extension. For SBC dev work, you program the device like you would a laptop, that is, on the device itself. The Raspberry Pi supports VS Code development.*
-[^2]: *Devices were chosen based on availability of support resources, common boards used for prototyping and PoCs, and boards that support beginner-friendly IDEs like Arduino IDE and VS Code extensions; for example, Arduino Extension and Platform IO extension. For simplicity, we aimed to keep the total device list <6. Some of these metrics are "squishy," which means that other teams and individuals may have chosen to feature different boards based on their interpretation of the criteria.*
+[^2]: *Devices were chosen based on the availability of support resources, common boards used for prototyping and PoCs, and boards that support beginner-friendly IDEs like Arduino IDE and VS Code extensions; for example, Arduino Extension and Platform IO extension. For simplicity, we aimed to keep the total device list <6. Other teams and individuals may have chosen to feature different boards based on their interpretation of the criteria.*
-[^3]: *For bringing devices to production, you'll likely want to test a PoC with a specific chipset, ST's STM32 or Microchip's Pic-IoT breakout board series, design a custom board that can be manufactured for lower cost than the MCUs and SBCs listed here, or even explore FPGA-based dev kits. You may also want to use a development environment for professional electrical engineering like STM32CubeMX or ARM mBed browser-based programmer.*
+[^3]: *For bringing devices to production, you likely want to test a PoC with a specific chipset, ST's STM32 or Microchip's Pic-IoT breakout board series, design a custom board that can be manufactured for lower cost than the MCUs and SBCs listed here, or even explore FPGA-based dev kits. You may also want to use a development environment for professional electrical engineering like STM32CubeMX or ARM mBed browser-based programmer.*
## Contents
All boards listed support users of all experience levels.
Use this document to better understand IoT terminology, device selection considerations, and to choose an IoT device for prototyping or building a proof-of-concept. We recommend the following procedure:
-1. Read through the 'what to consider when choosing a board' section below to identify needs and constraints.
+1. Read through the 'what to consider when choosing a board' section to identify needs and constraints.
2. Use the Application Selection Visual to identify possible options for your IoT scenario.
Use this document to better understand IoT terminology, device selection conside
### What to consider when choosing a board
-Below are some suggestions for criteria to consider when choosing a device for your IoT prototype.
+Consider the following criteria when choosing a device for your IoT prototype:
- **Microcontroller unit (MCU) or single board computer (SBC)** - An MCU is preferred for single tasks, like gathering and uploading sensor data or machine learning at the edge. MCUs also tend to be lower cost.
- - An SBC is preferred when you need multiple different tasks, like gathering sensor data and controlling another device. It may also be preferred in the early stages when there are many options for possible solutions - an SBC will enable you to try lots of different approaches.
+ - An SBC is preferred when you need multiple different tasks, like gathering sensor data and controlling another device. It may also be preferred in the early stages when there are many options for possible solutions - an SBC enables you to try lots of different approaches.
- **Processing power**
Below are some suggestions for criteria to consider when choosing a device for y
- **Power consumption**
- - **Power**: Consider how much voltage and current the board consumes. Determine if wall power is readily available or if you'll need a battery for your application.
+ - **Power**: Consider how much voltage and current the board consumes. Determine if wall power is readily available or if you need a battery for your application.
- **Connection**: Consider the physical connection to the power source. If you need battery power, check if there's a battery connection port available on the board. If there's no battery connector, seek another comparable board, or consider other ways to add battery power to your device. - **Inputs and outputs** - **Ports and pins**: Consider how many and of what types of ports and I/O pins your project may require.
- * Additional considerations include if your device will be communicating with other sensors or devices. If so, identify how many ports those signals require.
+ * Other considerations include whether your device will communicate with other sensors or devices. If so, identify how many ports those signals require.
- **Protocols**: If you're working with other sensors or devices, consider what hardware communication protocols are required. * For example, you may need CAN, UART, SPI, I2C, or other communication protocols.
Below are some suggestions for criteria to consider when choosing a device for y
- **Networking**: Consider if your device is connected to an external network or if it can be kept behind a router and/or firewall. If your prototype needs to be connected to an externally facing network, we recommend using the Azure Sphere as it is the only reliably secure device.
- - **Peripherals**: Consider if any of the peripherals your device connects to will have wireless protocols (for example, WiFi, BLE).
+ - **Peripherals**: Consider if any of the peripherals your device connects to have wireless protocols (for example, WiFi, BLE).
- **Physical location**: Consider if your device or any of the peripherals it's connected to will be accessible to the public. If so, we recommend making the device physically inaccessible. For example, in a closed, locked box.
Terminology and acronyms are listed in alphabetical order.
## MCU device list
-Following is a comparison table of MCUs in alphabetical order. Please note this is an intentionally brief list, it isn't intended to be exhaustive.
+Following is a comparison table of MCUs in alphabetical order. The list isn't intended to be exhaustive.
>[!NOTE] >This list is for educational purposes only, it is not intended to endorse any products. Prices shown represent the average across multiple distributors and are for illustrative purposes only.
Following is a comparison table of MCUs in alphabetical order. Please note this
## SBC device list
-Following is a comparison table of SBCs in alphabetical order. Note this is an intentionally brief list, it isn't intended to be exhaustive.
+Following is a comparison table of SBCs in alphabetical order. This list isn't intended to be exhaustive.
>[!NOTE] >This list is for educational purposes only, it is not intended to endorse any products. Prices shown represent the average across multiple distributors and are for illustrative purposes only.
iot-dps Quick Create Simulated Device X509 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-dps/quick-create-simulated-device-x509.md
In this section, you prepare a development environment that's used to build the
When specifying the path used with `-Dhsm_custom_lib` in the following command, make sure to use the absolute path to the library in the `cmake` directory you previously created. The path shown assumes that you cloned the C SDK in the root directory of the C drive. If you used another directory, adjust the path accordingly.
- **Windows:**
+ # [Windows](#tab/windows)
```cmd cmake -Duse_prov_client:BOOL=ON -Dhsm_custom_lib=c:/azure-iot-sdk-c/cmake/provisioning_client/samples/custom_hsm_example/Debug/custom_hsm_example.lib .. ```
- **Linux:**
+ # [Linux](#tab/linux)
```bash cmake -Duse_prov_client:BOOL=ON -Dhsm_custom_lib=/home/<USER>/azure-iot-sdk-c/cmake/provisioning_client/samples/custom_hsm_example/custom_hsm_example.a .. ```
+
+ >[!TIP] >If `cmake` doesn't find your C++ compiler, you may get build errors while running the above command. If that happens, try running the command in the [Visual Studio command prompt](/dotnet/framework/tools/developer-command-prompt-for-vs).
iot-hub Authenticate Authorize Azure Ad https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/authenticate-authorize-azure-ad.md
+
+ Title: Control access with Azure Active Directory
+
+description: Understand how Azure IoT Hub uses Azure Active Directory to authenticate identities and authorize access to IoT hubs and devices.
+++++ Last updated : 09/01/2023+++
+# Control access to IoT Hub by using Azure Active Directory
+
+You can use Azure Active Directory (Azure AD) to authenticate requests to Azure IoT Hub service APIs, like **create device identity** and **invoke direct method**. You can also use Azure role-based access control (Azure RBAC) to authorize those same service APIs. By using these technologies together, you can grant permissions to access IoT Hub service APIs to an Azure AD security principal. This security principal could be a user, group, or application service principal.
+
+Authenticating access by using Azure AD and controlling permissions by using Azure RBAC provides improved security and ease of use over security tokens. To minimize potential security issues inherent in security tokens, we recommend that you [enforce Azure AD authentication whenever possible](#enforce-azure-ad-authentication).
+
+> [!NOTE]
+> Authentication with Azure AD isn't supported for the IoT Hub *device APIs* (like device-to-cloud messages and update reported properties). Use [symmetric keys](authenticate-authorize-sas.md) or [X.509](authenticate-authorize-x509.md) to authenticate devices to IoT Hub.
+
+## Authentication and authorization
+
+*Authentication* is the process of proving that you are who you say you are. Authentication verifies the identity of a user or device to IoT Hub. It's sometimes shortened to *AuthN*. *Authorization* is the process of confirming permissions for an authenticated user or device on IoT Hub. It specifies what resources and commands you're allowed to access, and what you can do with those resources and commands. Authorization is sometimes shortened to *AuthZ*.
+
+When an Azure AD security principal requests access to an IoT Hub service API, the principal's identity is first *authenticated*. For authentication, the request needs to contain an OAuth 2.0 access token at runtime. The resource name for requesting the token is `https://iothubs.azure.net`. If the application runs in an Azure resource like an Azure VM, Azure Functions app, or Azure App Service app, it can be represented as a [managed identity](../active-directory/managed-identities-azure-resources/how-managed-identities-work-vm.md).
+
+After the Azure AD principal is authenticated, the next step is *authorization*. In this step, IoT Hub uses the Azure AD role assignment service to determine what permissions the principal has. If the principal's permissions match the requested resource or API, IoT Hub authorizes the request. So this step requires one or more Azure roles to be assigned to the security principal. IoT Hub provides some built-in roles that have common groups of permissions.
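+
+For a quick experiment, you can acquire such a token from a command line. The following Azure CLI sketch is illustrative only; application code typically uses an Azure identity library or a managed identity instead:
+
+```bash
+# Request an OAuth 2.0 access token for the IoT Hub service API resource (illustrative)
+az account get-access-token --resource https://iothubs.azure.net
+```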
+
+## Manage access to IoT Hub by using Azure RBAC role assignment
+
+With Azure AD and RBAC, IoT Hub requires the principal requesting the API to have the appropriate level of permission for authorization. To grant the principal this permission, give it a role assignment.
+
+- If the principal is a user, group, or application service principal, follow the guidance in [Assign Azure roles by using the Azure portal](../role-based-access-control/role-assignments-portal.md).
+- If the principal is a managed identity, follow the guidance in [Assign a managed identity access to a resource by using the Azure portal](../active-directory/managed-identities-azure-resources/howto-assign-access-portal.md).
+
+To ensure least privilege, always assign the appropriate role at the lowest possible [resource scope](#resource-scope), which is probably the IoT Hub scope.
+
+IoT Hub provides the following Azure built-in roles for authorizing access to IoT Hub service APIs by using Azure AD and RBAC:
+
+| Role | Description |
+| - | -- |
+| [IoT Hub Data Contributor](../role-based-access-control/built-in-roles.md#iot-hub-data-contributor) | Allows full access to IoT Hub data plane operations. |
+| [IoT Hub Data Reader](../role-based-access-control/built-in-roles.md#iot-hub-data-reader) | Allows full read access to IoT Hub data plane properties. |
+| [IoT Hub Registry Contributor](../role-based-access-control/built-in-roles.md#iot-hub-registry-contributor) | Allows full access to the IoT Hub device registry. |
+| [IoT Hub Twin Contributor](../role-based-access-control/built-in-roles.md#iot-hub-twin-contributor) | Allows read and write access to all IoT Hub device and module twins. |
+
+You can also define custom roles to use with IoT Hub by combining the [permissions](#permissions-for-iot-hub-service-apis) that you need. For more information, see [Create custom roles for Azure role-based access control](../role-based-access-control/custom-roles.md).
+
+### Resource scope
+
+Before you assign an Azure RBAC role to a security principal, determine the scope of access that the security principal should have. It's always best to grant only the narrowest possible scope. Azure RBAC roles defined at a broader scope are inherited by the resources beneath them.
+
+This list describes the levels at which you can scope access to IoT Hub, starting with the narrowest scope:
+
+- **The IoT hub.** At this scope, a role assignment applies to the IoT hub. There's no scope smaller than an individual IoT hub. Role assignment at smaller scopes, like individual device identity or twin section, isn't supported.
+- **The resource group.** At this scope, a role assignment applies to all IoT hubs in the resource group.
+- **The subscription.** At this scope, a role assignment applies to all IoT hubs in all resource groups in the subscription.
+- **A management group.** At this scope, a role assignment applies to all IoT hubs in all resource groups in all subscriptions in the management group.
+
+## Permissions for IoT Hub service APIs
+
+The following table describes the permissions available for IoT Hub service API operations. To enable a client to call a particular operation, ensure that the client's assigned RBAC role offers sufficient permissions for the operation.
+
+| RBAC action | Description |
+|-|-|
+| `Microsoft.Devices/IotHubs/devices/read` | Read any device or module identity. |
+| `Microsoft.Devices/IotHubs/devices/write` | Create or update any device or module identity. |
+| `Microsoft.Devices/IotHubs/devices/delete` | Delete any device or module identity. |
+| `Microsoft.Devices/IotHubs/twins/read` | Read any device or module twin. |
+| `Microsoft.Devices/IotHubs/twins/write` | Write any device or module twin. |
+| `Microsoft.Devices/IotHubs/jobs/read` | Return a list of jobs. |
+| `Microsoft.Devices/IotHubs/jobs/write` | Create or update any job. |
+| `Microsoft.Devices/IotHubs/jobs/delete` | Delete any job. |
+| `Microsoft.Devices/IotHubs/cloudToDeviceMessages/send/action` | Send a cloud-to-device message to any device. |
+| `Microsoft.Devices/IotHubs/cloudToDeviceMessages/feedback/action` | Receive, complete, or abandon a cloud-to-device message feedback notification. |
+| `Microsoft.Devices/IotHubs/cloudToDeviceMessages/queue/purge/action` | Delete all the pending commands for a device. |
+| `Microsoft.Devices/IotHubs/directMethods/invoke/action` | Invoke a direct method on any device or module. |
+| `Microsoft.Devices/IotHubs/fileUpload/notifications/action` | Receive, complete, or abandon file upload notifications. |
+| `Microsoft.Devices/IotHubs/statistics/read` | Read device and service statistics. |
+| `Microsoft.Devices/IotHubs/configurations/read` | Read device management configurations. |
+| `Microsoft.Devices/IotHubs/configurations/write` | Create or update device management configurations. |
+| `Microsoft.Devices/IotHubs/configurations/delete` | Delete any device management configuration. |
+| `Microsoft.Devices/IotHubs/configurations/applyToEdgeDevice/action` | Apply the configuration content to an edge device. |
+| `Microsoft.Devices/IotHubs/configurations/testQueries/action` | Validate the target condition and custom metric queries for a configuration. |
+
+> [!TIP]
+> - The [Bulk Registry Update](/rest/api/iothub/service/bulkregistry/updateregistry) operation requires both `Microsoft.Devices/IotHubs/devices/write` and `Microsoft.Devices/IotHubs/devices/delete`.
+> - The [Twin Query](/rest/api/iothub/service/query/gettwins) operation requires `Microsoft.Devices/IotHubs/twins/read`.
+> - [Get Digital Twin](/rest/api/iothub/service/digitaltwin/getdigitaltwin) requires `Microsoft.Devices/IotHubs/twins/read`. [Update Digital Twin](/rest/api/iothub/service/digitaltwin/updatedigitaltwin) requires `Microsoft.Devices/IotHubs/twins/write`.
+> - Both [Invoke Component Command](/rest/api/iothub/service/digitaltwin/invokecomponentcommand) and [Invoke Root Level Command](/rest/api/iothub/service/digitaltwin/invokerootlevelcommand) require `Microsoft.Devices/IotHubs/directMethods/invoke/action`.
+
+> [!NOTE]
+> To get data from IoT Hub by using Azure AD, [set up routing to a separate event hub](iot-hub-devguide-messages-d2c.md#event-hubs-as-a-routing-endpoint). To access [the built-in Event Hubs compatible endpoint](iot-hub-devguide-messages-read-builtin.md), use the connection string (shared access key) method as before.
+
+## Enforce Azure AD authentication
+
+By default, IoT Hub supports service API access through both Azure AD and [shared access policies and security tokens](authenticate-authorize-sas.md). To minimize potential security vulnerabilities inherent in security tokens, you can disable access with shared access policies.
+
+ > [!WARNING]
+ > By denying connections that use shared access policies, all users and services that connect using this method lose access immediately. Notably, because the Device Provisioning Service (DPS) only supports linking IoT hubs by using shared access policies, all device provisioning flows fail with an "unauthorized" error. Proceed carefully and plan to replace access with Azure AD role-based access. **Do not proceed if you use DPS**.
+
+1. Ensure that your service clients and users have [sufficient access](#manage-access-to-iot-hub-by-using-azure-rbac-role-assignment) to your IoT hub. Follow the [principle of least privilege](../security/fundamentals/identity-management-best-practices.md).
+1. In the [Azure portal](https://portal.azure.com), go to your IoT hub.
+1. On the left pane, select **Shared access policies**.
+1. Under **Connect using shared access policies**, select **Deny**, and review the warning.
+ :::image type="content" source="media/iot-hub-dev-guide-azure-ad-rbac/disable-local-auth.png" alt-text="Screenshot that shows how to turn off IoT Hub shared access policies." border="true":::
+
+Your IoT Hub service APIs can now be accessed only through Azure AD and RBAC.
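+
+If you prefer to script this setting, the following sketch uses the generic `az resource update` command; the resource ID is a placeholder, and the `disableLocalAuth` property name is an assumption about the IoT Hub resource schema, so verify it against the current Microsoft.Devices/IotHubs API version before relying on it:
+
+```bash
+# Deny shared access policy connections on an IoT hub (resource ID is a placeholder; property name is an assumption)
+az resource update \
+  --ids "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Devices/IotHubs/<iot-hub-name>" \
+  --set properties.disableLocalAuth=true
+```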
+
+## Azure AD access from the Azure portal
+
+You can provide access to IoT Hub from the Azure portal with either shared access policies or Azure AD permissions.
+
+When you try to access IoT Hub from the Azure portal, the Azure portal first checks whether you've been assigned an Azure role with `Microsoft.Devices/iotHubs/listkeys/action`. If you have, the Azure portal uses the keys from shared access policies to access IoT Hub. If not, the Azure portal tries to access data by using your Azure AD account.
+
+To access IoT Hub from the Azure portal by using your Azure AD account, you need permissions to access IoT Hub data resources (like devices and twins). You also need permissions to go to the IoT Hub resource in the Azure portal. The built-in roles provided by IoT Hub grant access to resources like devices and twins, but they don't grant access to the IoT Hub resource. So access to the portal also requires the assignment of an Azure Resource Manager role like [Reader](../role-based-access-control/built-in-roles.md#reader). The Reader role is a good choice because it's the most restricted role that lets you navigate the portal. It doesn't include the `Microsoft.Devices/iotHubs/listkeys/action` permission (which provides access to all IoT Hub data resources via shared access policies).
+
+To ensure an account doesn't have access outside of the assigned permissions, don't include the `Microsoft.Devices/iotHubs/listkeys/action` permission when you create a custom role. For example, to create a custom role that can read device identities but can't create or delete devices, create a custom role that:
+
+- Has the `Microsoft.Devices/IotHubs/devices/read` data action.
+- Doesn't have the `Microsoft.Devices/IotHubs/devices/write` data action.
+- Doesn't have the `Microsoft.Devices/IotHubs/devices/delete` data action.
+- Doesn't have the `Microsoft.Devices/iotHubs/listkeys/action` action.
+
+Then, make sure the account doesn't have any other roles that have the `Microsoft.Devices/iotHubs/listkeys/action` permission, like [Owner](../role-based-access-control/built-in-roles.md#owner) or [Contributor](../role-based-access-control/built-in-roles.md#contributor). To allow the account to have resource access and navigate the portal, assign [Reader](../role-based-access-control/built-in-roles.md#reader).
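+
+A hedged Azure CLI sketch of the custom role described in the preceding example follows; the role name, description, and subscription ID are placeholders:
+
+```bash
+# Create a custom role that can only read device identities (name, description, and subscription ID are placeholders)
+az role definition create --role-definition '{
+  "Name": "IoT Hub Device Identity Reader (custom)",
+  "IsCustom": true,
+  "Description": "Read device identities without create or delete rights.",
+  "Actions": [],
+  "DataActions": [ "Microsoft.Devices/IotHubs/devices/read" ],
+  "NotDataActions": [],
+  "AssignableScopes": [ "/subscriptions/<subscription-id>" ]
+}'
+```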
+
+## Azure AD access from Azure CLI
+
+Most commands against IoT Hub support Azure AD authentication. You can control the type of authentication used to run commands by using the `--auth-type` parameter, which accepts `key` or `login` values. The `key` value is the default.
+
+- When `--auth-type` has the `key` value, as before, the CLI automatically discovers a suitable policy when it interacts with IoT Hub.
+
+- When `--auth-type` has the `login` value, an access token for the signed-in Azure CLI principal is used for the operation, as shown in the following sketch.
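+
+For example, the following hedged sketch lists device identities by using the signed-in principal's Azure AD token rather than a shared access policy; it assumes the Azure IoT extension for Azure CLI is installed, and the hub name is a placeholder:
+
+```bash
+# Data plane call authorized with the Azure CLI signed-in identity (hub name is a placeholder)
+az iot hub device-identity list --hub-name <iot-hub-name> --auth-type login
+```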
+
+For more information, see the [Azure IoT extension for Azure CLI release page](https://github.com/Azure/azure-iot-cli-extension/releases/tag/v0.10.12).
+
+## SDK samples
+
+- [.NET SDK sample](https://github.com/Azure/azure-iot-sdk-csharp/blob/main/iothub/service/samples/how%20to%20guides/RoleBasedAuthenticationSample/Program.cs)
+- [Java SDK sample](https://github.com/Azure/azure-iot-service-sdk-java/tree/main/service/iot-service-samples/role-based-authorization-sample)
+
+## Next steps
+
+- For more information on the advantages of using Azure AD in your application, see [Integrating with Azure Active Directory](../active-directory/develop/how-to-integrate.md).
+- For more information on requesting access tokens from Azure AD for users and service principals, see [Authentication scenarios for Azure AD](../active-directory/develop/authentication-vs-authorization.md).
+
+Use the Device Provisioning Service to [Provision multiple X.509 devices using enrollment groups](../iot-dps/tutorial-custom-hsm-enrollment-group-x509.md).
iot-hub Authenticate Authorize Sas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/authenticate-authorize-sas.md
+
+ Title: Control access with shared access signatures
+
+description: Understand how Azure IoT Hub uses shared access signatures (SAS) to authenticate identities and authorize access to IoT hubs and devices.
++++ Last updated : 09/01/2023+++
+# Control access to IoT Hub with shared access signatures
+
+IoT Hub uses shared access signature (SAS) tokens to authenticate devices and services to avoid sending keys on the wire. You use SAS tokens to grant time-bounded access to devices and services to specific functionality in IoT Hub. To get authorization to connect to IoT Hub, devices and services must send SAS tokens signed with either a shared access or symmetric key. Symmetric keys are stored with a device identity in the identity registry.
+
+This article introduces:
+
+* The different permissions that you can grant to a client to access your IoT hub.
+* The tokens IoT Hub uses to verify permissions.
+* How to scope credentials to limit access to specific resources.
+* Custom device authentication mechanisms that use existing device identity registries or authentication schemes.
++
+IoT Hub uses *permissions* to grant access to each IoT hub endpoint. Permissions limit access to an IoT hub based on functionality. You must have appropriate permissions to access any of the IoT Hub endpoints. For example, a device must include a token containing security credentials along with every message it sends to IoT Hub. However, the signing keys, like the device symmetric keys, are never sent over the wire.
+
+## Authentication and authorization
+
+*Authentication* is the process of proving that you are who you say you are. Authentication verifies the identity of a user or device to IoT Hub. It's sometimes shortened to *AuthN*. *Authorization* is the process of confirming permissions for an authenticated user or device on IoT Hub. It specifies what resources and commands you're allowed to access, and what you can do with those resources and commands. Authorization is sometimes shortened to *AuthZ*.
+
+This article describes authentication and authorization using **Shared access signatures**, which lets you group permissions and grant them to applications using access keys and signed security tokens. You can also use symmetric keys or shared access keys to authenticate a device with IoT Hub. SAS tokens provide authentication for each call made by the device to IoT Hub by associating the symmetric key to each call.
+
+## Access control and permissions
+
+Use shared access policies for IoT hub-level access, and use the individual device credentials to scope access to that device only.
+
+### IoT hub-level shared access policies
+
+Shared access policies can grant any combination of permissions. You can define policies in the [Azure portal](https://portal.azure.com), programmatically by using the [IoT Hub Resource REST APIs](/rest/api/iothub/iothubresource), or using the Azure CLI [az iot hub policy](/cli/azure/iot/hub/policy) command. A newly created IoT hub has the following default policies:
+
+| Shared Access Policy | Permissions |
+| -- | -- |
+| iothubowner | All permissions |
+| service | **ServiceConnect** permissions |
+| device | **DeviceConnect** permissions |
+| registryRead | **RegistryRead** permissions |
+| registryReadWrite | **RegistryRead** and **RegistryWrite** permissions |
+
+You can use the following permissions to control access to your IoT hub:
+
+* The **ServiceConnect** permission is used by back-end cloud services and grants the following access:
+ * Access to cloud service-facing communication and monitoring endpoints.
+ * Receive device-to-cloud messages, send cloud-to-device messages, and retrieve the corresponding delivery acknowledgments.
+ * Retrieve delivery acknowledgments for file uploads.
+ * Access twins to update tags and desired properties, retrieve reported properties, and run queries.
+
+* The **DeviceConnect** permission is used by devices and grants the following access:
+ * Access to device-facing endpoints.
+ * Send device-to-cloud messages and receive cloud-to-device messages.
+ * Perform file upload.
+ * Receive device twin desired property notifications and update device twin reported properties.
+
+* The **RegistryRead** permission is used by back-end cloud services and grants the following access:
+ * Read access to the identity registry. For more information, see [Identity registry](iot-hub-devguide-identity-registry.md).
+
+* The **RegistryReadWrite** permission is used by back-end cloud services and grants the following access:
+ * Read and write access to the identity registry. For more information, see [Identity registry](iot-hub-devguide-identity-registry.md).
+
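+For illustration, here's a hedged Azure CLI sketch that creates a shared access policy combining two of these permissions; the hub and policy names are placeholders:
+
+```bash
+# Create a policy with ServiceConnect and RegistryRead permissions (hub and policy names are placeholders)
+az iot hub policy create --hub-name <iot-hub-name> --name serviceWithRegistryRead --permissions ServiceConnect RegistryRead
+```
+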
+### Per-device security credentials
+
+Every IoT hub has an identity registry that stores information about the devices and modules permitted to connect to it. Before a device or module can connect, there must be an entry for that device or module in the IoT hub's identity registry. A device or module authenticates with the IoT hub based on credentials stored in the identity registry.
+
+When you register a device to use SAS token authentication, that device gets two *symmetric keys*. Symmetric keys grant the **DeviceConnect** permission for the associated device identity.
+
+## Use SAS tokens from services
+
+Services can generate SAS tokens by using a shared access policy that defines the appropriate permissions as explained previously in the [Access control and permissions](#access-control-and-permissions) section.
+
+As an example, a service using the precreated shared access policy called **registryRead** would create a token with the following parameters:
+
+* resource URI: `{IoT hub name}.azure-devices.net`,
+* signing key: one of the keys of the `registryRead` policy,
+* policy name: `registryRead`,
+* any expiration time.
+
+For example, the following code creates a SAS token in Node.js:
+
+```javascript
+var endpoint = "myhub.azure-devices.net";
+var policyName = 'registryRead';
+var policyKey = '...';
+
+var token = generateSasToken(endpoint, policyKey, policyName, 60);
+```
+
+The result, which grants access to read all device identities in the identity registry, would be:
+
+`SharedAccessSignature sr=myhub.azure-devices.net&sig=JdyscqTpXdEJs49elIUCcohw2DlFDR3zfH5KqGJo4r4%3D&se=1456973447&skn=registryRead`
+
+For more examples, see [Generate SAS tokens](#generate-sas-tokens).
+
+For services, SAS tokens only grant permissions at the IoT Hub level. That is, a service authenticating with a token based on the **service** policy will be able to perform all the operations granted by the **ServiceConnect** permission. These operations include receiving device-to-cloud messages, sending cloud-to-device messages, and so on. If you want to grant more granular access to your services, for example, limiting a service to only sending cloud-to-device messages, you can use Azure Active Directory. To learn more, see [Authenticate with Azure AD](authenticate-authorize-azure-ad.md).
+
+## Use SAS tokens from devices
+
+There are two ways to obtain **DeviceConnect** permissions with IoT Hub with SAS tokens: use a [symmetric device key from the identity registry](#use-a-symmetric-key-in-the-identity-registry), or use a [shared access key](#use-a-shared-access-policy-to-access-on-behalf-of-a-device).
+
+All functionality accessible from devices is exposed by design on endpoints with the prefix `/devices/{deviceId}`.
+
+The device-facing endpoints are (irrespective of the protocol):
+
+| Endpoint | Functionality |
+| | |
+| `{iot hub name}/devices/{deviceId}/messages/events` |Send device-to-cloud messages. |
+| `{iot hub name}/devices/{deviceId}/messages/devicebound` |Receive cloud-to-device messages. |
+
+### Use a symmetric key in the identity registry
+
+When using a device identity's symmetric key to generate a token, the policyName (`skn`) element of the token is omitted.
+
+For example, a token created to access all device functionality should have the following parameters:
+
+* resource URI: `{IoT hub name}.azure-devices.net/devices/{device id}`,
+* signing key: any symmetric key for the `{device id}` identity,
+* no policy name,
+* any expiration time.
+
+For example, the following code creates a SAS token in Node.js:
+
+```javascript
+var endpoint ="myhub.azure-devices.net/devices/device1";
+var deviceKey ="...";
+
+var token = generateSasToken(endpoint, deviceKey, null, 60);
+```
+
+The result, which grants access to all functionality for device1, would be:
+
+`SharedAccessSignature sr=myhub.azure-devices.net%2fdevices%2fdevice1&sig=13y8ejUk2z7PLmvtwR5RqlGBOVwiq7rQR3WZ5xZX3N4%3D&se=1456971697`
+
+For more examples, see [Generate SAS tokens](#generate-sas-tokens).
+
+### Use a shared access policy to access on behalf of a device
+
+When you create a token from a shared access policy, set the `skn` field to the name of the policy. This policy must grant the **DeviceConnect** permission.
+
+The two main scenarios for using shared access policies to access device functionality are:
+
+* [cloud protocol gateways](iot-hub-devguide-endpoints.md),
+* [token services](#create-a-token-service-to-integrate-existing-devices) used to implement custom authentication schemes.
+
+Since the shared access policy can potentially grant access to connect as any device, it is important to use the correct resource URI when creating SAS tokens. This setting is especially important for token services, which have to scope the token to a specific device using the resource URI. This point is less relevant for protocol gateways as they are already mediating traffic for all devices.
+
+As an example, a token service using the precreated shared access policy called **device** would create a token with the following parameters:
+
+* resource URI: `{IoT hub name}.azure-devices.net/devices/{device id}`,
+* signing key: one of the keys of the `device` policy,
+* policy name: `device`,
+* any expiration time.
+
+For example, the following code creates a SAS token in Node.js:
+
+```javascript
+var endpoint ="myhub.azure-devices.net/devices/device1";
+var policyName = 'device';
+var policyKey = '...';
+
+var token = generateSasToken(endpoint, policyKey, policyName, 60);
+```
+
+The result, which grants access to all functionality for device1, would be:
+
+`SharedAccessSignature sr=myhub.azure-devices.net%2fdevices%2fdevice1&sig=13y8ejUk2z7PLmvtwR5RqlGBOVwiq7rQR3WZ5xZX3N4%3D&se=1456971697&skn=device`
+
+A protocol gateway could use the same token for all devices by setting the resource URI to `myhub.azure-devices.net/devices`.
+
+For more examples, see [Generate SAS tokens](#generate-sas-tokens).
+
+## Create a token service to integrate existing devices
+
+You can use the IoT Hub [identity registry](iot-hub-devguide-identity-registry.md) to configure per-device or per-module security credentials and access control using tokens. If an IoT solution already has a custom identity registry and/or authentication scheme, consider creating a *token service* to integrate this infrastructure with IoT Hub. In this way, you can use other IoT features in your solution.
+
+A token service is a custom cloud service. It uses an IoT Hub *shared access policy* with the **DeviceConnect** permission to create *device-scoped* or *module-scoped* tokens. These tokens enable a device or module to connect to your IoT hub.
+
+![Diagram that shows the steps of the token service pattern.](./media/iot-hub-devguide-security/tokenservice.png)
+
+Here are the main steps of the token service pattern:
+
+1. Create an IoT Hub shared access policy with the **DeviceConnect** permission for your IoT hub. You can create this policy in the Azure portal or programmatically. The token service uses this policy to sign the tokens it creates.
+
+2. When a device or module needs to access your IoT hub, it requests a signed token from your token service. The device can authenticate with your custom identity registry/authentication scheme to determine the device/module identity that the token service uses to create the token.
+
+3. The token service returns a token. The token is created by using `/devices/{deviceId}` or `/devices/{deviceId}/modules/{moduleId}` as `resourceURI`, with `deviceId` as the device being authenticated and `moduleId` as the module being authenticated. The token service uses the shared access policy to construct the token.
+
+4. The device/module uses the token directly with the IoT hub.
+
+> [!NOTE]
+> You can use the .NET class [SharedAccessSignatureBuilder](/dotnet/api/microsoft.azure.devices.common.security.sharedaccesssignaturebuilder) or the Java class [IotHubServiceSasToken](/java/api/com.microsoft.azure.sdk.iot.service.auth.iothubservicesastoken) to create a token in your token service.
+
+The token service can set the token expiration as desired. When the token expires, the IoT hub severs the device/module connection. Then, the device/module must request a new token from the token service. A short expiry time increases the load on both the device/module and the token service.
+
+For a device/module to connect to your hub, you must still add it to the IoT Hub identity registry, even though it's using a token and not a key to connect. Therefore, you can continue to use per-device/per-module access control by enabling or disabling device/module identities in the identity registry. This approach mitigates the risks of using tokens with long expiry times.
+
+### Comparison with a custom gateway
+
+The token service pattern is the recommended way to implement a custom identity registry/authentication scheme with IoT Hub. This pattern is recommended because IoT Hub continues to handle most of the solution traffic. However, if the custom authentication scheme is deeply intertwined with the protocol, you may require a *custom gateway* to process all the traffic. An example of such a scenario is using [Transport Layer Security (TLS) and preshared keys (PSKs)](https://tools.ietf.org/html/rfc4279). For more information, see [How an IoT Edge device can be used as a gateway](../iot-edge/iot-edge-as-gateway.md).
+
+## Generate SAS tokens
+
+Azure IoT SDKs automatically generate tokens, but some scenarios do require you to generate and use SAS tokens directly, including:
+
+* The direct use of the MQTT, AMQP, or HTTPS surfaces.
+
+* The implementation of the token service pattern, as explained in the [Create a token service](#create-a-token-service-to-integrate-existing-devices) section.
+
+A token signed with a shared access key grants access to all the functionality associated with the shared access policy permissions. A token signed with a device identity's symmetric key only grants the **DeviceConnect** permission for the associated device identity.
+
+This section provides examples of generating SAS tokens in different code languages. You can also generate SAS tokens with the CLI extension command [az iot hub generate-sas-token](/cli/azure/iot/hub#az-iot-hub-generate-sas-token), or the [Azure IoT Hub extension for Visual Studio Code](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-toolkit).
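+
+For example, a hedged sketch of the CLI extension command follows; the hub and device names are placeholders, and the duration is assumed to be in seconds:
+
+```bash
+# Generate a device-scoped SAS token valid for one hour (names are placeholders; requires the Azure IoT extension)
+az iot hub generate-sas-token --hub-name <iot-hub-name> --device-id <device-id> --duration 3600
+```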
+
+### SAS token structure
+
+A SAS token has the following format:
+
+`SharedAccessSignature sig={signature-string}&se={expiry}&skn={policyName}&sr={URL-encoded-resourceURI}`
+
+Here are the expected values:
+
+| Value | Description |
+| | |
+| {signature} |An HMAC-SHA256 signature string of the form: `{URL-encoded-resourceURI} + "\n" + expiry`. **Important**: The key is decoded from base64 and used as the key to perform the HMAC-SHA256 computation. |
+| {resourceURI} |URI prefix (by segment) of the endpoints that can be accessed with this token, starting with host name of the IoT hub (no protocol). SAS tokens granted to backend services are scoped to the IoT hub level; for example, `myHub.azure-devices.net`. SAS tokens granted to devices must be scoped to an individual device; for example, `myHub.azure-devices.net/devices/device1`. |
+| {expiry} |UTF8 string for the number of seconds since the epoch 00:00:00 UTC on 1 January 1970. |
+| {URL-encoded-resourceURI} |Lower case URL-encoding of the lower case resource URI |
+| {policyName} |The name of the shared access policy to which this token refers. Absent if the token refers to device-registry credentials. |
+
+The URI prefix is computed by segment and not by character. For example, `/a/b` is a prefix for `/a/b/c` but not for `/a/bc`.
+
+### [Node.js](#tab/node)
+
+The following code generates a SAS token using the resource URI, signing key, policy name, and expiration period. The next sections detail how to initialize the different inputs for the different token use cases.
+
+```javascript
+var generateSasToken = function(resourceUri, signingKey, policyName, expiresInMins) {
+ resourceUri = encodeURIComponent(resourceUri);
+
+ // Set expiration in seconds
+ var expires = (Date.now() / 1000) + expiresInMins * 60;
+ expires = Math.ceil(expires);
+ var toSign = resourceUri + '\n' + expires;
+
+ // Use crypto
+ var hmac = crypto.createHmac('sha256', Buffer.from(signingKey, 'base64'));
+ hmac.update(toSign);
+ var base64UriEncoded = encodeURIComponent(hmac.digest('base64'));
+
+ // Construct authorization string
+ var token = "SharedAccessSignature sr=" + resourceUri + "&sig="
+ + base64UriEncoded + "&se=" + expires;
+ if (policyName) token += "&skn="+policyName;
+ return token;
+};
+```
+
+### [Python](#tab/python)
+
+The following code generates a SAS token using the resource URI, signing key, policy name, and expiration period. The next sections detail how to initialize the different inputs for the different token use cases.
+
+```python
+from base64 import b64encode, b64decode
+from hashlib import sha256
+from time import time
+from urllib import parse
+from hmac import HMAC
+
+def generate_sas_token(uri, key, policy_name, expiry=3600):
+    ttl = time() + expiry
+    sign_key = "%s\n%d" % ((parse.quote_plus(uri)), int(ttl))
+    signature = b64encode(HMAC(b64decode(key), sign_key.encode('utf-8'), sha256).digest())
+
+    rawtoken = {
+        'sr' : uri,
+        'sig': signature.decode('utf-8'),
+        'se' : str(int(ttl))
+    }
+
+    if policy_name is not None:
+        rawtoken['skn'] = policy_name
+
+    return 'SharedAccessSignature ' + parse.urlencode(rawtoken)
+```
+
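+For illustration, here's a hypothetical example of calling `generate_sas_token` for the two token scopes described earlier. The host name, device ID, key values, and policy name are placeholders, not values from this article.
+
+```python
+# Hypothetical values for illustration only; replace with your own.
+hub_host = "myhub.azure-devices.net"
+device_key = "<base64-encoded-device-symmetric-key>"
+policy_key = "<base64-encoded-shared-access-policy-key>"
+
+# Device-scoped token: signed with the device's symmetric key, no policy name.
+device_token = generate_sas_token(f"{hub_host}/devices/device1", device_key, None)
+
+# Hub-scoped token: signed with a shared access policy key; the policy name is included.
+service_token = generate_sas_token(hub_host, policy_key, "service")
+```
+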
+### [C#](#tab/csharp)
+
+The following code generates a SAS token using the resource URI, signing key, policy name, and expiration period. The next sections detail how to initialize the different inputs for the different token use cases.
+
+```csharp
+using System;
+using System.Globalization;
+using System.Net;
+using System.Net.Http;
+using System.Security.Cryptography;
+using System.Text;
+
+public static string GenerateSasToken(string resourceUri, string key, string policyName, int expiryInSeconds = 3600)
+{
+ TimeSpan fromEpochStart = DateTime.UtcNow - new DateTime(1970, 1, 1);
+ string expiry = Convert.ToString((int)fromEpochStart.TotalSeconds + expiryInSeconds);
+
+ string stringToSign = WebUtility.UrlEncode(resourceUri) + "\n" + expiry;
+
+ HMACSHA256 hmac = new HMACSHA256(Convert.FromBase64String(key));
+ string signature = Convert.ToBase64String(hmac.ComputeHash(Encoding.UTF8.GetBytes(stringToSign)));
+
+ string token = String.Format(CultureInfo.InvariantCulture, "SharedAccessSignature sr={0}&sig={1}&se={2}", WebUtility.UrlEncode(resourceUri), WebUtility.UrlEncode(signature), expiry);
+
+ if (!String.IsNullOrEmpty(policyName))
+ {
+ token += "&skn=" + policyName;
+ }
+
+ return token;
+}
+```
+
+### [Java](#tab/java)
+
+The following code generates a SAS token using the resource URI and signing key. The expiration period is set to one hour from the current time. The next sections detail how to initialize the different inputs for the different token use cases.
+
+```java
+// Imports required by this method (place the method inside a class in your own code)
+import java.net.URLEncoder;
+import java.nio.charset.StandardCharsets;
+import java.time.Instant;
+import java.util.Base64;
+
+import javax.crypto.Mac;
+import javax.crypto.spec.SecretKeySpec;
+
+public static String generateSasToken(String resourceUri, String key) throws Exception {
+ // Token will expire in one hour
+ var expiry = Instant.now().getEpochSecond() + 3600;
+
+ String stringToSign = URLEncoder.encode(resourceUri, StandardCharsets.UTF_8) + "\n" + expiry;
+ byte[] decodedKey = Base64.getDecoder().decode(key);
+
+ Mac sha256HMAC = Mac.getInstance("HmacSHA256");
+ SecretKeySpec secretKey = new SecretKeySpec(decodedKey, "HmacSHA256");
+ sha256HMAC.init(secretKey);
+ Base64.Encoder encoder = Base64.getEncoder();
+
+ String signature = new String(encoder.encode(
+ sha256HMAC.doFinal(stringToSign.getBytes(StandardCharsets.UTF_8))), StandardCharsets.UTF_8);
+
+ String token = "SharedAccessSignature sr=" + URLEncoder.encode(resourceUri, StandardCharsets.UTF_8)
+ + "&sig=" + URLEncoder.encode(signature, StandardCharsets.UTF_8.name()) + "&se=" + expiry;
+
+ return token;
+}
+```
++
+### Protocol specifics
+
+Each supported protocol, such as MQTT, AMQP, and HTTPS, transports tokens in different ways.
+
+When using MQTT, the CONNECT packet has the deviceId as the ClientId, `{iothubhostname}/{deviceId}` in the Username field, and a SAS token in the Password field. `{iothubhostname}` should be the full CName of the IoT hub (for example, myhub.azure-devices.net).
+
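+As an illustration of this field mapping, here's a minimal, hypothetical sketch that uses the open-source Paho MQTT client for Python (`paho-mqtt` 1.x) to connect with a SAS token. The host name, device ID, and token are placeholders; the `devices/{deviceId}/messages/events/` topic used here is the IoT Hub device-to-cloud telemetry topic, which isn't covered in this article.
+
+```python
+# Minimal sketch (pip install "paho-mqtt<2"): connect to IoT Hub over MQTT with a SAS token.
+# The host name, device ID, and SAS token below are placeholders.
+import ssl
+import paho.mqtt.client as mqtt
+
+hub_host = "myhub.azure-devices.net"
+device_id = "device1"
+sas_token = "SharedAccessSignature sr=...&sig=...&se=..."
+
+client = mqtt.Client(client_id=device_id, protocol=mqtt.MQTTv311)
+client.username_pw_set(username=f"{hub_host}/{device_id}", password=sas_token)
+client.tls_set_context(ssl.create_default_context())  # TLS is required on port 8883
+
+client.connect(hub_host, port=8883)
+client.loop_start()
+# Publish a device-to-cloud message, then shut down cleanly.
+client.publish(f"devices/{device_id}/messages/events/", payload="test message").wait_for_publish()
+client.loop_stop()
+client.disconnect()
+```
+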
+When using [AMQP](https://www.amqp.org/), IoT Hub supports [SASL PLAIN](https://tools.ietf.org/html/rfc4616) and [AMQP Claims-Based-Security](https://www.oasis-open.org/committees/download.php/50506/amqp-cbs-v1%200-wd02%202013-08-12.doc).
+
+If you use AMQP claims-based-security, the standard specifies how to transmit these tokens.
+
+For SASL PLAIN, the **username** can be:
+
+* `{policyName}@sas.root.{iothubName}` if using IoT hub-level tokens.
+* `{deviceId}@sas.{iothubname}` if using device-scoped tokens.
+
+In both cases, the password field contains the token, as described in [SAS token structure](#sas-token-structure).
+
+HTTPS implements authentication by including a valid token in the **Authorization** request header.
+
+For example, Username (DeviceId is case-sensitive):
+`iothubname.azure-devices.net/DeviceId`
+
+Password (You can generate a SAS token with the CLI extension command [az iot hub generate-sas-token](/cli/azure/iot/hub#az-iot-hub-generate-sas-token), or the [Azure IoT Hub extension for Visual Studio Code](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-toolkit)):
+
+`SharedAccessSignature sr=iothubname.azure-devices.net%2fdevices%2fDeviceId&sig=kPszxZZZZZZZZZZZZZZZZZAhLT%2bV7o%3d&se=1487709501`
+
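+As a sketch of the HTTPS case, the following hypothetical example uses the Python `requests` package to send a device-to-cloud message with a SAS token in the **Authorization** header. The host name, device ID, token, and `api-version` value are placeholder assumptions, not values taken from this article.
+
+```python
+# Minimal sketch (pip install requests): send a device-to-cloud message over HTTPS.
+# The host name, device ID, SAS token, and api-version below are placeholders.
+import requests
+
+hub_host = "myhub.azure-devices.net"
+device_id = "device1"
+sas_token = "SharedAccessSignature sr=...&sig=...&se=..."
+
+url = f"https://{hub_host}/devices/{device_id}/messages/events?api-version=2021-04-12"
+headers = {
+    "Authorization": sas_token,
+    "Content-Type": "application/json",
+}
+
+response = requests.post(url, headers=headers, json={"temperature": 21.5})
+response.raise_for_status()  # IoT Hub returns 204 No Content when the message is accepted
+```
+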
+> [!NOTE]
+> The [Azure IoT SDKs](iot-hub-devguide-sdks.md) automatically generate tokens when connecting to the service. In some cases, the Azure IoT SDKs do not support all the protocols or all the authentication methods.
+
+### Special considerations for SASL PLAIN
+
+When using SASL PLAIN with AMQP, a client connecting to an IoT hub can use a single token for each TCP connection. When the token expires, the TCP connection disconnects from the service and triggers a reconnection. This behavior, while not problematic for a back-end app, is damaging for a device app for the following reasons:
+
+* Gateways usually connect on behalf of many devices. When using SASL PLAIN, they have to create a distinct TCP connection for each device connecting to an IoT hub. This scenario considerably increases the consumption of power and networking resources, and increases the latency of each device connection.
+
+* Resource-constrained devices are adversely affected by the increased use of resources to reconnect after each token expiration.
+
+## Next steps
+
+Now that you have learned how to control access to IoT Hub, you may be interested in the following IoT Hub developer guide topics:
+
+* [Use device twins to synchronize state and configurations](iot-hub-devguide-device-twins.md)
+* [Invoke a direct method on a device](iot-hub-devguide-direct-methods.md)
+* [Schedule jobs on multiple devices](iot-hub-devguide-jobs.md)
iot-hub Authenticate Authorize X509 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/authenticate-authorize-x509.md
+
+ Title: Authenticate with X.509 certificates
+
+description: Understand how Azure IoT Hub uses X.509 certificates to authenticate IoT hubs and devices.
+++++ Last updated : 09/01/2023+++
+# Authenticate identities with X.509 certificates
+
+IoT Hub uses X.509 certificates to authenticate devices. X.509 authentication allows authentication of an IoT device at the physical layer as part of the Transport Layer Security (TLS) standard connection establishment.
+
+An X.509 CA certificate is a digital certificate that can sign other certificates. A digital certificate is considered an X.509 certificate if it conforms to the certificate formatting standard prescribed by the IETF's RFC 5280 standard. The certificate authority (CA) designation means that the certificate's holder can sign other certificates.
+
+This article describes how to use X.509 certificate authority (CA) certificates to authenticate devices connecting to IoT Hub, which includes the following steps:
+
+* How to get an X.509 CA certificate
+* How to register the X.509 CA certificate to IoT Hub
+* How to sign devices using X.509 CA certificates
+* How devices signed with X.509 CA are authenticated
++
+The X.509 CA feature enables device authentication to IoT Hub using a certificate authority (CA). It simplifies the initial device enrollment process and supply chain logistics during device manufacturing.
+
+## Authentication and authorization
+
+*Authentication* is the process of proving that you are who you say you are. Authentication verifies the identity of a user or device to IoT Hub. It's sometimes shortened to *AuthN*. *Authorization* is the process of confirming permissions for an authenticated user or device on IoT Hub. It specifies what resources and commands you're allowed to access, and what you can do with those resources and commands. Authorization is sometimes shortened to *AuthZ*.
+
+This article describes authentication using **X.509 certificates**. You can use any X.509 certificate to authenticate a device with IoT Hub by uploading either a certificate thumbprint or a certificate authority (CA) to Azure IoT Hub.
+
+X.509 certificates are used for authentication in IoT Hub, not authorization. Unlike with Azure Active Directory and shared access signatures, you can't customize permissions with X.509 certificates.
+
+## Enforce X.509 authentication
+
+For additional security, an IoT hub can be configured to not allow SAS authentication for devices and modules, leaving X.509 as the only accepted authentication option. Currently, this feature isn't available in the Azure portal. To configure it, set `disableDeviceSAS` and `disableModuleSAS` to `true` on the IoT Hub resource properties:
+
+```azurecli
+az resource update -n <iothubName> -g <resourceGroupName> --resource-type Microsoft.Devices/IotHubs --set properties.disableDeviceSAS=true properties.disableModuleSAS=true
+```
+
+## Benefits of X.509 CA certificate authentication
+
+X.509 certificate authority (CA) authentication is an approach for authenticating devices to IoT Hub that dramatically simplifies device identity creation and life-cycle management in the supply chain.
+
+A distinguishing attribute of X.509 CA authentication is the one-to-many relationship that a CA certificate has with its downstream devices. This relationship enables registration of any number of devices into IoT Hub by registering an X.509 CA certificate once. Otherwise, unique certificates would have to be pre-registered for every device before a device can connect. This one-to-many relationship also simplifies device certificate lifecycle management operations.
+
+Another important attribute of X.509 CA authentication is simplification of supply chain logistics. Secure authentication of devices requires that each device holds a unique secret like a key as the basis for trust. In certificate-based authentication, this secret is a private key. A typical device manufacturing flow involves multiple steps and custodians. Securely managing device private keys across multiple custodians and maintaining trust is difficult and expensive. Using certificate authorities solves this problem by signing each custodian into a cryptographic chain of trust rather than entrusting them with device private keys. Each custodian signs devices at their respective step of the manufacturing flow. The overall result is an optimal supply chain with built-in accountability through use of the cryptographic chain of trust.
+
+This process yields the most security when devices protect their unique private keys. To this end, we recommend using Hardware Secure Modules (HSM) capable of internally generating private keys.
+
+The Azure IoT Hub Device Provisioning Service (DPS) makes it easy to provision groups of devices to hubs. For more information, see [Tutorial: Provision multiple X.509 devices using enrollment groups](../iot-dps/tutorial-custom-hsm-enrollment-group-x509.md).
+
+## Get an X.509 CA certificate
+
+The X.509 CA certificate is the top of the chain of certificates for each of your devices. You may purchase or create one depending on how you intend to use it.
+
+For production environments, we recommend that you purchase an X.509 CA certificate from a professional certificate services provider. Purchasing a CA certificate has the benefit of the root CA acting as a trusted third party to vouch for the legitimacy of your devices. Consider this option if your devices are part of an open IoT network where they interact with third-party products or services.
+
+You may also create a self-signed X.509 CA certificate for testing purposes. For more information about creating certificates for testing, see [Create and upload certificates for testing](tutorial-x509-test-certs.md).
+
+>[!NOTE]
+>We do not recommend the use of self-signed certificates for production environments.
+
+Regardless of how you obtain your X.509 CA certificate, make sure to always keep its corresponding private key secret and protected. This precaution is necessary for building trust in the X.509 CA authentication.
+
+## Sign devices into the certificate chain of trust
+
+The owner of an X.509 CA certificate can cryptographically sign an intermediate CA that can in turn sign another intermediate CA, and so on, until the last intermediate CA terminates this process by signing a device certificate. The result is a cascaded chain of certificates known as a *certificate chain of trust*. This delegation of trust is important because it establishes a cryptographically verifiable chain of custody and avoids sharing of signing keys.
+
+![Diagram that shows the certificates in a chain of trust.](./media/generic-cert-chain-of-trust.png)
+
+The device certificate (also called a leaf certificate) must have its common name (CN) set to the **device ID** (`CN=deviceId`) that was used when registering the IoT device in Azure IoT Hub. This setting is required for authentication.
+
+For modules using X.509 authentication, the module's certificate must have its common name (CN) formatted like `CN=deviceId/moduleId`.
+
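+To illustrate the common name requirement, here's a hypothetical sketch that uses the Python `cryptography` package to create a device key pair and a certificate signing request (CSR) whose CN is set to the device ID. The device ID and file names are placeholders, and the CSR still has to be signed by your final intermediate CA to produce the device certificate.
+
+```python
+# Minimal sketch (pip install cryptography): create a device key and a CSR with CN=deviceId.
+# The device ID and output file names are placeholders.
+from cryptography import x509
+from cryptography.hazmat.primitives import hashes, serialization
+from cryptography.hazmat.primitives.asymmetric import rsa
+from cryptography.x509.oid import NameOID
+
+device_id = "device-001"  # must match the device ID registered in IoT Hub
+
+key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
+
+csr = (
+    x509.CertificateSigningRequestBuilder()
+    .subject_name(x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, device_id)]))
+    .sign(key, hashes.SHA256())
+)
+
+with open("device-001.key", "wb") as f:
+    f.write(key.private_bytes(
+        encoding=serialization.Encoding.PEM,
+        format=serialization.PrivateFormat.TraditionalOpenSSL,
+        encryption_algorithm=serialization.NoEncryption(),
+    ))
+
+with open("device-001.csr", "wb") as f:
+    f.write(csr.public_bytes(serialization.Encoding.PEM))
+```
+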
+Learn how to [create a certificate chain](https://github.com/Azure/azure-iot-sdk-c/blob/master/tools/CACertificates/CACertificateOverview.md) as done when signing devices.
+
+## Register the X.509 CA certificate to IoT Hub
+
+Register your X.509 CA certificate to IoT Hub, which uses it to authenticate your devices during registration and connection. Registering the X.509 CA certificate is a two-step process that includes uploading the certificate file and then establishing proof of possession.
+
+The upload process entails uploading a file that contains your certificate. This file should never contain any private keys.
+
+The proof of possession step involves a cryptographic challenge and response process between you and IoT Hub. Given that digital certificate contents are public and therefore susceptible to eavesdropping, IoT Hub has to verify that you really own the CA certificate. You can choose to either automatically or manually verify ownership. For manual verification, Azure IoT Hub generates a random challenge that you sign with the CA certificate's corresponding private key. If you kept the private key secret and protected as recommended, then only you possess the knowledge to complete this step. Secrecy of private keys is the source of trust in this method. After signing the challenge, you complete this step and manually verify your certificate by uploading a file containing the results.
+
+Learn how to [register your CA certificate](tutorial-x509-test-certs.md#register-your-subordinate-ca-certificate-to-your-iot-hub).
+
+## Authenticate devices signed with X.509 CA certificates
+
+Every IoT hub has an identity registry that stores information about the devices and modules permitted to connect to it. Before a device or module can connect, there must be an entry for that device or module in the IoT hub's identity registry. A device or module authenticates with the IoT hub based on credentials stored in the identity registry.
+
+With your X.509 CA certificate registered and devices signed into a certificate chain of trust, the final step is device authentication when the device connects. When an X.509 CA-signed device connects, it uploads its certificate chain for validation. The chain includes all intermediate CA and device certificates. With this information, IoT Hub authenticates the device in a two-step process. IoT Hub cryptographically validates the certificate chain for internal consistency, and then issues a proof-of-possession challenge to the device. IoT Hub declares the device authentic on a successful proof-of-possession response from the device. This declaration assumes that the device's private key is protected and that only the device can successfully respond to this challenge. We recommend using secure chips like Hardware Secure Modules (HSM) in devices to protect private keys.
+
+A successful device connection to IoT Hub completes the authentication process and is also indicative of a proper setup. Every time a device connects, IoT Hub renegotiates the TLS session and verifies the device's X.509 certificate.
+
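+As an illustration of that connection flow, here's a hypothetical sketch that uses the Azure IoT Hub device SDK for Python (`azure-iot-device`). The host name, device ID, and file paths are placeholders, and the certificate file is assumed to contain the device certificate followed by its intermediate CA certificates.
+
+```python
+# Minimal sketch (pip install azure-iot-device): connect a device with an X.509 certificate chain.
+# The host name, device ID, and file paths below are placeholders.
+from azure.iot.device import IoTHubDeviceClient, Message, X509
+
+device_cert = X509(
+    cert_file="device-001-full-chain.cert.pem",  # device certificate plus intermediate CA certificates
+    key_file="device-001.key.pem",
+)
+
+client = IoTHubDeviceClient.create_from_x509_certificate(
+    x509=device_cert,
+    hostname="myhub.azure-devices.net",
+    device_id="device-001",
+)
+
+client.connect()  # the TLS handshake presents the certificate chain for validation
+client.send_message(Message("test message"))
+client.shutdown()
+```
+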
+## Revoke a device certificate
+
+IoT Hub doesn't check certificate revocation lists from the certificate authority when authenticating devices with certificate-based authentication. If you have a device that needs to be blocked from connecting to IoT Hub because of a potentially compromised certificate, you should disable the device in the identity registry. For more information, see [Disable or delete a device in an IoT hub](./iot-hub-create-through-portal.md#disable-or-delete-a-device-in-an-iot-hub).
+
+## Example scenario
+
+Company-X makes Smart-X-Widgets that are designed for professional installation. Company-X outsources both manufacturing and installation. Factory-Y manufactures the Smart-X-Widgets and Technician-Z installs them. Company-X wants the Smart-X-Widget shipped directly from Factory-Y to Technician-Z for installation and then for it to connect directly to Company-X's instance of IoT Hub. To make this happen, Company-X needs to complete a few one-time setup operations to prime Smart-X-Widget for automatic connection. This end-to-end scenario includes the following steps:
+
+1. Acquire the X.509 CA certificate
+
+2. Register the X.509 CA certificate to IoT Hub
+
+3. Sign devices into a certificate chain of trust
+
+4. Connect the devices
+
+These steps are demonstrated in [Tutorial: Create and upload certificates for testing](./tutorial-x509-test-certs.md).
+
+### Acquire the certificate
+
+Company-X can either purchase an X.509 CA certificate from a public root certificate authority or create one through a self-signed process. Either option entails two basic steps: generating a public/private key pair and signing the public key into a certificate.
+
+Details on how to accomplish these steps differ with various service providers.
++
+#### Purchase a certificate
+
+Purchasing a CA certificate has the benefit of having a well-known root CA act as a trusted third party to vouch for the legitimacy of IoT devices when the devices connect. Choose this option if your devices interact with third-party products or services.
+
+To purchase an X.509 CA certificate, choose a root certificate service provider. The root CA provider will guide you on how to create the public/private key pair and how to generate a certificate signing request (CSR) for their services. A CSR is the formal process of applying for a certificate from a certificate authority. The outcome of this purchase is a certificate for use as an authority certificate. Given the ubiquity of X.509 certificates, the certificate is likely to be properly formatted according to the IETF's RFC 5280 standard.
+
+#### Create a self-signed certificate
+
+The process to create a self-signed X.509 CA certificate is similar to purchasing one, except that it doesn't involve a third-party signer like the root certificate authority. In our example, Company-X would sign its authority certificate instead of a root certificate authority.
+
+You might choose this option for testing until you're ready to purchase an authority certificate. You could also use a self-signed X.509 CA certificate in production if your devices don't connect to any third-party services outside of IoT Hub.
+
+### Register the certificate to IoT Hub
+
+Company-X needs to register the X.509 CA to IoT Hub, where it serves to authenticate Smart-X-Widgets as they connect. This one-time process enables the authentication and management of any number of Smart-X-Widget devices. The one-to-many relationship between the CA certificate and device certificates is one of the main advantages of using the X.509 CA authentication method. The alternative would be to upload individual certificate thumbprints for each and every Smart-X-Widget device, thereby adding to operational costs.
+
+Registering the X.509 CA certificate is a two-step process: upload the certificate then provide proof-of-possession.
++
+#### Upload the certificate
+
+The X.509 CA certificate upload process is just that: uploading the CA certificate to IoT Hub. IoT Hub expects the certificate in a file.
+
+The certificate file must not under any circumstances contain any private keys. Best practices from standards governing Public Key Infrastructure (PKI) mandate that knowledge of Company-X's private key resides exclusively within Company-X.
+
+#### Prove possession
+
+The X.509 CA certificate, just like any digital certificate, is public information that is susceptible to eavesdropping. As such, an eavesdropper may intercept a certificate and try to upload it as their own. In our example, IoT Hub has to make sure that the CA certificate Company-X uploaded really belongs to Company-X. It does so by challenging Company-X to prove that they possess the certificate through a [proof-of-possession (PoP) flow](https://tools.ietf.org/html/rfc5280#section-3.1).
+
+For the proof-of-possession flow, IoT Hub generates a random number to be signed by Company-X using its private key. If Company-X followed PKI best practices and protected their private key, then only they would be able to correctly respond to the proof-of-possession challenge. IoT Hub proceeds to register the X.509 CA certificate upon a successful response of the proof-of-possession challenge.
+
+A successful response to the proof-of-possession challenge from IoT Hub completes the X.509 CA registration.
+
+### Sign devices into a certificate chain of trust
+
+IoT requires a unique identity for every device that connects. For certificate-based authentication, these identities are in the form of certificates. In our example, certificate-based authentication means that every Smart-X-Widget must possess a unique device certificate.
+
+A valid but inefficient way to provide unique certificates on each device is to pre-generate certificates for Smart-X-Widgets and to trust supply chain partners with the corresponding private keys. For Company-X, this means entrusting both Factory-Y and Technician-Z. This method comes with challenges that must be overcome to ensure trust, as follows:
+
+* Having to share device private keys with supply chain partners, besides ignoring PKI best practices of never sharing private keys, makes building trust in the supply chain expensive. It requires systems like secure rooms to house device private keys and processes like periodic security audits. Both add cost to the supply chain.
+
+* Securely accounting for devices in the supply chain, and later managing them in deployment, becomes a one-to-one task for every key-to-device pair from the point of device unique certificate (and private key) generation to device retirement. This precludes group management of devices unless the concept of groups is explicitly built into the process somehow. Secure accounting and device life-cycle management, therefore, becomes a heavy operations burden.
+
+X.509 CA certificate authentication offers elegant solutions to these challenges by using certificate chains. A certificate chain results from a CA signing an intermediate CA that in turn signs another intermediate CA, and so on, until a final intermediate CA signs a device. In our example, Company-X signs Factory-Y, which in turn signs Technician-Z that finally signs Smart-X-Widget.
++
+This cascade of certificates in the chain represents the logical hand-off of authority. Many supply chains follow this logical hand-off whereby each intermediate CA gets signed into the chain while receiving all upstream CA certificates, and the last intermediate CA finally signs each device and injects all the authority certificates from the chain into the device. This hand-off is common when the contracted manufacturing company with a hierarchy of factories commissions a particular factory to do the manufacturing. While the hierarchy may be several levels deep (for example, by geography, product type, or manufacturing line), only the factory at the end of the hierarchy interacts with the device, but the chain of trust is maintained from the top of the hierarchy.
+
+Alternate chains may have different intermediate CAs interact with the device, in which case the CA interacting with the device injects the certificate chain content at that point. Hybrid models are also possible where only some of the CAs have physical interaction with the device.
+
+The following diagram shows how the certificate chain of trust comes together in our Smart-X-Widget example.
++
+1. Company-X never physically interacts with any of the Smart-X-Widgets. It initiates the certificate chain of trust by signing Factory-Y's intermediate CA certificate.
+1. Factory-Y now has its own intermediate CA certificate and a signature from Company-X. It passes copies of these items to the device. It also uses its intermediate CA certificate to sign Technician-Z's intermediate CA certificate and the Smart-X-Widget device certificate.
+1. Technician-Z now has its own intermediate CA certificate and a signature from Factory-Y. It passes copies of these items to the device. It also uses its intermediate CA certificate to sign the Smart-X-Widget device certificate.
+1. Every Smart-X-Widget device now has its own unique device certificate and copies of the public keys and signatures from each intermediate CA certificate that it interacted with throughout the supply chain. These certificates and signatures can be traced back to the original Company-X root.
+
+The CA method of authentication infuses secure accountability into the device manufacturing supply chain. Because of the certificate chain process, the actions of every member in the chain are cryptographically recorded and verifiable.
+
+This process relies on the assumption that the unique device public/private key pair is created independently and that the private key is protected within the device always. Fortunately, secure silicon chips exist in the form of Hardware Secure Modules (HSM) that are capable of internally generating keys and protecting private keys. Company-X only needs to add one such secure chip into Smart-X-Widget's component bill of materials.
+
+### Authenticate devices
+
+Once the top-level CA certificate is registered to IoT Hub and the devices have their unique certificates, how do they connect? By registering an X.509 CA certificate to IoT Hub one time, how do potentially millions of devices connect and get authenticated the first time? Through the same certificate upload and proof-of-possession flow that we encountered earlier when registering the X.509 CA certificate.
+
+Devices manufactured for X.509 CA authentication are equipped with unique device certificates and a certificate chain from their respective manufacturing supply chain. Device connection, even for the first time, happens in a two-step process: certificate chain upload and proof-of-possession.
+
+During the certificate chain upload, the device uploads its unique certificate and its certificate chain to IoT Hub. Using the pre-registered X.509 CA certificate, IoT Hub validates that the uploaded certificate chain is internally consistent and that the chain was originated by the valid owner of the X.509 CA certificate. As with the X.509 CA registration process, IoT Hub uses a proof-of-possession challenge-response process to ascertain that the chain, and therefore the device certificate, belongs to the device uploading it. A successful response triggers IoT Hub to accept the device as authentic and grant it connection.
+
+In our example, each Smart-X-Widget would upload its device unique certificate together with Factory-Y and Technician-Z X.509 CA certificates and then respond to the proof-of-possession challenge from IoT Hub.
++
+The foundation of trust rests in protecting private keys, including device private keys. We therefore can't stress enough the importance of secure silicon chips in the form of Hardware Secure Modules (HSM) for protecting device private keys, and the overall best practice of never sharing any private keys, like one factory entrusting another with its private key.
+
+## Next steps
+
+Use the Device Provisioning Service to [Provision multiple X.509 devices using enrollment groups](../iot-dps/tutorial-custom-hsm-enrollment-group-x509.md).
+
+To learn more about the fields that make up an X.509 certificate, see [X.509 certificates](reference-x509-certificates.md).
+
+If you have a root CA certificate or subordinate CA certificate and you want to upload it to your IoT hub, you must verify that you own that certificate. For more information, see [Tutorial: Create and upload certificates for testing](tutorial-x509-test-certs.md).
iot-hub Iot Hub Devguide Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-devguide-security.md
- Title: Access control and security for IoT Hub
-description: Overview on how to control access to IoT Hub, includes links to depth articles on AAD integration and SAS options.
----- Previously updated : 04/15/2021---
-# Control access to IoT Hub
-
-This article describes the options for securing your IoT hub. IoT Hub uses *permissions* to grant access to each IoT hub endpoint. Permissions limit the access to an IoT hub based on functionality.
-
-There are three different ways for controlling access to IoT Hub:
--- **Azure Active Directory (Azure AD) integration** for service APIs. Azure provides identity-based authentication with AAD and fine-grained authorization with Azure role-based access control (Azure RBAC). Azure AD and RBAC integration is supported for IoT hub service APIs only. To learn more, see [Control access to IoT Hub using Azure Active Directory](iot-hub-dev-guide-azure-ad-rbac.md).-- **Shared access signatures** lets you group permissions and grant them to applications using access keys and signed security tokens. To learn more, see [Control access to IoT Hub using shared access signature](iot-hub-dev-guide-sas.md). -- **Per-device security credentials**. Each IoT Hub contains an [identity registry](iot-hub-devguide-identity-registry.md) For each device in this identity registry, you can configure security credentials that grant DeviceConnect permissions scoped to the that device's endpoints. To learn more, see [Authenticating a device to IoT Hub](iot-hub-dev-guide-sas.md#authenticating-a-device-to-iot-hub).--
-> [!Tip]
-> You can enable a lock on your IoT resources to prevent them being accidentally or maliciously deleted. To learn more about Azure Resource locks, please visit, [Lock your resources to protect your infrastructure](../azure-resource-manager/management/lock-resources.md?tabs=json)
-
-## Next steps
--- [Control access to IoT Hub using Azure Active Directory](iot-hub-dev-guide-azure-ad-rbac.md)-- [Control access to IoT Hub using shared access signature](iot-hub-dev-guide-sas.md)-- [Authenticating a device to IoT Hub](iot-hub-dev-guide-sas.md#authenticating-a-device-to-iot-hub)
iot-hub Iot Hub X509 Certificate Concepts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-x509-certificate-concepts.md
- Title: Understand cryptography and X.509 certificates for Azure IoT Hub | Microsoft Docs
-description: Understand cryptography and X.509 PKI for Azure IoT Hub
---- Previously updated : 01/09/2023--
-#Customer intent: As a developer, I want to understand X.509 Public Key Infrastructure (PKI) and public key cryptography so I can use X.509 certificates to authenticate devices to an IoT hub.
--
-# Understand public key cryptography and X.509 public key infrastructure
-
-You can use X.509 certificates to authenticate devices to an Azure IoT hub. A certificate is a digital document that contains the device's public key and can be used to verify that the device is what it claims to be. X.509 certificates and certificate revocation lists (CRLs) are documented by [RFC 5280](https://tools.ietf.org/html/rfc5280). Certificates are just one part of an X.509 public key infrastructure (PKI). To understand X.509 PKI, you need to understand cryptographic algorithms, cryptographic keys, certificates, and certificate authorities (CAs):
-
-* **Algorithms** define how original plaintext data is transformed into ciphertext and back to plaintext.
-* **Keys** are random or pseudorandom data strings used as input to an algorithm.
-* **Certificates** are digital documents that contain an entity's public key and enable you to determine whether the subject of the certificate is who or what it claims to be.
-* **Certificate Authorities** attest to the authenticity of certificate subjects.
-
-You can purchase a certificate from a certificate authority (CA). You can also, for testing and development or if you're working in a self-contained environment, create a self-signed root CA. For example, if you want to test IoT Hub authentication on devices that you own, you can self-sign your root CA and use that to issue device certificates. You can also issue self-signed device certificates.
-
-Before discussing X.509 certificates in more detail and using them to authenticate devices to an IoT hub, here are the fundamental cryptography concepts on which certificates are based.
-
-## Cryptography
-
-Cryptography protects information and communications through *encryption* and *decryption*. Encryption is the process of translating plain text data (*plaintext*) into something that appears to be random and meaningless (*ciphertext*). Decryption is the process of converting ciphertext back to plaintext. Cryptography is concerned with the following objectives:
-
-* **Confidentiality**: The information can be understood by only the intended audience.
-* **Integrity**: The information can't be altered in storage or in transit.
-* **Non-repudiation**: The creator of information can't later deny that creation.
-* **Authentication**: The sender and receiver can confirm each other's identity.
-
-## Encryption
-
-The encryption process requires an algorithm and a key. The algorithm defines how data is transformed from plaintext into ciphertext and back to plaintext. A key is a random string of data used as input to the algorithm. All of the security of the process is contained in the key. Therefore, the key must be stored securely. The details of the most popular algorithms, however, are publicly available.
-
-There are two types of encryption. Symmetric encryption uses the same key for both encryption and decryption. Asymmetric encryption uses different but mathematically related keys to perform encryption and decryption.
-
-### Symmetric encryption
-
-Symmetric encryption uses the same key to encrypt plaintext into ciphertext and decrypt ciphertext back into plaintext. The necessary length of the key, expressed in number of bits, is determined by the algorithm. After the key is used to encrypt plaintext, the encrypted message is sent to the recipient who then decrypts the ciphertext. The symmetric key must be securely transmitted to the recipient. Sending the key is the greatest security risk when using a symmetric algorithm.
--
-### Asymmetric encryption
-
-If only symmetric encryption is used, the problem is that all parties to the communication must possess the private key. However, it's possible that unauthorized third parties can capture the key during transmission to authorized users. To address this issue, you can use asymmetric or public key cryptography instead.
-
-In asymmetric cryptography, every user has two mathematically related keys called a key pair. One key is public and the other key is private. The key pair ensures that only the recipient has access to the private key needed to decrypt the data. The following illustration summarizes the asymmetric encryption process.
--
-1. The recipient creates a public-private key pair and sends the public key to a CA. The CA packages the public key in an X.509 certificate.
-
-1. The sending party obtains the recipient's public key from the CA.
-
-1. The sender encrypts plaintext data using an encryption algorithm. The recipient's public key is used to perform encryption.
-
-1. The sender transmits the ciphertext to the recipient. It isn't necessary to send the key because the recipient already has the private key needed to decrypt the ciphertext.
-
-1. The recipient decrypts the ciphertext by using the specified asymmetric algorithm and the private key.
-
-### Combining symmetric and asymmetric encryption
-
-Symmetric and asymmetric encryption can be combined to take advantage of their relative strengths. Symmetric encryption is much faster than asymmetric encryption, but, because of the necessity of sending private keys to other parties, it isn't as secure. To combine the two types together, symmetric encryption can be used to convert plaintext to ciphertext. Asymmetric encryption is used to exchange the symmetric key. This process is demonstrated by the following diagram.
--
-1. The sender retrieves the recipient's public key.
-
-1. The sender generates a symmetric key and uses it to encrypt the original data.
-
-1. The sender uses the recipient's public key to encrypt the symmetric key.
-
-1. The sender transmits the encrypted symmetric key and the ciphertext to the intended recipient.
-
-1. The recipient uses the private key that matches the recipient's public key to decrypt the sender's symmetric key.
-
-1. The recipient uses the symmetric key to decrypt the ciphertext.
-
-### Asymmetric signing
-
-Asymmetric algorithms can be used to protect data from modification and prove the identity of the data creator. The following illustration shows how asymmetric signing helps prove the sender's identity.
--
-1. The sender passes plaintext data through an asymmetric encryption algorithm, using the private key for encryption. Notice that this scenario reverses use of the private and public keys outlined in the preceding section, [Asymmetric encryption](#asymmetric-encryption).
-
-1. The resulting ciphertext is sent to the recipient.
-
-1. The recipient obtains the originator's public key from a directory.
-
-1. The recipient decrypts the ciphertext by using the originator's public key. The resulting plaintext proves the originator's identity because only the originator has access to the private key that initially encrypted the original text.
-
-## Signing
-
-Digital signing can be used to determine whether the data has been modified in transit or at rest. The data is passed through a hash algorithm, a one-way function that produces a mathematical result from the given message. The result is called a *hash value*, *message digest*, *digest*, *signature*, *fingerprint*, or *thumbprint*. A hash value can't be reversed to obtain the original message. Because a small change in the message results in a significant change in the *thumbprint*, the hash value can be used to determine whether a message has been altered. The following illustration shows how asymmetric encryption and hash algorithms can be used to verify that a message hasn't been modified.
--
-1. The sender creates a plaintext message.
-
-1. The sender hashes the plaintext message to create a message digest.
-
-1. The sender encrypts the digest using a private key.
-
-1. The sender transmits the plaintext message and the encrypted digest to the intended recipient.
-
-1. The recipient decrypts the digest by using the sender's public key.
-
-1. The recipient runs the same hash algorithm that the sender used over the message.
-
-1. The recipient compares the resulting signature to the decrypted signature. If the digests are the same, the message wasn't modified during transmission.
-
-## Next steps
-
-To learn more about the fields that make up an X.509 certificate, see [X.509 certificates](reference-x509-certificates.md).
-
-If you're already familiar with X.509 certificates, and you want to generate test versions that you can use to authenticate to your IoT hub, see the following articles:
-
-* [Tutorial: Create and upload certificates for testing](tutorial-x509-test-certs.md)
-* If you want to use self-signed certificates for testing, see the [Create a self-signed certificate](reference-x509-certificates.md#create-a-self-signed-certificate) section of [X.509 certificates](reference-x509-certificates.md).
-
- >[!IMPORTANT]
- >We recommend that you use certificates signed by an issuing Certificate Authority (CA), even for testing purposes. Never use self-signed certificates in production.
-
-If you have a root CA certificate or subordinate CA certificate and you want to upload it to your IoT hub, you must verify that you own that certificate. For more information, see [Tutorial: Create and upload certificates for testing](tutorial-x509-test-certs.md).
iot-hub Iot Hub X509ca Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-x509ca-overview.md
- Title: Overview of Azure IoT Hub X.509 CA security
-description: Overview - how to authenticate devices to IoT Hub using X.509 Certificate Authorities.
----- Previously updated : 07/14/2022----
-# Authenticate devices using X.509 CA certificates
-
-This article describes how to use X.509 certificate authority (CA) certificates to authenticate devices connecting to IoT Hub. In this article you will learn:
-
-* How to get an X.509 CA certificate
-* How to register the X.509 CA certificate to IoT Hub
-* How to sign devices using X.509 CA certificates
-* How devices signed with X.509 CA are authenticated
--
-The X.509 CA feature enables device authentication to IoT Hub using a certificate authority (CA). It simplifies the initial device enrollment process and supply chain logistics during device manufacturing. If you aren't familiar with X.509 CA certificates, see [Understand how X.509 CA certificates are used in the IoT industry](iot-hub-x509ca-concept.md) for more information.
-
-## Get an X.509 CA certificate
-
-The X.509 CA certificate is at the top of the chain of certificates for each of your devices. You may purchase or create one depending on how you intend to use it.
-
-For production environments, we recommend that you purchase an X.509 CA certificate from a professional certificate services provider. Purchasing a CA certificate has the benefit of the root CA acting as a trusted third party to vouch for the legitimacy of your devices. Consider this option if your devices are part of an open IoT network where they interact with third-party products or services.
-
-You may also create a self-signed X.509 CA certificate for testing purposes. For more information about creating certificates for testing, see [Create and upload certificates for testing](tutorial-x509-test-certs.md).
-
->[!NOTE]
->We do not recommend the use of self-signed certificates for production environments.
-
-Regardless of how you obtain your X.509 CA certificate, make sure to keep its corresponding private key secret and protected always. This precaution is necessary for building trust in the X.509 CA authentication.
-
-## Sign devices into the certificate chain of trust
-
-The owner of an X.509 CA certificate can cryptographically sign an intermediate CA that can in turn sign another intermediate CA, and so on, until the last intermediate CA terminates this process by signing a device certificate. The result is a cascaded chain of certificates known as a *certificate chain of trust*. In real life this plays out as delegation of trust towards signing devices. This delegation is important because it establishes a cryptographically verifiable chain of custody and avoids sharing of signing keys.
-
-![Diagram that shows the certificates in a chain of trust.](./media/generic-cert-chain-of-trust.png)
-
-The device certificate (also called a leaf certificate) must have the *subject name* set to the **device ID** (`CN=deviceId`) that was used when registering the IoT device in Azure IoT Hub. This setting is required for authentication.
-
-Learn how to [create a certificate chain](https://github.com/Azure/azure-iot-sdk-c/blob/master/tools/CACertificates/CACertificateOverview.md) as done when signing devices.
-
-## Register the X.509 CA certificate to IoT Hub
-
-Register your X.509 CA certificate to IoT Hub, which uses it to authenticate your devices during registration and connection. Registering the X.509 CA certificate is a two-step process that includes uploading the certificate file and then establishing proof of possession.
-
-The upload process entails uploading a file that contains your certificate. This file should never contain any private keys.
-
-The proof of possession step involves a cryptographic challenge and response process between you and IoT Hub. Given that digital certificate contents are public and therefore susceptible to eavesdropping, IoT Hub has to verify that you really own the CA certificate. You can choose to either automatically or manually verify ownership. For manual verification, Azure IoT Hub generates a random challenge that you sign with the CA certificate's corresponding private key. If you kept the private key secret and protected as recommended, then only you possess the knowledge to complete this step. Secrecy of private keys is the source of trust in this method. After signing the challenge, you complete this step and manually verify your certificate by uploading a file containing the results.
-
-Learn how to [register your CA certificate](tutorial-x509-test-certs.md#register-your-subordinate-ca-certificate-to-your-iot-hub).
-
-## Create a device on IoT Hub
-
-To prevent device impersonation, IoT Hub requires that you let it know what devices to expect. You do this by creating a device entry in the IoT hub's device registry. This process is automated when using [IoT Hub Device Provisioning Service](../iot-dps/about-iot-dps.md).
-
-Learn how to [manually create a device in IoT Hub](./iot-hub-create-through-portal.md#register-a-new-device-in-the-iot-hub).
-
-## Authenticate devices signed with X.509 CA certificates
-
-With your X.509 CA certificate registered and devices signed into a certificate chain of trust, the final step is device authentication when the device connects. When an X.509 CA-signed device connects, it uploads its certificate chain for validation. The chain includes all intermediate CA and device certificates. With this information, IoT Hub authenticates the device in a two-step process. IoT Hub cryptographically validates the certificate chain for internal consistency, and then issues a proof-of-possession challenge to the device. IoT Hub declares the device authentic on a successful proof-of-possession response from the device. This declaration assumes that the device's private key is protected and that only the device can successfully respond to this challenge. We recommend using secure chips like Hardware Secure Modules (HSM) in devices to protect private keys.
-
-A successful device connection to IoT Hub completes the authentication process and is also indicative of a proper setup. Every time a device connects, IoT Hub renegotiates the TLS session and verifies the device's X.509 certificate.
-
-## Revoke a device certificate
-
-IoT Hub doesn't check certificate revocation lists from the certificate authority when authenticating devices with certificate-based authentication. If you have a device that needs to be blocked from connecting to IoT Hub because of a potentially compromised certificate, you should disable the device in the identity registry. For more information, see [Disable or delete a device in an IoT hub](./iot-hub-create-through-portal.md#disable-or-delete-a-device-in-an-iot-hub).
-
-## Next Steps
-
-Learn about [the value of X.509 CA authentication](iot-hub-x509ca-concept.md) in IoT.
-
-Get started with [IoT Hub Device Provisioning Service](../iot-dps/index.yml).
iot Howto Use Iot Explorer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot/howto-use-iot-explorer.md
Title: Install and use Azure IoT explorer | Microsoft Docs
description: Install the Azure IoT explorer tool and use it to interact with IoT Plug and Play devices connected to IoT hub. Although this article focuses on working with IoT Plug and Play devices, you can use the tool with any device connected to your hub. Previously updated : 06/14/2022 Last updated : 09/29/2023
On the **Component** page, you can view the read-only properties, update writabl
You can view the read-only properties defined in an interface on the **Properties (read-only)** tab. You can update the writable properties defined in an interface on the **Properties (writable)** tab: 1. Go to the **Properties (writable)** tab.
-1. Click the property you'd like to update.
+1. Select the property you'd like to update.
1. Enter the new value for the property. 1. Preview the payload to be sent to the device. 1. Submit the change.
key-vault Tutorial Javascript Virtual Machine https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/general/tutorial-javascript-virtual-machine.md
To create a Linux VM using the Azure CLI, use the [az vm create](/cli/azure/vm)
az vm create \ --resource-group myResourceGroup \ --name myVM \
- --image UbuntuLTS \
+ --image Ubuntu2204 \
--admin-username azureuser \ --generate-ssh-keys ```
key-vault Tutorial Python Virtual Machine https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/general/tutorial-python-virtual-machine.md
To create a Linux VM using the Azure CLI, use the [az vm create](/cli/azure/vm)
az vm create \ --resource-group myResourceGroup \ --name myVM \
- --image UbuntuLTS \
+ --image Ubuntu2204 \
--admin-username azureuser \ --generate-ssh-keys ```
lab-services Class Type Arcgis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/class-type-arcgis.md
The steps in this section show how to set up the template VM:
3. Set up external backup storage for students. Students can save files directly to their assigned VM since all changes that they make are saved across sessions. However, we recommend that students back up their work to storage that is external from their VM for a few reasons: - To enable students to access their work after the class and lab ends.
- - In case the student gets their VM into a bad state and their image needs to be [reset](how-to-manage-vm-pool.md#reset-lab-vms).
+ - In case the student gets their VM into a bad state and their image needs to be [reimaged](how-to-manage-vm-pool.md#reimage-lab-vms).
With ArcGIS, each student should back up the following files at the end of each work session:
lab-services Classroom Labs Scenarios https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/classroom-labs-scenarios.md
The following table shows the corresponding mapping of organization roles to Azu
| Org. role | Azure AD role | Description | | | | | | Administrator | - Subscription Owner<br/>- Subscription Contributor | Create lab plan in Azure portal. |
-| | Lab Operator | Optionally, assign to other administrator to manage lab users & schedules, publish labs, and reset/start/stop/connect lab VMs. |
| Educator | Lab Creator | Create and manage the labs they created. | | | Lab Contributor | Optionally, assign to an educator to create and manage all labs (when assigned at the resource group level). |
-| | Lab Operator | Optionally, assign to other educators to manage lab users & schedules, publish labs, and reset/start/stop/connect lab VMs. |
-| | Lab Assistant | Optionally, assign to other educators to help support lab students by allowing reset/start/stop/connect lab VMs. |
+| | Lab Assistant | Optionally, assign to other educators to help support lab students by allowing reimage/start/stop/connect lab VMs. |
| Student | | Students don't need an Azure AD role. Educators [grant students access](./how-to-manage-lab-users.md) in the lab configuration or students are automatically granted access, for example when using [Teams](./how-to-manage-labs-within-teams.md#manage-lab-user-lists-in-teams) or [Canvas](./how-to-manage-labs-within-canvas.md#manage-lab-user-lists-in-canvas). | | Others | Lab Services Reader | Optionally, provide access to see all lab plans and labs without permission to modify. |
The following table shows the corresponding mapping of organization roles to Azu
| Org. role | Azure AD role | Description | | | | | | Administrator | - Subscription Owner<br/>- Subscription Contributor | Create lab plan in Azure portal. |
-| | Lab Operator | Optionally, assign to other administrator to manage lab users & schedules, publish labs, and reset/start/stop/connect lab VMs. |
-| Educator | Lab Operator | Manage lab users & schedules, publish labs, and reset/start/stop/connect lab VMs. |
-| | Lab Assistant | Optionally, assign to other educators to help support lab students by allowing reset/start/stop/connect lab VMs. |
+| Educator | - Lab Assistant | Optionally, assign to other educators to help support lab students by allowing reimage/start/stop/connect lab VMs. |
| Student | | Students don't need an Azure AD role. Educators [grant students access](./how-to-manage-lab-users.md) in the lab configuration or students are automatically granted access, for example when using [Teams](./how-to-manage-labs-within-teams.md#manage-lab-user-lists-in-teams) or [Canvas](./how-to-manage-labs-within-canvas.md#manage-lab-user-lists-in-canvas). | | Others | Lab Services Reader | Optionally, provide access to see all lab plans and labs without permission to modify. |
The following table shows the corresponding mapping of organization roles to Azu
| Org. role | Azure AD role | Description | | | | | | Educator | - Subscription Owner<br/>- Subscription Contributor | Create lab plan in Azure portal. As an Owner, you can also fully manage all labs. |
-| | Lab Assistant | Optionally, assign to other educators to help support lab students by allowing reset/start/stop/connect lab VMs. |
+| | Lab Assistant | Optionally, assign to other educators to help support lab students by allowing reimage/start/stop/connect lab VMs. |
| Student | | Students don't need an Azure AD role. Educators [grant students access](./how-to-manage-lab-users.md) in the lab configuration or students are automatically granted access, for example when using [Teams](./how-to-manage-labs-within-teams.md#manage-lab-user-lists-in-teams) or [Canvas](./how-to-manage-labs-within-canvas.md#manage-lab-user-lists-in-canvas). | | Others | Lab Services Reader | Optionally, provide access to see all lab plans and labs without permission to modify. |
lab-services Concept Lab Accounts Versus Lab Plans https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/concept-lab-accounts-versus-lab-plans.md
+
+ Title: Lab accounts versus lab plans
+
+description: Learn about the differences between lab accounts and lab plans in Azure Lab Services. Lab plans replace lab accounts and have some fundamental differences.
+++++ Last updated : 08/07/2023++
+# Lab accounts versus lab plans in Azure Lab Services
+
+In Azure Lab Services, lab plans replace lab accounts and there are some fundamental differences between the two concepts. In this article, you get an overview of the changes that come with lab plans and how lab plans differ from lab accounts. Lab plans bring improvements in performance, reliability, and scalability. Lab plans also give you more flexibility for managing labs, using capacity, and tracking costs.
++
+## Overview
+
+Lab plans replace lab accounts and although they come with key new features, they share many familiar concepts. Lab plans, similar to lab accounts, serve as the collection of configurations and settings for creating labs. For example, to configure image galleries, shutdown settings, management of lab users, or to specify advanced networking settings.
+
+Lab plans also have fundamental differences. For example, labs created with lab plans are now an Azure resource in their own right, which makes them a sibling resource to lab plans.
+
+By using lab plans, you can unlock several new capabilities:
+
+**[Canvas Integration](how-to-configure-canvas-for-lab-plans.md)**. If your organization is using Canvas, educators no longer have to leave Canvas to create labs with Azure Lab Services. Students can connect to their virtual machine from inside their course in Canvas.
+
+**[Per-customer assigned capacity](capacity-limits.md#per-customer-assigned-capacity)**. You don't have to share capacity with others anymore. If your organization has requested more quota, Azure Lab Services allocates it just for you.
+
+**[Advanced networking](how-to-connect-vnet-injection.md)**. Advanced networking with virtual network injection replaces virtual network peering. In your Azure subscription, you can create a virtual network in the same region as the lab plan, and delegate a subnet to Azure Lab Services.
+
+**[Improved auto-shutdown](how-to-configure-auto-shutdown-lab-plans.md)**. Auto-shutdown settings are now available for Windows and Linux operating systems. Learn more about the [supported Linux distributions](./how-to-enable-shutdown-disconnect.md#supported-linux-distributions-for-automatic-shutdown).
+
+**[More built-in roles](./concept-lab-services-role-based-access-control.md)**. In addition to the Lab Creator built-in role, there are now more lab management roles, such as Lab Assistant. Learn more about [role-based access control in Azure Lab Services](./concept-lab-services-role-based-access-control.md).
+
+**[Improved cost tracking in Microsoft Cost Management](cost-management-guide.md#separate-the-costs)**. Lab virtual machines are now the cost unit tracked in Microsoft Cost Management. Tags for lab plan ID and lab name are automatically added to each cost entry. If you want to track the cost of a single lab, group the lab VM cost entries together by the lab name tag. Custom tags on labs also propagate to Microsoft Cost Management entries to allow further cost analysis.
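As an illustration of grouping cost entries by lab, here's a minimal sketch that works on a CSV export from Microsoft Cost Management. The column name and tag key below are assumptions; match them to the headers and tag names in your own export.

```python
# Minimal sketch: sum exported Cost Management entries per lab by using the
# lab name tag. The "Tags" and "CostInBillingCurrency" column names and the
# "labName" tag key are illustrative assumptions; adjust them to your export.
import pandas as pd

costs = pd.read_csv("cost-export.csv")

# Assume tags are exported as a JSON-like string in a "Tags" column.
costs["lab_name"] = costs["Tags"].str.extract(r'"labName"\s*:\s*"([^"]+)"', expand=False)

per_lab = costs.groupby("lab_name")["CostInBillingCurrency"].sum().sort_values(ascending=False)
print(per_lab)
```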
+
+**[Updates to lab owner experience](how-to-manage-labs.md)**. Choose to skip the template creation process when creating a new lab if you already have an image ready to use. In addition, you can add a non-admin user to lab VMs.
+
+**[Updates to lab user experience](how-to-manage-vm-pool.md#redeploy-lab-vms)**. In addition to reimaging their lab VM, lab users can now also redeploy their lab VM without losing the data inside the lab VM. In addition, the lab registration experience is simplified when you use labs in Teams, Canvas, or with Azure AD groups. In these cases, Azure Lab Services *automatically* assigns a lab VM to a lab user.
+
+**SDKs**. Azure Lab Services is now integrated with the [Az PowerShell module](/powershell/azure/release-notes-azureps) and supports Azure Resource Manager (ARM) templates. Also, you can use either the [.NET SDK](/dotnet/api/overview/azure/labservices) or [Python SDK](https://pypi.org/project/azure-mgmt-labservices/).
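To show what the SDK integration looks like, here's a minimal sketch with the Python management SDK (`azure-mgmt-labservices`). The client class and operation names are assumptions based on the package's typical surface; confirm them against the SDK reference linked above.

```python
# Minimal sketch: enumerate lab plans and labs in a subscription with the
# Azure Lab Services Python management SDK. The client class and operation
# names are assumptions; check the azure-mgmt-labservices reference docs.
from azure.identity import DefaultAzureCredential
from azure.mgmt.labservices import LabServicesClient

client = LabServicesClient(DefaultAzureCredential(), "<subscription-id>")

for plan in client.lab_plans.list_by_subscription():
    print("Lab plan:", plan.name)

for lab in client.labs.list_by_subscription():
    print("Lab:", lab.name)
```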
+
+## Difference between lab plans and lab accounts
+
+Lab plans replace lab accounts in Azure Lab Services. The following table lists the fundamental differences between lab plans and lab accounts:
+
+|Lab account|Lab plan|
+|-|-|
+|Lab account was the only resource that administrators could interact with inside the Azure portal.|Administrators can now manage two types of resources, lab plan and lab, in the Azure portal.|
+|Lab account served as the **parent** for the labs.|Lab plan is a **sibling** resource to the lab resource. Grouping of labs is now done by the resource group.|
+|Lab account served as a container for the labs. A change to the lab account often affected the labs under it.|The lab plan serves as a collection of configurations and settings that are applied when a lab is **created**. If you change a lab plan's settings, these changes won't affect any existing labs that were previously created from the lab plan. (The exception is the internal help information, which will affect all labs.)|
+
+Lab accounts and labs have a parent-child relationship. Moving to a sibling relationship between the lab plan and lab provides an upgraded experience. The following table compares the previous experience with a lab account and the new improved experience with a lab plan.
+
+|Feature/area|Lab account|Lab plan|
+|-|-|-|
+|Resource Management|Lab account was the only resource tracked in the Azure portal. All other resources were child resources of the lab account and tracked in Lab Services directly.|Lab plans and labs are now sibling resources in Azure. Administrators can use existing tools in the Azure portal to manage labs. Virtual machines continue to be child resources of labs.|
+|Cost tracking|In Microsoft Cost Management, admins could only track and analyze cost at the service level and at the lab account level.| Cost entries in Microsoft Cost Management are now for lab virtual machines. Automatic tags on each entry specify the lab plan ID and the lab name. You can analyze cost by lab plan, lab, or virtual machine from within the Azure portal. Custom tags on the lab will also show in the cost data.|
+|Selecting regions|By default, labs were created in the same geography as the lab account. A geography typically aligns with a country/region and contains one or more Azure regions. Lab owners weren't able to manage exactly which Azure region the labs resided in.|In the lab plan, administrators can now manage the exact Azure regions allowed for lab creation. By default, labs will be created in the same Azure region as the lab plan. </br> Note, when a lab plan has advanced networking enabled, labs are created in the same Azure region as the virtual network.|
+|Deletion experience|When a lab account is deleted, all labs within it are also deleted.|When deleting a lab plan, labs *aren't* deleted. After a lab plan is deleted, labs will keep references to their virtual network even if advanced networking is enabled. However, if a lab plan was connected to an Azure Compute Gallery, the labs can no longer export an image to that Azure Compute Gallery.|
+|Connecting to a virtual network|The lab account provided an option to peer to a virtual network. If you already had labs in the lab account before you peered to a virtual network, the virtual network connection didn't apply to existing labs. Admins couldn't tell which labs in the lab account were peered to the virtual network.|In a lab plan, admins set up the advanced networking only at the time of lab plan creation. Once a lab plan is created, you'll see a read-only connection to the virtual network. If you need to use another virtual network, create a new lab plan configured with the new virtual network.|
+|Labs portal experience|Labs are listed under lab accounts in [https://labs.azure.com](https://labs.azure.com).|Labs are listed under the resource group name in [https://labs.azure.com](https://labs.azure.com). If there are multiple lab plans in the same resource group, educators can choose which lab plan to use when creating the lab. <br/>Learn more about [resource group and lab plan structure](./concept-lab-services-role-based-access-control.md#resource-group-and-lab-plan-structure).|
+|Permissions needed to manage labs|To create a lab:</br>- **Lab Contributor** role on the lab account.<br/></br>To modify an existing lab:</br>- **Reader** role on the lab account.</br>- **Owner** or **Contributor** role on the lab (Lab creators are assigned the **Owner** role to any labs they create). | To create a lab:</br>- **Owner** or **Contributor** role on the resource group that contains the lab plan.</br>- **Lab Creator** role on the lab plan.</br><br/>To modify an existing lab:</br>- **Owner** or **Contributor** role on the lab (Lab creators are assigned the **Owner** role to any labs they create).<br/><br/>Learn more about [Azure Lab Services role-based access control](./concept-lab-services-role-based-access-control.md). |
+
+## Known issues
+
+- When using virtual network injection, use caution when making changes to the virtual network, the subnet, and the resources that Lab Services creates and attaches to the subnet. Also, labs that use advanced networking must be deleted before you delete the virtual network.
+
+- Moving lab plan and lab resources from one Azure region to another isn't supported.
+
+- You have to register the [Azure Compute resource provider](../azure-resource-manager/management/resource-providers-and-types.md) before Azure Lab Services can [create and attach an Azure Compute Gallery resource](how-to-attach-detach-shared-image-gallery.md#attach-an-existing-compute-gallery-to-a-lab-plan). A registration sketch follows this list.
+
+- If you're attaching an Azure Compute Gallery, the compute gallery and the lab plan must be in the same Azure region. Also, it's recommended that you select only this Azure region in the [enabled regions](./create-and-configure-labs-admin.md#enable-regions) setting.
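As a sketch of the resource provider registration mentioned in the list above, the following uses the `azure-mgmt-resource` Python SDK; the subscription ID is a placeholder.

```python
# Minimal sketch: register the Microsoft.Compute resource provider so that
# Azure Lab Services can create and attach an Azure Compute Gallery.
# The subscription ID is a placeholder.
from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import ResourceManagementClient

client = ResourceManagementClient(DefaultAzureCredential(), "<subscription-id>")

provider = client.providers.register("Microsoft.Compute")
print(provider.registration_state)  # Registration completes asynchronously.
```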
+
+## Next steps
+
+If you're using lab accounts, follow these steps to [migrate your lab accounts to lab plans](./migrate-to-2022-update.md).
+
+If you're new to Azure Lab Services, get started by [creating a new lab plan](./quick-create-resources.md).
lab-services Concept Lab Services Role Based Access Control https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/concept-lab-services-role-based-access-control.md
The following are the built-in roles supported by Azure Lab Services:
| Administrator | Lab Services Contributor | Grant the same permissions as the Owner role, except for assigning roles. Learn more about the [Lab Services Contributor role](#lab-services-contributor-role). |
| Lab management | Lab Creator | Grant permission to create labs and have full control over the labs that they create. Learn more about the [Lab Creator role](#lab-creator-role). |
| Lab management | Lab Contributor | Grant permission to help manage an existing lab, but not create new labs. Learn more about the [Lab Contributor role](#lab-contributor-role). |
-| Lab management | Lab Assistant | Grant permission to view an existing lab. Can also start, stop, or reset any VM in the lab. Learn more about the [Lab Assistant role](#lab-assistant-role). |
+| Lab management | Lab Assistant | Grant permission to view an existing lab. Can also start, stop, or reimage any VM in the lab. Learn more about the [Lab Assistant role](#lab-assistant-role). |
| Lab management | Lab Services Reader | Grant permission to view existing labs. Learn more about the [Lab Services Reader role](#lab-services-reader-role). |

## Role assignment scope
The following table shows common lab activities and the role that's needed for a user to perform them:
| Grant permission to create or manage your own labs for *all* lab plans within a resource group. | Lab management | [Lab Creator](#lab-creator-role) | Resource group |
| Grant permission to create or manage your own labs for a specific lab plan. | Lab management | [Lab Creator](#lab-creator-role) | Lab plan |
| Grant permission to co-manage a lab, but *not* the ability to create labs. | Lab management | [Lab Contributor](#lab-contributor-role) | Lab |
-| Grant permission to only start/stop/reset VMs for *all* labs within a resource group. | Lab management | [Lab Assistant](#lab-assistant-role) | Resource group |
-| Grant permission to only start/stop/reset VMs for a specific lab. | Lab management | [Lab Assistant](#lab-assistant-role) | Lab |
+| Grant permission to only start/stop/reimage VMs for *all* labs within a resource group. | Lab management | [Lab Assistant](#lab-assistant-role) | Resource group |
+| Grant permission to only start/stop/reimage VMs for a specific lab. | Lab management | [Lab Assistant](#lab-assistant-role) | Lab |
> [!IMPORTANT]
> An organization's subscription is used to manage billing and security for all Azure resources and services. You can assign the Owner or Contributor role on the [subscription](./administrator-guide.md#subscription). Typically, only administrators have subscription-level access because this includes full access to all resources in the subscription.
When you assign the Lab Contributor role on the lab, the user can manage the assigned lab.
### Lab Assistant role
-Assign the Lab Assistant role to grant a user permission to view a lab, and start, stop, and reset lab virtual machines for the lab.
+Assign the Lab Assistant role to grant a user permission to view a lab, and start, stop, and reimage lab virtual machines for the lab.
Assign the Lab Assistant role on the *resource group or lab*.
When you assign the Lab Assistant role on the resource group, the user: -- Can view all labs within the resource group and start, stop, or reset lab virtual machines for each lab.
+- Can view all labs within the resource group and start, stop, or reimage lab virtual machines for each lab.
- Can't delete or make any other changes to the labs. When you assign the Lab Assistant role on the lab, the user: -- Can view the assigned lab and start, stop, or reset lab virtual machines.
+- Can view the assigned lab and start, stop, or reimage lab virtual machines.
- Can't delete or make any other changes to the lab.
- Can't create new labs.
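For illustration, here's a minimal sketch of assigning the Lab Assistant role at the scope of a single lab with the `azure-mgmt-authorization` Python SDK. The IDs and names are placeholders, and the shape of the create parameters can differ between SDK versions, so treat it as an assumption to verify.

```python
# Minimal sketch: assign the Lab Assistant built-in role on one lab. IDs and
# names are placeholders, and the create-parameters shape may differ between
# azure-mgmt-authorization versions; verify against the version you install.
import uuid
from azure.identity import DefaultAzureCredential
from azure.mgmt.authorization import AuthorizationManagementClient

subscription_id = "<subscription-id>"
auth_client = AuthorizationManagementClient(DefaultAzureCredential(), subscription_id)

# Scope of an individual lab resource.
lab_scope = (
    f"/subscriptions/{subscription_id}/resourceGroups/<resource-group>"
    "/providers/Microsoft.LabServices/labs/<lab-name>"
)

# Look up the Lab Assistant role definition by name, then create the assignment.
role_def = next(
    auth_client.role_definitions.list(lab_scope, filter="roleName eq 'Lab Assistant'")
)
auth_client.role_assignments.create(
    lab_scope,
    str(uuid.uuid4()),
    {"role_definition_id": role_def.id, "principal_id": "<user-object-id>"},
)
```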
lab-services Concept Lab Services Supported Networking Scenarios https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/concept-lab-services-supported-networking-scenarios.md
The following table lists common networking scenarios and topologies and their support status in Azure Lab Services:
| Enable distant license server, such as on-premises, cross-region | Yes | Add a [user defined route (UDR)](/azure/virtual-network/virtual-networks-udr-overview) that points to the license server.<br/><br/>If the lab software requires connecting to the license server by its name instead of the IP address, you need to [configure a customer-provided DNS server](/azure/virtual-network/virtual-networks-name-resolution-for-vms-and-role-instances?tabs=redhat#name-resolution-that-uses-your-own-dns-server) or add an entry to the `hosts` file in the lab template.<br/><br/>If multiple services need access to the license server, using them from multiple regions, or if the license server is part of other infrastructure, you can use the [hub-and-spoke Azure networking best practice](/azure/cloud-adoption-framework/ready/azure-best-practices/hub-spoke-network-topology).<br/><br/>A minimal UDR sketch follows this table. |
| Access to on-premises resources, such as a license server | Yes | You can access on-premises resources with these options: <br/>- Configure [Azure ExpressRoute](/azure/expressroute/expressroute-introduction) or create a [site-to-site VPN connection](/azure/vpn-gateway/tutorial-site-to-site-portal) (bridge the networks).<br/>- Add a public IP to your on-premises server with a firewall that only allows incoming connections from Azure Lab Services.<br/><br/>In addition, to reach the on-premises resources from the lab VMs, add a [user defined route (UDR)](/azure/virtual-network/virtual-networks-udr-overview). |
| Use a [hub-and-spoke networking model](/azure/cloud-adoption-framework/ready/azure-best-practices/hub-spoke-network-topology) | Yes | This scenario works as expected with lab plans and advanced networking. <br/><br/>A number of configuration changes aren't supported with Azure Lab Services, such as adding a default route on a route table. Learn about the [unsupported virtual network configuration changes](./how-to-connect-vnet-injection.md#4-optional-update-the-networking-configuration-settings). |
-| Access lab VMs by private IP address (private-only labs) | Not recommended | This scenario is functional, but makes it difficult for lab users to connect to their lab VM. In the Azure Lab Services website, lab users can't identify the private IP address of their lab VM. In addition, the connect button points to the public endpoint of the lab VM. The lab creator needs to provide lab users with the private IP address of their lab VMs. After a VM reset, this private IP address might change.<br/><br/>If you implement this scenario, don't delete the public IP address or load balancer associated with the lab. If those resources are deleted, the lab fails to scale or publish. |
+| Access lab VMs by private IP address (private-only labs) | Not recommended | This scenario is functional, but makes it difficult for lab users to connect to their lab VM. In the Azure Lab Services website, lab users can't identify the private IP address of their lab VM. In addition, the connect button points to the public endpoint of the lab VM. The lab creator needs to provide lab users with the private IP address of their lab VMs. After a VM reimage, this private IP address might change.<br/><br/>If you implement this scenario, don't delete the public IP address or load balancer associated with the lab. If those resources are deleted, the lab fails to scale or publish. |
| Protect on-premises resources with a firewall | Yes | Putting a firewall between the lab VMs and a specific resource is supported. |
| Put lab VMs behind a firewall. For example, for content filtering, security, and more. | No | The typical firewall setup doesn't work with Azure Lab Services, except when connecting to lab VMs by private IP address (see previous scenario).<br/><br/>When you set up the firewall, a default route is added on the route table for the subnet. This default route introduces an asymmetric routing problem, which breaks the RDP/SSH connections to the lab. |
| Use third-party over-the-shoulder monitoring software | Yes | This scenario is supported with advanced networking for lab plans. |
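Referenced from the license server scenario above, here's a minimal sketch of adding a user defined route with the `azure-mgmt-network` Python SDK. The route table name, address prefix, and next hop values are placeholders to adapt to your topology.

```python
# Minimal sketch: add a user defined route (UDR) that sends traffic for a
# distant license server to the next hop that can reach it. All names,
# prefixes, and IP addresses are placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

network_client = NetworkManagementClient(DefaultAzureCredential(), "<subscription-id>")

network_client.routes.begin_create_or_update(
    "<resource-group>",
    "<route-table-name>",
    "to-license-server",
    {
        "address_prefix": "203.0.113.10/32",   # license server address
        "next_hop_type": "VirtualAppliance",   # appliance or gateway that forwards the traffic
        "next_hop_ip_address": "10.0.0.4",
    },
).result()
```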
lab-services How To Access Lab Virtual Machine https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/how-to-access-lab-virtual-machine.md
In addition, you can perform specific actions on the lab VM:
- Start or stop the lab VM: learn more about [starting and stopping a lab VM](#start-or-stop-the-lab-vm). - Connect to the lab VM: select the computer icon to connect to the lab VM with remote desktop or SSH. Learn more about [connecting to the lab VM](./connect-virtual-machine.md).-- Reset or troubleshoot the lab VM: learn more how you [reset or troubleshoot the lab VM](./how-to-reset-and-redeploy-vm.md) when you experience problems.
+- Redeploy or reimage the lab VM: learn more about how to [redeploy or reimage the lab VM](./how-to-reset-and-redeploy-vm.md) when you experience problems.
## View quota hours
Learn more about how to [connect to a lab VM](connect-virtual-machine.md).
## Next steps - Learn how to [change your lab VM password](./how-to-set-virtual-machine-passwords-student.md)-- Learn how to [reset or troubleshoot your lab VM](./how-to-reset-and-redeploy-vm.md)
+- Learn how to [redeploy or reimage your lab VM](./how-to-reset-and-redeploy-vm.md)
- Learn about [key concepts in Azure Lab Services](./classroom-labs-concepts.md), such as quota hours or lab schedules.
lab-services How To Manage Lab Users https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/how-to-manage-lab-users.md
To view the list of lab users that have already registered for the lab by using
The list shows the lab users with their registration status. The user status should show **Registered**, and their name should also be available after registration.

> [!NOTE]
- > If you [republish a lab](how-to-create-manage-template.md#publish-the-template-vm) or [Reset VMs](how-to-manage-vm-pool.md#reset-lab-vms), the users remain registered for the labs' VMs. However, the contents of the VMs will be deleted and the VMs will be recreated with the template VM's image.
+ > If you [republish a lab](how-to-create-manage-template.md#publish-the-template-vm) or [Reimage VMs](how-to-manage-vm-pool.md#reimage-lab-vms), the users remain registered for the labs' VMs. However, the contents of the VMs will be deleted and the VMs will be recreated with the template VM's image.
# [Azure AD group](#tab/aad)
lab-services