Updates from: 09/30/2023 01:13:47
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Whats New Docs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/whats-new-docs.md
Title: "What's new in Azure Active Directory business-to-customer (B2C)" description: "New and updated documentation for the Azure Active Directory business-to-customer (B2C)." Previously updated : 09/01/2023 Last updated : 09/29/2023
Welcome to what's new in Azure Active Directory B2C documentation. This article lists new docs that have been added and those that have had significant updates in the last three months. To learn what's new with the B2C service, see [What's new in Azure Active Directory](../active-directory/fundamentals/whats-new.md) and [Azure AD B2C developer release notes](custom-policy-developer-notes.md).
+## September 2023
+
+This month, we renamed Azure Active Directory (Azure AD) to Microsoft Entra ID. For more information about the rebranding, see the [New name for Azure Active Directory](/azure/active-directory/fundamentals/new-name) article.
+
+### Updated articles
+
+- [Supported Microsoft Entra features](supported-azure-ad-features.md) - Editorial updates
+- [Publish your Azure Active Directory B2C app to the Microsoft Entra app gallery](publish-app-to-azure-ad-app-gallery.md) - Editorial updates
+- [Secure your API used by an API connector in Azure AD B2C](secure-rest-api.md) - Editorial updates
+- [Azure AD B2C: Frequently asked questions (FAQ)](faq.yml) - Editorial updates
+- [Define an ID token hint technical profile in an Azure Active Directory B2C custom policy](id-token-hint.md) - Editorial updates
+- [Set up sign-in for multi-tenant Microsoft Entra ID using custom policies in Azure Active Directory B2C](identity-provider-azure-ad-multi-tenant.md) - Editorial updates
+- [Set up sign-in for a specific Microsoft Entra organization in Azure Active Directory B2C](identity-provider-azure-ad-single-tenant.md) - Editorial updates
+- [Localization string IDs](localization-string-ids.md) - Editorial updates
+- [Define a Microsoft Entra multifactor authentication technical profile in an Azure AD B2C custom policy](multi-factor-auth-technical-profile.md) - Editorial updates
+- [Define a Microsoft Entra SSPR technical profile in an Azure AD B2C custom policy](aad-sspr-technical-profile.md) - Editorial updates
+- [Define a Microsoft Entra technical profile in an Azure Active Directory B2C custom policy](active-directory-technical-profile.md) - Editorial updates
+- [Monitor Azure AD B2C with Azure Monitor](azure-monitor.md) - Editorial updates
+- [Billing model for Azure Active Directory B2C](billing.md) - Editorial updates
+- [Enable custom domains for Azure Active Directory B2C](custom-domain.md) - Editorial updates
+- [Set up a sign-up and sign-in flow with a social account by using Azure Active Directory B2C custom policy](custom-policies-series-sign-up-or-sign-in-federation.md) - Editorial updates
+- [Create and read a user account by using Azure Active Directory B2C custom policy](custom-policies-series-store-user.md) - Editorial updates
+
## August 2023

### Updated articles
Welcome to what's new in Azure Active Directory B2C documentation. This article
### Updated articles

- [Tutorial: Create an Azure Active Directory B2C tenant](tutorial-create-tenant.md) - [Azure AD B2C] Azure AD B2C Go-Local opt-in feature
-- [Tutorial: Configure security analytics for Azure Active Directory B2C data with Microsoft Sentinel](configure-security-analytics-sentinel.md) - Removing product name from filename and links.
-- [Tutorial: Configure Azure Active Directory B2C with Azure Web Application Firewall](partner-web-application-firewall.md) - Removing product name from filename and links.
-- [Title not found in: #240919](./external-identities-videos.md) - Delete azure-ad-external-identities-videos.md
-- [Build a global identity solution with funnel-based approach](b2c-global-identity-funnel-based-design.md) - Removing product name from filename and links.
-- [Azure Active Directory B2C global identity framework proof of concept for funnel-based configuration](b2c-global-identity-proof-of-concept-funnel.md) - Removing product name from filename and links.
-- [Azure Active Directory B2C global identity framework proof of concept for region-based configuration](b2c-global-identity-proof-of-concept-regional.md) - Removing product name from filename and links.
-- [Build a global identity solution with region-based approach](b2c-global-identity-region-based-design.md) - Removing product name from filename and links.
-- [Azure Active Directory B2C global identity framework](b2c-global-identity-solutions.md) - Removing product name from filename and links.
-- [Azure Active Directory B2C: What's new](whats-new-docs.md) - [Azure AD B2C] What is new May 2023
+- [Tutorial: Configure security analytics for Azure Active Directory B2C data with Microsoft Sentinel](configure-security-analytics-sentinel.md) - Removing product name from filename and links
+- [Tutorial: Configure Azure Active Directory B2C with Azure Web Application Firewall](partner-web-application-firewall.md) - Removing product name from filename and links
+- [Build a global identity solution with funnel-based approach](b2c-global-identity-funnel-based-design.md) - Removing product name from filename and links
+- [Azure Active Directory B2C global identity framework proof of concept for funnel-based configuration](b2c-global-identity-proof-of-concept-funnel.md) - Removing product name from filename and links
+- [Azure Active Directory B2C global identity framework proof of concept for region-based configuration](b2c-global-identity-proof-of-concept-regional.md) - Removing product name from filename and links
+- [Build a global identity solution with region-based approach](b2c-global-identity-region-based-design.md) - Removing product name from filename and links
+- [Azure Active Directory B2C global identity framework](b2c-global-identity-solutions.md) - Removing product name from filename and links
- [Use the Azure portal to create and delete consumer users in Azure AD B2C](manage-users-portal.md) - [Azure AD B2C] Revoke user's session
- [Monitor Azure AD B2C with Azure Monitor](azure-monitor.md) - Added steps to disable Azure Monitor
-## May 2023
-
-### New articles
-- [How to secure your Azure Active Directory B2C identity solution](security-architecture.md)
-### Updated articles
-- [Configure Azure Active Directory B2C with Akamai Web Application Protector](partner-akamai.md)
-- [Configure Asignio with Azure Active Directory B2C for multifactor authentication](partner-asignio.md)
-- [Configure xID with Azure Active Directory B2C for passwordless authentication](partner-xid.md)
-- [Configure WhoIAM Rampart with Azure Active Directory B2C](partner-whoiam-rampart.md)
-- [Build a global identity solution with funnel-based approach](./b2c-global-identity-funnel-based-design.md)
-- [Use the Azure portal to create and delete consumer users in Azure AD B2C](manage-users-portal.md)
active-directory Concept Authentication Default Enablement https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-authentication-default-enablement.md
Previously updated : 09/15/2023 Last updated : 09/29/2023 -+
The following table lists each setting that can be set to Microsoft managed and
| Setting | Configuration |
|---------|---------------|
-| [Registration campaign](how-to-mfa-registration-campaign.md) | From Sept 25 to Oct 20, 2023, the Microsoft managed value for the registration campaign will change to Enabled for text message and voice call users across all tenants. |
+| [Registration campaign](how-to-mfa-registration-campaign.md) | From Sept. 25 to Oct. 20, 2023, the Microsoft managed value for the registration campaign will change to Enabled for text message and voice call users across all tenants. |
| [Location in Microsoft Authenticator notifications](how-to-mfa-additional-context.md) | Disabled |
| [Application name in Microsoft Authenticator notifications](how-to-mfa-additional-context.md) | Disabled |
| [System-preferred MFA](concept-system-preferred-multifactor-authentication.md) | Enabled |
active-directory Concept Authentication Methods Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-authentication-methods-manage.md
Previously updated : 09/15/2023 Last updated : 09/24/2023
Only the [converged registration experience](concept-registration-mfa-sspr-combi
Two other policies, located in **Multifactor authentication** settings and **Password reset** settings, provide a legacy way to manage some authentication methods for all users in the tenant. You can't control who uses an enabled authentication method, or how the method can be used. A [Global Administrator](../roles/permissions-reference.md#global-administrator) is needed to manage these policies.

>[!Important]
->In March 2023, we announced the deprecation of managing authentication methods in the legacy multifactor authentication and self-service password reset (SSPR) policies. Beginning September 30, 2024, authentication methods can't be managed in these legacy MFA and SSPR policies. We recommend customers use the manual migration control to migrate to the Authentication methods policy by the deprecation date.
+>In March 2023, we announced the deprecation of managing authentication methods in the legacy multifactor authentication and self-service password reset (SSPR) policies. Beginning September 30, 2025, authentication methods can't be managed in these legacy MFA and SSPR policies. We recommend customers use the manual migration control to migrate to the Authentication methods policy by the deprecation date.
To manage the legacy MFA policy, select **Security** > **Multifactor authentication** > **Additional cloud-based multifactor authentication settings**.
active-directory How To Authentication Methods Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/how-to-authentication-methods-manage.md
Previously updated : 09/13/2023 Last updated : 09/24/2023
After you capture available authentication methods from the policies you're curr
You'll want to set this option before you make any changes as it will apply your new policy to both sign-in and password reset scenarios.

The next step is to update the Authentication methods policy to match your audit. You'll want to review each method one-by-one. If your tenant is only using the legacy MFA policy, and isn't using SSPR, the update is straightforward - you can enable each method for all users and precisely match your existing policy.
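One way to review the current state of each method programmatically is a quick read of the Authentication methods policy through Microsoft Graph. This is a minimal sketch, assuming the Microsoft Graph PowerShell SDK is installed and you hold a role that can read authentication method policies:

```powershell
# Sketch: list each authentication method configuration and its current state.
Connect-MgGraph -Scopes "Policy.Read.All"
$policy = Invoke-MgGraphRequest -Method GET `
    -Uri "https://graph.microsoft.com/v1.0/policies/authenticationMethodsPolicy"
$policy.authenticationMethodConfigurations |
    ForEach-Object { "{0}: {1}" -f $_.id, $_.state }
```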
active-directory Howto Authentication Passwordless Faqs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-authentication-passwordless-faqs.md
Microsoft Entra ID combines the encrypted client key and message buffer into the
| tgt_key_type | int | The on-premises AD DS key type used for both the client key and the Kerberos session key included in the KERB_MESSAGE_BUFFER. |
| tgt_message_buffer | string | Base64 encoded KERB_MESSAGE_BUFFER. |
+### Do users need to be a member of the Domain Users Active Directory group?
+Yes. A user must be in the Domain Users group to be able to sign in using Azure AD Kerberos.
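To verify this from on-premises Active Directory, here's a small sketch using the ActiveDirectory PowerShell module; the account name `alice` is a placeholder:

```powershell
# Sketch: confirm a user's primary group is Domain Users.
Import-Module ActiveDirectory
Get-ADUser -Identity 'alice' -Properties PrimaryGroup |   # 'alice' is a placeholder account
    Select-Object SamAccountName, PrimaryGroup
```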
+
## Next steps

To get started with FIDO2 security keys and hybrid access to on-premises resources, see the following articles:
active-directory Howto Authentication Sms Signin https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-authentication-sms-signin.md
First, let's enable SMS-based authentication for your Microsoft Entra tenant.
1. Click **Enable** and select **Target users**. You can choose to enable SMS-based authentication for *All users* or *Select users* and groups.
+ > [!NOTE]
+ > To configure SMS-based authentication as a first factor (that is, to allow users to sign in with this method), check the **Use for sign-in** checkbox. Leaving it unchecked makes SMS-based authentication available only for multifactor authentication and self-service password reset.
![Enable SMS authentication in the authentication method policy window](./media/howto-authentication-sms-signin/enable-sms-authentication-method.png)
active-directory Reference Error Codes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/reference-error-codes.md
The `error` field has several possible values - review the protocol documentatio
| AADSTS70016 | AuthorizationPending - OAuth 2.0 device flow error. Authorization is pending. The device will retry polling the request. |
| AADSTS70018 | BadVerificationCode - Invalid verification code due to User typing in wrong user code for device code flow. Authorization isn't approved. |
| AADSTS70019 | CodeExpired - Verification code expired. Have the user retry the sign-in. |
-| AADSTS70043 | The refresh token has expired or is invalid due to sign-in frequency checks by Conditional Access. The token was issued on {issueDate} and the maximum allowed lifetime for this request is {time}. |
+| AADSTS70043 | BadTokenDueToSignInFrequency - The refresh token has expired or is invalid due to sign-in frequency checks by Conditional Access. The token was issued on {issueDate} and the maximum allowed lifetime for this request is {time}. |
| AADSTS75001 | BindingSerializationError - An error occurred during SAML message binding. |
| AADSTS75003 | UnsupportedBindingError - The app returned an error related to unsupported binding (SAML protocol response can't be sent via bindings other than HTTP POST). |
| AADSTS75005 | Saml2MessageInvalid - Microsoft Entra doesn't support the SAML request sent by the app for SSO. To learn more, see the troubleshooting article for error [AADSTS75005](/troubleshoot/azure/active-directory/error-code-aadsts75005-not-a-valid-saml-request). |
active-directory Howto Vm Sign In Azure Ad Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/devices/howto-vm-sign-in-azure-ad-linux.md
az group create --name AzureADLinuxVM --location southcentralus
az vm create \
  --resource-group AzureADLinuxVM \
  --name myVM \
- --image UbuntuLTS \
+ --image Ubuntu2204 \
  --assign-identity \
  --admin-username azureuser \
  --generate-ssh-keys
active-directory Whats New Docs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/customers/whats-new-docs.md
Title: "What's new in Azure Active Directory for customers" description: "New and updated documentation for the Azure Active Directory for customers documentation." Previously updated : 09/01/2023 Last updated : 09/29/2023
Welcome to what's new in Azure Active Directory for customers documentation. This article lists new docs that have been added and those that have had significant updates in the last three months.
+## September 2023
+
+This month, we renamed Azure Active Directory (Azure AD) to Microsoft Entra ID. For more information about the rebranding, see the [New name for Azure Active Directory](/azure/active-directory/fundamentals/new-name) article.
+
+### Updated articles
+
+- [Quickstart: Get started with our guide to run a sample app and sign in your users (preview)](quickstart-get-started-guide.md) - Start the guide updates
+- [Manage Microsoft Entra ID for customers resources with Microsoft Graph](microsoft-graph-operations.md) - Editorial updates
+- [Planning for customer identity and access management (preview)](concept-planning-your-solution.md) - Editorial updates
+- [Create a sign-up and sign-in user flow for customers](how-to-user-flow-sign-up-sign-in-customers.md) - Disable sign-up in a user flow
+
## August 2023

### New articles
Welcome to what's new in Azure Active Directory for customers documentation. Thi
- [Tutorial: Call a web API from your Node.js daemon application](tutorial-daemon-node-call-api-build-app.md) - Editorial review
- [Tutorial: Sign in users to your .NET browserless application](tutorial-browserless-app-dotnet-sign-in-build-app.md) - Editorial review
-## June 2023
-
-### New articles
-- [Quickstart: Create a tenant (preview)](quickstart-tenant-setup.md)
-- [Tutorial: Create a .NET MAUI shell app](tutorial-mobile-app-maui-sign-in-prepare-app.md)
-- [Tutorial: Register and configure .NET MAUI mobile app in a customer tenant](tutorial-mobile-app-maui-sign-in-prepare-tenant.md)
-- [Tutorial: Sign in users in .NET MAUI shell app](tutorial-mobile-app-maui-sign-in-sign-out.md)
-- [Use role-based access control in your Node.js web application](how-to-web-app-role-based-access-control.md)
-- [Tutorial: Handle authentication flows in a React single-page app](./tutorial-single-page-app-react-sign-in-configure-authentication.md)
-- [Tutorial: Create a .NET MAUI app](tutorial-desktop-app-maui-sign-in-prepare-app.md)
-- [Tutorial: Register and configure .NET MAUI app in a customer tenant](tutorial-desktop-app-maui-sign-in-prepare-tenant.md)
-- [Tutorial: Sign in users in .NET MAUI app](tutorial-desktop-app-maui-sign-in-sign-out.md)
-### Updated articles
-- [What is Microsoft Entra ID for customers?](overview-customers-ciam.md) - Added a section regarding Azure AD B2C to the overview and emphasized tenant creation when getting started
-- [Add user attributes to token claims](how-to-add-attributes-to-token.md) - Added attributes to token claims: fixed steps for updating the app manifest
-- [Tutorial: Prepare a React single-page app (SPA) for authentication in a customer tenant](./tutorial-single-page-app-react-sign-in-prepare-app.md) - JavaScript tutorial edits, code sample updates and fixed SPA aligning content styling
-- [Tutorial: Add sign-in and sign-out to a React single-page app (SPA) for a customer tenant](./tutorial-single-page-app-react-sign-in-sign-out.md) - JavaScript tutorial edits and fixed SPA aligning content styling
-- [Tutorial: Handle authentication flows in a Vanilla JavaScript single-page app](tutorial-single-page-app-vanillajs-configure-authentication.md) - Fixed SPA aligning content styling
-- [Tutorial: Prepare a Vanilla JavaScript single-page app for authentication in a customer tenant](tutorial-single-page-app-vanillajs-prepare-app.md) - Fixed SPA aligning content styling
-- [Tutorial: Prepare your customer tenant to authenticate a Vanilla JavaScript single-page app](tutorial-single-page-app-vanillajs-prepare-tenant.md) - Fixed SPA aligning content styling
-- [Tutorial: Add sign-in and sign-out to a Vanilla JavaScript single-page app for a customer tenant](tutorial-single-page-app-vanillajs-sign-in-sign-out.md) - Fixed SPA aligning content styling
-- [Tutorial: Prepare your customer tenant to authenticate users in a React single-page app (SPA)](tutorial-single-page-app-react-sign-in-prepare-tenant.md) - Fixed SPA aligning content styling
-- [Tutorial: Prepare an ASP.NET web app for authentication in a customer tenant](tutorial-web-app-dotnet-sign-in-prepare-app.md) - ASP.NET web app fixes
-- [Tutorial: Prepare your customer tenant to authenticate users in an ASP.NET web app](tutorial-web-app-dotnet-sign-in-prepare-tenant.md) - ASP.NET web app fixes
-- [Tutorial: Add sign-in and sign-out to an ASP.NET web application for a customer tenant](tutorial-web-app-dotnet-sign-in-sign-out.md) - ASP.NET web app fixes
-- [Collect user attributes during sign-up](how-to-define-custom-attributes.md) - Added a step for the Show more attributes pane and custom attributes
-- [Manage Azure Active Directory for customers resources with Microsoft Graph](microsoft-graph-operations.md) - Combined Graph API references into one doc
active-directory Whats New Docs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/whats-new-docs.md
Title: "What's new in Azure Active Directory External Identities" description: "New and updated documentation for the Azure Active Directory External Identities." Previously updated : 09/01/2023 Last updated : 09/29/2023
Welcome to what's new in Azure Active Directory External Identities documentation. This article lists new docs that have been added and those that have had significant updates in the last three months. To learn what's new with the External Identities service, see [What's new in Azure Active Directory](../fundamentals/whats-new.md).
+## September 2023
+
+This month, we renamed Azure Active Directory (Azure AD) to Microsoft Entra ID. For more information about the rebranding, see the [New name for Azure Active Directory](/azure/active-directory/fundamentals/new-name) article.
+
+### Updated articles
+
+- [Quickstart: Add a guest user with PowerShell](b2b-quickstart-invite-powershell.md) - Editorial updates
+- [Federation with SAML/WS-Fed identity providers for guest users](direct-federation.md) - Editorial updates
+- [Overview of Microsoft Entra External ID](external-identities-overview.md) - Editorial updates
+- [Billing model for Microsoft Entra External ID](external-identities-pricing.md) - Editorial updates
+- [Microsoft Entra B2B collaboration FAQs](faq.yml) - Editorial updates
+- [Grant Microsoft Entra B2B users access to your on-premises applications](hybrid-cloud-to-on-premises.md) - Editorial updates
+- [Grant locally managed partner accounts access to cloud resources using Microsoft Entra B2B collaboration](hybrid-on-premises-to-cloud.md) - Editorial updates
+- [Microsoft Entra B2B collaboration for hybrid organizations](hybrid-organizations.md) - Editorial updates
+- [Microsoft Entra B2B collaboration invitation redemption](redemption-experience.md) - Editorial updates
+- [Self-service for Microsoft Entra B2B collaboration sign-up](self-service-portal.md) - Editorial updates
+- [Add a custom approval workflow to self-service sign-up](self-service-sign-up-add-approvals.md) - Editorial updates
+- [Set up tenant restrictions v2](tenant-restrictions-v2.md) - Feature availability updates
+- [Troubleshooting Microsoft Entra B2B collaboration](troubleshoot.md) - Editorial updates
+- [Properties of a Microsoft Entra B2B collaboration user](user-properties.md) - Editorial updates
+- [B2B collaboration overview](what-is-b2b.md) - Editorial updates
+- [Add Microsoft Entra ID as an identity provider for External ID](default-account.md) - Editorial updates
+- [Add a self-service sign-up user flow to an app](self-service-sign-up-user-flow.md) - Editorial updates
+- [Add Microsoft Entra B2B collaboration users in the Microsoft Entra admin center](add-users-administrator.md) - Editorial updates
+- [Tutorial: Enforce multifactor authentication for B2B guest users](b2b-tutorial-require-mfa.md) - Editorial updates
+- [Configure cross-tenant access settings for B2B collaboration](cross-tenant-access-settings-b2b-collaboration.md) - Editorial updates
+- [Configure cross-tenant access settings for B2B direct connect](cross-tenant-access-settings-b2b-direct-connect.md) - Editorial updates
+- [Add Facebook as an identity provider for External Identities](facebook-federation.md) - Editorial updates
+- [Add Google as an identity provider for B2B guest users](google-federation.md) - Editorial updates
+
## August 2023

### Updated articles
Welcome to what's new in Azure Active Directory External Identities documentatio
- [Cross-tenant access overview](cross-tenant-access-overview.md) - New storage model update
- [Cross-tenant access settings](cross-tenant-access-settings-b2b-collaboration.md) - New storage model update
- [Configure B2B direct connect](cross-tenant-access-settings-b2b-direct-connect.md) - New storage model update
+
## July 2023

### New article
Welcome to what's new in Azure Active Directory External Identities documentatio
### Updated articles

- [Bulk invite users via PowerShell](bulk-invite-powershell.md) - Editorial and link updates
-- [Enforce multi-factor authentication for B2B guest users](b2b-tutorial-require-mfa.md) - Text corrections and screenshot updates
+- [Enforce multifactor authentication for B2B guest users](b2b-tutorial-require-mfa.md) - Text corrections and screenshot updates
- [Invite internal users to B2B](invite-internal-users.md) - Text corrections and screenshot updates
- [Grant B2B users access to local apps](hybrid-cloud-to-on-premises.md) - Text corrections
- [Tenant restrictions V2](tenant-restrictions-v2.md) - Note update
- [Leave an organization](leave-the-organization.md) - Screenshot update
- [Use audit logs and access reviews](auditing-and-reporting.md) - B2B sponsors feature update
-## June 2023
-
-### Updated articles
-- [Set up tenant restrictions V2 (Preview)](tenant-restrictions-v2.md) - Microsoft Teams updates
-- [Invite guest users to an app](add-users-information-worker.md) - Link and structure updates
active-directory Concept Group Based Licensing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/concept-group-based-licensing.md
+
+ Title: What is group-based licensing
+description: Learn about Microsoft Entra group-based licensing, including how it works, key features, and best practices.
+
+keywords: Azure AD licensing
+Last updated : 09/28/2023
+# Customer intent: As an IT admin, I want to understand group-based licensing, so I can effectively assign licenses to users in my organization.
++
+# What is group-based licensing in Microsoft Entra ID?
+
+Microsoft paid cloud services, such as Microsoft 365, Enterprise Mobility + Security, Dynamics 365, and other similar products, require licenses. These licenses are assigned to each user who needs access to these services. To manage licenses, administrators use one of the management portals (Office or Azure) and PowerShell cmdlets. Microsoft Entra ID is the underlying infrastructure that supports identity management for all Microsoft cloud services. Microsoft Entra ID stores information about license assignment states for users.
+
+Microsoft Entra ID includes group-based licensing, which allows you to assign one or more product licenses to a group. Microsoft Entra ID ensures that the licenses are assigned to all members of the group. Any new members who join the group are assigned the appropriate licenses. When they leave the group, those licenses are removed. This licensing management eliminates the need for automating license management via PowerShell to reflect changes in the organization and departmental structure on a per-user basis.
+
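As an illustration of the model, the following is a minimal sketch (not from the article) that assigns a product license to a group with the Microsoft Graph PowerShell SDK; the SKU part number and group name are examples:

```powershell
# Sketch: assign a license to a group; every member then inherits it.
Connect-MgGraph -Scopes "Group.ReadWrite.All", "Organization.Read.All"
$sku   = Get-MgSubscribedSku | Where-Object SkuPartNumber -eq 'ENTERPRISEPREMIUM'   # example SKU
$group = Get-MgGroup -Filter "displayName eq 'Sales Team'"                          # example group
Set-MgGroupLicense -GroupId $group.Id `
    -AddLicenses @(@{ SkuId = $sku.SkuId }) -RemoveLicenses @()
```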
+## Licensing requirements
+
+You must have one of the following licenses **for every user who benefits from** group-based licensing:
+
+- Paid or trial subscription for Microsoft Entra ID P1 and above
+
+- Paid or trial edition of Microsoft 365 Business Premium, Office 365 Enterprise E3, Office 365 A3, Office 365 GCC G3, Office 365 E3 for GCCH, or Office 365 E3 for DOD, and above
+
+### Required number of licenses
+
+For any groups assigned a license, you must also have a license for each unique member. While you don't have to assign each member of the group a license, you must have at least enough licenses to include all of the members. For example, if you have 1,000 unique members who are part of licensed groups in your tenant, you must have at least 1,000 licenses to meet the licensing agreement.
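To estimate that number, a rough sketch like the following (group IDs are placeholders) counts unique members across your licensed groups:

```powershell
# Sketch: count unique members across licensed groups to size your license need.
Connect-MgGraph -Scopes "GroupMember.Read.All"
$licensedGroupIds = @('<group-object-id-1>', '<group-object-id-2>')   # placeholder IDs
$uniqueMemberIds = $licensedGroupIds |
    ForEach-Object { Get-MgGroupMember -GroupId $_ -All } |
    Select-Object -ExpandProperty Id -Unique
"At least $($uniqueMemberIds.Count) licenses are required."
```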
+
+## Features
+
+Here are the main features of group-based licensing:
+
+- Licenses can be assigned to any security group in Microsoft Entra ID. Security groups can be synced from on-premises, by using [Microsoft Entra Connect](../hybrid/connect/whatis-azure-ad-connect.md). You can also create security groups directly in Microsoft Entra ID (also called cloud-only groups), or automatically via the [Microsoft Entra dynamic group feature](../enterprise-users/groups-create-rule.md).
+
+- When a product license is assigned to a group, the administrator can disable one or more service plans in the product. Typically, this assignment is done when the organization is not yet ready to start using a service included in a product. For example, the administrator might assign Microsoft 365 to a department, but temporarily disable the Yammer service (a sketch follows this list).
+
+- All Microsoft cloud services that require user-level licensing are supported. This support includes all Microsoft 365 products, Enterprise Mobility + Security, and Dynamics 365.
+
+- Group-based licensing is currently available through the [Azure portal](https://portal.azure.com) and through the [Microsoft Admin center](https://admin.microsoft.com/).
+
+- Microsoft Entra ID automatically manages license modifications that result from group membership changes. Typically, license modifications are effective within minutes of a membership change.
+
+- A user can be a member of multiple groups with license policies specified. A user can also have some licenses that were directly assigned, outside of any groups. The resulting user state is a combination of all assigned product and service licenses. If a user is assigned the same license from multiple sources, the license will be consumed only once.
+
+- In some cases, licenses can't be assigned to a user. For example, there might not be enough available licenses in the tenant, or conflicting services might have been assigned at the same time. Administrators have access to information about users for whom Microsoft Entra ID couldn't fully process group licenses. They can then take corrective action based on that information.
+
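Continuing the Yammer example above, here's a hedged sketch of disabling a single service plan while assigning a license to a group; the SKU and plan names are examples that vary by tenant:

```powershell
# Sketch: assign Office 365 E3 to a group but disable the Yammer service plan.
Connect-MgGraph -Scopes "Group.ReadWrite.All", "Organization.Read.All"
$sku        = Get-MgSubscribedSku | Where-Object SkuPartNumber -eq 'ENTERPRISEPACK'   # example SKU
$yammerPlan = $sku.ServicePlans | Where-Object ServicePlanName -eq 'YAMMER_ENTERPRISE'
Set-MgGroupLicense -GroupId '<group-object-id>' -RemoveLicenses @() -AddLicenses @(
    @{ SkuId = $sku.SkuId; DisabledPlans = @($yammerPlan.ServicePlanId) }
)
```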
+## Your feedback is welcome!
+
+If you have feedback or feature requests, share them with us using [the Microsoft Entra admin forum](https://feedback.azure.com/d365community/forum/22920db1-ad25-ec11-b6e6-000d3a4f0789).
+
+## Next steps
+
+To learn more about other scenarios for license management through group-based licensing, see:
+
+* [Assigning licenses to a group in Microsoft Entra ID](../enterprise-users/licensing-groups-assign.md)
+* [Identifying and resolving license problems for a group in Microsoft Entra ID](../enterprise-users/licensing-groups-resolve-problems.md)
+* [How to migrate individual licensed users to group-based licensing in Microsoft Entra ID](../enterprise-users/licensing-groups-migrate-users.md)
+* [How to migrate users between product licenses using group-based licensing in Microsoft Entra ID](../enterprise-users/licensing-groups-change-licenses.md)
+* [Microsoft Entra group-based licensing additional scenarios](../enterprise-users/licensing-group-advanced.md)
+* [PowerShell examples for group-based licensing in Microsoft Entra ID](../enterprise-users/licensing-ps-examples.md)
active-directory How To Manage Groups https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/how-to-manage-groups.md
Title: How to manage groups
-description: Instructions about how to manage Microsoft Entra groups and group membership.
+description: Instructions about how to create and update Microsoft Entra groups, such as membership and settings.
Last updated 09/12/2023 +
+# Customer Intent: As an IT admin, I want to learn how to create groups, add members, and adjust settings so that I can grant the right access to the right services for the right people.
+
# Manage Microsoft Entra groups and group membership
To create a basic group and add members:
1. Enter a **Group name.** Choose a name that you'll remember and that makes sense for the group. A check will be performed to determine if the name is already in use. If the name is already in use, you'll be asked to change the name of your group.
+ - The name of the group can't start with a space. Starting the name with a space prevents the group from appearing as an option for steps such as adding role assignments to group members.
+
1. **Group email address**: Only available for Microsoft 365 group types. Enter an email address manually or use the email address built from the Group name you provided.

1. **Group description.** Add an optional description to your group.
You can remove an existing Security group from another Security group; however,
You can delete a group for any number of reasons, but typically it will be because you:

-- Chose the incorrect **Group type** option.
+- Choose the incorrect **Group type** option.
- Created a duplicate group by mistake.
- No longer need the group.
active-directory How To Rename Azure Ad https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/how-to-rename-azure-ad.md
- Last updated 09/27/2023
# Customer intent: As a content creator, employee of an organization with internal documentation for IT or identity security admins, developer of Azure AD-enabled apps, ISV, or Microsoft partner, I want to learn how to correctly update our documentation or content to use the new name for Azure AD.
+
# How to: Rename Azure AD

Azure Active Directory (Azure AD) is being renamed to Microsoft Entra ID to better communicate the multicloud, multiplatform functionality of the product and unify the naming of the Microsoft Entra product family.
This article provides best practices and support for customers and organizations
## Prerequisites
-Before changing instances of Azure AD in your documentation or content, familiarize yourself with the guidance in [New name for Azure AD](new-name.md) to:
+Before changing instances of Azure AD in your documentation or content, familiarize yourself with the guidance in [New name for Azure AD](./new-name.md) to:
- Understand the product name and why we made the change - Download the new product icon
Update your organization's content and experiences using the relevant tools.

Use the following criteria to determine what change(s) you need to make to instances of `Azure Active Directory (Azure AD)`, `Azure Active Directory`, `Azure AD`, `AAD`.

1. If the text string is found in the naming dictionary of previous terms, change it to the new term.
Use the following criteria to determine what change(s) you need to make to instances of `Azure Active Directory (Azure AD)`, `Azure Active Directory`, `Azure AD`, `AAD`. 1. If the text string is found in the naming dictionary of previous terms, change it to the new term.
-1. If a punctuation mark follows "Azure Active Directory (Azure AD), Azure Active Directory, Azure AD, AAD," replace with 'Microsoft Entra ID' because that's the product name.
-1. If `Azure Active Directory (Azure AD)`, `Azure Active Directory`, `Azure AD`, `AAD` is followed by `for`, `Premium`, `Plan`, `P1`, or `P2`, replace with `Microsoft Entra ID` because it refers to a SKU name or Service Plan.
+1. If a punctuation mark follows `Azure Active Directory (Azure AD)`, `Azure Active Directory`, `Azure AD`, or `AAD`, replace with `Microsoft Entra ID` because that's the product name.
+1. If `Azure Active Directory (Azure AD)`, `Azure Active Directory`, `Azure AD`, or `AAD` is followed by `for`, `Premium`, `Plan`, `P1`, or `P2`, replace with `Microsoft Entra ID` because it refers to a SKU name or Service Plan.
1. If an article (`a`, `an`, `the`) or possessive (`your`, `your organization's`) precedes (`Azure Active Directory (Azure AD)`, `Azure Active Directory`, `Azure AD`, `AAD`), then replace with `Microsoft Entra` because it's a feature name. For example:
   1. "an Azure AD tenant" becomes "a Microsoft Entra tenant"
   1. "your organization's Azure AD tenant" becomes "your Microsoft Entra tenant"
-1. If `Azure Active Directory (Azure AD)`, `Azure Active Directory`, `Azure AD`, `AAD` is followed by an adjective or noun not in the previous steps, then replace with `Microsoft Entra` because it's a feature name. For example,"Azure AD Conditional Access" becomes "Microsoft Entra Conditional Access," while "Azure AD tenant" becomes "Microsoft Entra tenant."
-1. Otherwise, replace `Azure Active Directory (Azure AD)`, `Azure Active Directory`, `Azure AD`, `AAD` with `Microsoft Entra ID`
+1. If `Azure Active Directory (Azure AD)`, `Azure Active Directory`, `Azure AD`, or `AAD` is followed by an adjective or noun not in the previous steps, then replace with `Microsoft Entra` because it's a feature name. For example, `Azure AD Conditional Access` becomes `Microsoft Entra Conditional Access`, while `Azure AD tenant` becomes `Microsoft Entra tenant`.
+1. Otherwise, replace `Azure Active Directory (Azure AD)`, `Azure Active Directory`, `Azure AD`, or `AAD` with `Microsoft Entra ID`.
See the section [Glossary of updated terminology](new-name.md#glossary-of-updated-terminology) to further refine your custom logic.

### Update graphics and icons

1. Replace the Azure AD icon with the Microsoft Entra ID icon.
-1. Replace titles or text containing `Azure Active Directory (Azure AD)`, `Azure Active Directory`, `Azure AD`, `AAD` with `Microsoft Entra ID`.
+1. Replace titles or text containing `Azure Active Directory (Azure AD)`, `Azure Active Directory`, `Azure AD`, or `AAD` with `Microsoft Entra ID`.
## Sample PowerShell script

You can use the following PowerShell script as a baseline to rename Azure AD references in your documentation or content. This code sample:

-- Scans .resx files within a specified folder and all nested folders.
+- Scans `.resx` files within a specified folder and all nested folders.
- Edits files by replacing any references to `Azure Active Directory (Azure AD)`, `Azure Active Directory`, `Azure AD`, `AAD` with the correct terminology according to [New name for Azure AD](new-name.md).

Edit the baseline script according to your needs and the scope of files you need to update. You may need to account for edge cases and modify the script according to how you've defined the messages in your source files. The script is not fully automated. If you use the script as-is, you must review the outputs and may need to make additional adjustments to follow the guidance in [New name for Azure AD](new-name.md).
$terminology = @(
    @{ Key = 'Azure AD seamless single sign-on'; Value = 'Microsoft Entra seamless single sign-on' },
    @{ Key = 'Azure AD self-service password reset'; Value = 'Microsoft Entra self-service password reset' },
    @{ Key = 'Azure AD SSPR'; Value = 'Microsoft Entra SSPR' },
- @{ Key = 'Azure AD SSPR'; Value = 'Microsoft Entra SSPR' },
    @{ Key = 'Azure AD domain'; Value = 'Microsoft Entra domain' },
    @{ Key = 'Azure AD group'; Value = 'Microsoft Entra group' },
    @{ Key = 'Azure AD login'; Value = 'Microsoft Entra login' },
$postTransforms = @(
    @{ Key = ' an ME-ID'; Value = ' a ME-ID' }
    @{ Key = '>An ME-ID'; Value = '>A ME-ID' }
    @{ Key = 'Microsoft Entra ID administration portal'; Value = 'Microsoft Entra administration portal' }
- @{ Key = 'Microsoft Entra IDvanced Threat'; Value = 'Azure Advanced Threat' }
+ @{ Key = 'Microsoft Entra ID Advanced Threat'; Value = 'Azure Advanced Threat' }
    @{ Key = 'Entra ID hybrid join'; Value = 'Entra hybrid join' }
    @{ Key = 'Microsoft Entra ID join'; Value = 'Microsoft Entra join' }
    @{ Key = 'ME-ID join'; Value = 'Microsoft Entra join' }
    @{ Key = 'Microsoft Entra ID service principal'; Value = 'Microsoft Entra service principal' }
- @{ Key = 'DownloMicrosoft Entra Connector'; Value = 'Download connector' }
+ @{ Key = 'Download Microsoft Entra Connector'; Value = 'Download connector' }
    @{ Key = 'Microsoft Microsoft'; Value = 'Microsoft' }
)
$postTransforms = @(
$terminology = $terminology.GetEnumerator() | Sort-Object -Property { $_.Key.Length } -Descending
$postTransforms = $postTransforms.GetEnumerator() | Sort-Object -Property { $_.Key.Length } -Descending
-# Get all resx and resjson files in the current directory and its subdirectories, ignoring .gitignored files.
-Write-Host "Getting all resx and resjson files in the current directory and its subdirectories, ignoring .gitignored files."
+# Get all resx files in the current directory and its subdirectories, ignoring .gitignored files.
+Write-Host "Getting all resx files in the current directory and its subdirectories, ignoring .gitignored files."
$gitIgnoreFiles = Get-ChildItem -Path . -Filter .gitignore -Recurse
-$targetFiles = Get-ChildItem -Path . -Include *.resx, *.resjson -Recurse
+$targetFiles = Get-ChildItem -Path . -Include *.resx -Recurse
$filteredFiles = @()

foreach ($file in $targetFiles) {
foreach ($file in $targetFiles) {
$scriptPath = $MyInvocation.MyCommand.Path
$filteredFiles = $filteredFiles | Where-Object { $_.FullName -ne $scriptPath }
-# This command will get all the files with the extensions .resx and .resjson in the current directory and its subdirectories, and then filter out those that match the patterns in the .gitignore file. The Resolve-Path cmdlet will find the full path of the .gitignore file, and the Get-Content cmdlet will read its content as a single string. The -notmatch operator will compare the full name of each file with the .gitignore content using regular expressions, and return only those that do not match.
+# This command will get all the files with the extension .resx in the current directory and its subdirectories, and then filter out those that match the patterns in the .gitignore file. The Resolve-Path cmdlet will find the full path of the .gitignore file, and the Get-Content cmdlet will read its content as a single string. The -notmatch operator will compare the full name of each file with the .gitignore content using regular expressions, and return only those that do not match.
Write-Host "Found $($filteredFiles.Count) files." function Update-Terminology {
To help your customers with the transition, it's helpful to add a note: "Azure A
## Next steps

-- [Stay up-to-date with what's new in Azure AD/Microsoft Entra ID](whats-new.md)
+- [Stay up-to-date with what's new in Microsoft Entra ID (formerly Azure AD)](./whats-new.md)
- [Get started using Microsoft Entra ID at the Microsoft Entra admin center](https://entra.microsoft.com/)
-- [Learn more about Microsoft Entra with content from Microsoft Learn](/entra)
+- [Learn more about Microsoft Entra ID with content from Microsoft Learn](/entra)
+
+<!-- docutune:ignore "Azure Active Directory" "Azure AD" "AAD" -->
active-directory New Name https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/new-name.md
-+ Last updated 09/27/2023
# New name for Azure Active Directory
-To communicate the multicloud, multiplatform functionality of the products, alleviate confusion with Windows Server Active Directory, and unify the [Microsoft Entra](/entra) product family, the new name for Azure Active Directory (Azure AD) is Microsoft Entra ID.
+To communicate the multicloud, multiplatform functionality of the products, alleviate confusion with Windows Server Active Directory, and unify the [Microsoft Entra](/entra) product family, the new name for Azure Active Directory (Azure AD) is Microsoft Entra ID.
## No interruptions to usage or service
The Microsoft Entra ID name more accurately represents the multicloud and multip
### What is Microsoft Entra?
-Microsoft Entra helps you protect all identities and secure network access everywhere. The expanded product family includes:
+The Microsoft Entra product family helps you protect all identities and secure network access everywhere. The expanded product family includes:
| Identity and access management | New identity categories | Network access |
|---|---|---|
There are no changes to the identity features and functionality available in Mic
### What's changing for Microsoft 365 E5?
-In addition to the capabilities they already have, Microsoft 365 E5 customers also get access to new identity protection capabilities like token protection, Conditional Access based on GPS-based location and step-up authentication for the most sensitive actions. Microsoft 365 E5 includes Microsoft Entra P2, currently known as Azure AD Premium P2.
+In addition to the capabilities they already have, Microsoft 365 E5 customers also get access to new identity protection capabilities like token protection, Conditional Access based on GPS-based location and step-up authentication for the most sensitive actions. Microsoft 365 E5 includes Microsoft Entra ID P2, currently known as Azure AD Premium P2.
### What's changing for identity developer and devops experience?
Only official product names are capitalized, plus Conditional Access and My * ap
## Next steps

- [How to: Rename Azure AD](how-to-rename-azure-ad.md)
-- [Stay up-to-date with what's new in Azure AD/Microsoft Entra ID](whats-new.md)
+- [Stay up-to-date with what's new in Microsoft Entra ID (formerly Azure AD)](./whats-new.md)
- [Get started using Microsoft Entra ID at the Microsoft Entra admin center](https://entra.microsoft.com/)
-- [Learn more about Microsoft Entra with content from Microsoft Learn](/entra)
+- [Learn more about the Microsoft Entra family with content from Microsoft Learn](/entra)
+
+<!-- docutune:ignore "Azure Active Directory" "Azure AD" "AAD" "Entra ID" "Cloud Knox" "Identity Governance" -->
active-directory Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/whats-new.md
Azure AD receives improvements on an ongoing basis. To stay up to date with the
This page updates monthly, so revisit it regularly. If you're looking for items older than six months, you can find them in [Archive for What's new in Azure Active Directory](whats-new-archive.md).
+## September 2023
+
+### Public Preview - Managing and Changing Passwords in My Security Info
+
+**Type:** New feature
+**Service category:** My Profile/Account
+**Product capability:** End User Experiences
+
+The My Security Info management portal ([My Sign-Ins | Security Info | Microsoft.com](https://mysignins.microsoft.com/security-info)) will now support an improved end user experience of managing passwords. Users are able to change their password, and users capable of multifactor authentication (MFA) are able to update their passwords without providing their current password.
+++
+### Public Preview - Device-bound passkeys as an authentication method
+
+**Type:** Changed feature
+**Service category:** Authentications (Logins)
+**Product capability:** User Authentication
+
+Beginning January 2024, Microsoft Entra ID will support [device-bound passkeys](https://passkeys.dev/docs/reference/terms/#device-bound-passkey) stored on computers and mobile devices as an authentication method in preview, in addition to the existing support for FIDO2 security keys. This enables your users to perform phishing-resistant authentication using the devices that they already have.
++
+We'll expand the existing FIDO2 authentication methods policy and end user registration experience to support this preview release. If your organization requires or prefers FIDO2 authentication using physical security keys only, then please enforce key restrictions to only allow security key models that you accept in your FIDO2 policy. Otherwise, the new preview capabilities enable your users to register for device-bound passkeys stored on Windows, macOS, iOS, and Android. Learn more about FIDO2 key restrictions [here](../authentication/howto-authentication-passwordless-security-key.md).
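For tenants that want the security-key-only posture described above, this is a minimal sketch of enforcing an AAGUID allow list on the FIDO2 policy through Microsoft Graph; the AAGUID below is a placeholder:

```powershell
# Sketch: restrict FIDO2 registration to an allow list of security key AAGUIDs.
Connect-MgGraph -Scopes "Policy.ReadWrite.AuthenticationMethod"
$body = @{
    "@odata.type"   = "#microsoft.graph.fido2AuthenticationMethodConfiguration"
    keyRestrictions = @{
        isEnforced      = $true
        enforcementType = "allow"
        aaGuids         = @("00000000-0000-0000-0000-000000000000")   # placeholder AAGUID
    }
}
Invoke-MgGraphRequest -Method PATCH `
    -Uri "https://graph.microsoft.com/v1.0/policies/authenticationMethodsPolicy/authenticationMethodConfigurations/fido2" `
    -Body ($body | ConvertTo-Json -Depth 5)
```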
+++
+### General Availability - Authenticator on Android is FIPS 140 compliant
+
+**Type:** New feature
+**Service category:** Microsoft Authenticator App
+**Product capability:** User Authentication
+
+Authenticator on Android is FIPS 140 compliant for all Azure AD authentications that use push multifactor authentication (MFA), Passwordless Phone Sign-In (PSI), and time-based one-time passcodes (TOTP). No changes in configuration are required in the Authenticator app or Azure portal to enable this capability. For more information, see: [Authentication methods in Microsoft Entra ID - Microsoft Authenticator app](../authentication/concept-authentication-authenticator-app.md).
+++
+### General Availability - Recovery of deleted application and service principals is now available
+
+**Type:** New feature
+**Service category:** Enterprise Apps
+**Product capability:** Identity Lifecycle Management
+
+With this release, you can now recover applications along with their original service principals, eliminating the need for extensive reconfiguration and code changes ([Learn more](../manage-apps/delete-recover-faq.yml)). This significantly improves the application recovery story and addresses a long-standing customer need. This change benefits you in the following ways (a restore sketch follows the list):
+
+- **Faster Recovery**: You can now recover your systems in a fraction of the time it used to take, reducing downtime and minimizing disruptions.
+- **Cost Savings**: With quicker recovery, you can save on operational costs associated with extended outages and labor-intensive recovery efforts.
+- **Preserved Data**: Previously lost data, such as SAML configurations, is now retained, ensuring a smoother transition back to normal operations.
+- **Improved User Experience**: Faster recovery times translate to improved user experience and customer satisfaction, as applications are back up and running swiftly.
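As referenced above, here's a hedged sketch of listing and restoring a deleted application through Microsoft Graph; the object ID is a placeholder:

```powershell
# Sketch: list recently deleted applications, then restore one by object ID.
Connect-MgGraph -Scopes "Application.ReadWrite.All"
$deleted = Invoke-MgGraphRequest -Method GET `
    -Uri "https://graph.microsoft.com/v1.0/directory/deletedItems/microsoft.graph.application"
$deleted.value | ForEach-Object { "{0}  {1}" -f $_.id, $_.displayName }
Invoke-MgGraphRequest -Method POST `
    -Uri "https://graph.microsoft.com/v1.0/directory/deletedItems/<object-id>/restore"   # placeholder ID
```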
+++
+### Public Preview - New provisioning connectors in the Azure AD Application Gallery - September 2023
+
+**Type:** New feature
+**Service category:** App Provisioning
+**Product capability:** 3rd Party Integration
+
+
+We've added the following new applications in our App gallery with Provisioning support. You can now automate creating, updating, and deleting of user accounts for these newly integrated apps:
+
+- [Datadog](../saas-apps/datadog-provisioning-tutorial.md)
+- [Litmos](../saas-apps/litmos-provisioning-tutorial.md)
+- [Postman](../saas-apps/postman-provisioning-tutorial.md)
+- [Recnice](../saas-apps/recnice-provisioning-tutorial.md)
+
+For more information about how to better secure your organization by using automated user account provisioning, see: [Automate user provisioning to SaaS applications with Azure AD](../app-provisioning/user-provisioning.md).
+++
+### General Availability - Web Sign-In for Windows
+
+**Type:** Changed feature
+**Service category:** Authentications (Logins)
+**Product capability:** User Authentication
+
+We're thrilled to announce that as part of the Windows 11 September moment, we're releasing a new Web Sign-In experience that will expand the number of supported scenarios and greatly improve security, reliability, performance, and overall end-to-end experience for our users.
+
+Web Sign-In (WSI) is a credential provider on the Windows lock/sign-in screen for Microsoft Entra joined (AADJ) devices. It provides a web experience used for authentication and returns an auth token back to the operating system to allow the user to unlock or sign in to the machine.
+
+Web Sign-In was initially intended to be used for a wide range of auth credential scenarios; however, it was previously released only for limited scenarios such as [Simplified EDU Web Sign-In](/education/windows/federated-sign-in?tabs=intune) and recovery flows via [Temporary Access Password (TAP)](../authentication/howto-authentication-temporary-access-pass.md).
+
+The underlying provider for Web Sign-In has been rewritten from the ground up with security and improved performance in mind. This release moves the Web Sign-In infrastructure from the Cloud Host Experience (CHX) WebApp to a newly written Login Web Host (LWH) for the September moment. This release provides better security and reliability to support the previous EDU and TAP experiences, plus new workflows that enable using various authentication methods to unlock or sign in to the desktop.
+++
+### General Availability - Support for Microsoft admin portals in Conditional Access
+
+**Type:** New feature
+**Service category:** Conditional Access
+**Product capability:** Identity Security & Protection
+
+When a Conditional Access policy targets the Microsoft Admin Portals cloud app, the policy is enforced for tokens issued to application IDs of the following Microsoft administrative portals:
+
+- Azure portal
+- Exchange admin center
+- Microsoft 365 admin center
+- Microsoft 365 Defender portal
+- Microsoft Entra admin center
+- Microsoft Intune admin center
+- Microsoft Purview compliance portal
+
+For more information, see: [Microsoft Admin Portals (preview)](../conditional-access/concept-conditional-access-cloud-apps.md#microsoft-admin-portals-preview).
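As a concrete illustration, this is a sketch (not from the release note) of a report-only Conditional Access policy that targets the Microsoft Admin Portals cloud app through Microsoft Graph:

```powershell
# Sketch: report-only Conditional Access policy requiring MFA for Microsoft admin portals.
Connect-MgGraph -Scopes "Policy.ReadWrite.ConditionalAccess"
$body = @{
    displayName = "Require MFA for Microsoft admin portals (sketch)"
    state       = "enabledForReportingButNotEnforced"
    conditions  = @{
        applications = @{ includeApplications = @("MicrosoftAdminPortals") }
        users        = @{ includeUsers = @("All") }
    }
    grantControls = @{ operator = "OR"; builtInControls = @("mfa") }
}
Invoke-MgGraphRequest -Method POST `
    -Uri "https://graph.microsoft.com/v1.0/identity/conditionalAccess/policies" `
    -Body ($body | ConvertTo-Json -Depth 5)
```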
+++

## August 2023

### General Availability - Tenant Restrictions V2
For more information, see: [Require an app protection policy on Windows devices
In July 2023 we've added the following 10 new applications in our App gallery with Federation support:
-[Gainsight SAML](../saas-apps/gainsight-saml-tutorial.md), [Dataddo](https://www.dataddo.com/), [Puzzel](https://www.puzzel.com/), [Worthix App](../saas-apps/worthix-app-tutorial.md), [iOps360 IdConnect](https://iops360.com/iops360-id-connect-azuread-single-sign-on/), [Airbase](../saas-apps/airbase-tutorial.md), [Couchbase Capella - SSO](../saas-apps/couchbase-capella-sso-tutorial.md), [SSO for Jama Connect®](../saas-apps/sso-for-jama-connect-tutorial.md), [mediment (メディメント)](https://mediment.jp/), [Netskope Cloud Exchange Administration Console](../saas-apps/netskope-cloud-exchange-administration-console-tutorial.md), [Uber](../saas-apps/uber-tutorial.md), [Plenda](https://app.plenda.nl/), [Deem Mobile](../saas-apps/deem-mobile-tutorial.md), [40SEAS](https://www.40seas.com/), [Vivantio](https://www.vivantio.com/), [AppTweak](https://www.apptweak.com/), [ioTORQ EMIS](https://www.iotorq.com/), [Vbrick Rev Cloud](../saas-apps/vbrick-rev-cloud-tutorial.md), [OptiTurn](../saas-apps/optiturn-tutorial.md), [Application Experience with Mist](https://www.mist.com/), [クラウド勤怠管理システムKING OF TIME](../saas-apps/cloud-attendance-management-system-king-of-time-tutorial.md), [Connect1](../saas-apps/connect1-tutorial.md), [DB Education Portal for Schools](../saas-apps/db-education-portal-for-schools-tutorial.md), [SURFconext](../saas-apps/surfconext-tutorial.md), [Chengliye Smart SMS Platform](../saas-apps/chengliye-smart-sms-platform-tutorial.md), [CivicEye SSO](../saas-apps/civic-eye-sso-tutorial.md), [Colloquial](../saas-apps/colloquial-tutorial.md), [BigPanda](../saas-apps/bigpanda-tutorial.md), [Foreman](https://foreman.mn/)
+[Gainsight SAML](../saas-apps/gainsight-saml-tutorial.md), [Dataddo](https://www.dataddo.com/), [Puzzel](https://www.puzzel.com/), [Worthix App](../saas-apps/worthix-app-tutorial.md), [iOps360 IdConnect](https://iops360.com/iops360-id-connect-azuread-single-sign-on/), [Airbase](../saas-apps/airbase-tutorial.md), [Couchbase Capella - SSO](../saas-apps/couchbase-capella-sso-tutorial.md), [SSO for Jama Connect®](../saas-apps/sso-for-jama-connect-tutorial.md), [mediment (メディメント)](https://mediment.jp/), [Netskope Cloud Exchange Administration Console](../saas-apps/netskope-cloud-exchange-administration-console-tutorial.md), [Uber](../saas-apps/uber-tutorial.md), [Plenda](https://app.plenda.nl/), [Deem Mobile](../saas-apps/deem-mobile-tutorial.md), [40SEAS](https://www.40seas.com/), [Vivantio](https://www.vivantio.com/), [AppTweak](https://www.apptweak.com/), [Vbrick Rev Cloud](../saas-apps/vbrick-rev-cloud-tutorial.md), [OptiTurn](../saas-apps/optiturn-tutorial.md), [Application Experience with Mist](https://www.mist.com/), [クラウド勤怠管理システムKING OF TIME](../saas-apps/cloud-attendance-management-system-king-of-time-tutorial.md), [Connect1](../saas-apps/connect1-tutorial.md), [DB Education Portal for Schools](../saas-apps/db-education-portal-for-schools-tutorial.md), [SURFconext](../saas-apps/surfconext-tutorial.md), [Chengliye Smart SMS Platform](../saas-apps/chengliye-smart-sms-platform-tutorial.md), [CivicEye SSO](../saas-apps/civic-eye-sso-tutorial.md), [Colloquial](../saas-apps/colloquial-tutorial.md), [BigPanda](../saas-apps/bigpanda-tutorial.md), [Foreman](https://foreman.mn/)
You can also find the documentation of all the applications from here https://aka.ms/AppsTutorial.
active-directory Entitlement Management Access Package Assignments https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-access-package-assignments.md
To use entitlement management and assign users to access packages, you must have
## View assignments programmatically

### View assignments with Microsoft Graph
-You can also retrieve assignments in an access package using Microsoft Graph. A user in an appropriate role with an application that has the delegated `EntitlementManagement.Read.All` or `EntitlementManagement.ReadWrite.All` permission can call the API to [list accessPackageAssignments](/graph/api/entitlementmanagement-list-accesspackageassignments?view=graph-rest-beta&preserve-view=true). While an identity governance administrator can retrieve access packages from multiple catalogs, if user or application service principal is assigned only to catalog-specific delegated administrative roles, the request must supply a filter to indicate a specific access package, such as: `$filter=accessPackage/id eq 'a914b616-e04e-476b-aa37-91038f0b165b'`. An application that has the application permission `EntitlementManagement.Read.All` or `EntitlementManagement.ReadWrite.All` permission can also use this API to retrieve assignments across all catalogs.
+You can also retrieve assignments in an access package using Microsoft Graph. A user in an appropriate role with an application that has the delegated `EntitlementManagement.Read.All` or `EntitlementManagement.ReadWrite.All` permission can call the API to [list accessPackageAssignments](/graph/api/entitlementmanagement-list-accesspackageassignments?view=graph-rest-beta&preserve-view=true). An application that has the `EntitlementManagement.Read.All` or `EntitlementManagement.ReadWrite.All` application permission can also use this API to retrieve assignments across all catalogs.
+
+Microsoft Graph returns the results in pages. Each response includes a reference to the next page of results in the `@odata.nextLink` property. To read all results, continue to call Microsoft Graph with the URL in the `@odata.nextLink` property from each response until that property is no longer returned, as described in [paging Microsoft Graph data in your app](/graph/paging).
+
+While an identity governance administrator can retrieve access packages from multiple catalogs, if a user or application service principal is assigned only to catalog-specific delegated administrative roles, the request must supply a filter to indicate a specific access package, such as: `$filter=accessPackage/id eq 'a914b616-e04e-476b-aa37-91038f0b165b'`.
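+
+For illustration, a minimal paging sketch (not part of the API reference), assuming the Microsoft Graph PowerShell SDK's `Invoke-MgGraphRequest` and the sample access package ID above:
+
+```powershell
+Connect-MgGraph -Scopes "EntitlementManagement.Read.All"
+# Request the first page, then follow @odata.nextLink until it's absent.
+$uri = "https://graph.microsoft.com/beta/identityGovernance/entitlementManagement/accessPackageAssignments?`$filter=accessPackage/id eq 'a914b616-e04e-476b-aa37-91038f0b165b'"
+$assignments = @()
+do {
+    $page = Invoke-MgGraphRequest -Method GET -Uri $uri
+    $assignments += $page.value
+    $uri = $page.'@odata.nextLink'   # $null after the last page
+} while ($uri)
+```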
### View assignments with PowerShell
-You can perform this query in PowerShell with the `Get-MgEntitlementManagementAssignment` cmdlet from the [Microsoft Graph PowerShell cmdlets for Identity Governance](https://www.powershellgallery.com/packages/Microsoft.Graph.Identity.Governance/) module version 2.1.x or later module version. This script illustrates using the Microsoft Graph PowerShell cmdlets module version 2.4.0. This cmdlet takes as a parameter the access package ID, which is included in the response from the `Get-MgEntitlementManagementAccessPackage` cmdlet.
+You can also retrieve assignments to an access package in PowerShell with the `Get-MgEntitlementManagementAssignment` cmdlet from the [Microsoft Graph PowerShell cmdlets for Identity Governance](https://www.powershellgallery.com/packages/Microsoft.Graph.Identity.Governance/) module, version 2.1.x or later. This script illustrates using module version 2.4.0 to retrieve all assignments to a particular access package. The cmdlet takes the access package ID as a parameter, which is included in the response from the `Get-MgEntitlementManagementAccessPackage` cmdlet. Be sure to include the `-All` flag when calling `Get-MgEntitlementManagementAssignment` so that all pages of assignments are returned.
```powershell
Connect-MgGraph -Scopes "EntitlementManagement.Read.All"
$accesspackage = Get-MgEntitlementManagementAccessPackage -Filter "displayName eq 'Marketing Campaign'"
+if ($null -eq $accesspackage) { throw "no access package"}
$assignments = @(Get-MgEntitlementManagementAssignment -AccessPackageId $accesspackage.Id -ExpandProperty target -All -ErrorAction Stop)
$assignments | ft Id,state,{$_.Target.id},{$_.Target.displayName}
```
+The preceding query returns expired and delivering assignments as well as delivered ones. To exclude expired or delivering assignments, use a filter that includes both the access package ID and the state of the assignments. This script illustrates using a filter to retrieve only the assignments in state `Delivered` for a particular access package. The script then generates a CSV file, `assignments.csv`, with one row per assignment.
+
+```powershell
+Connect-MgGraph -Scopes "EntitlementManagement.Read.All"
+$accesspackage = Get-MgEntitlementManagementAccessPackage -Filter "displayName eq 'Marketing Campaign'"
+if ($null -eq $accesspackage) { throw "no access package"}
+$accesspackageId = $accesspackage.Id
+$filter = "accessPackage/id eq '" + $accesspackageId + "' and state eq 'Delivered'"
+$assignments = @(Get-MgEntitlementManagementAssignment -Filter $filter -ExpandProperty target -All -ErrorAction Stop)
+$sp = $assignments | select-object -Property Id,{$_.Target.id},{$_.Target.ObjectId},{$_.Target.DisplayName},{$_.Target.PrincipalName}
+$sp | Export-Csv -Encoding UTF8 -NoTypeInformation -Path ".\assignments.csv"
+```
+
+
## Directly assign a user

In some cases, you might want to directly assign specific users to an access package so that users don't have to go through the process of requesting the access package. To directly assign users, the access package must have a policy that allows administrator direct assignments.
You can assign a user to an access package in PowerShell with the `New-MgEntitle
```powershell
Connect-MgGraph -Scopes "EntitlementManagement.ReadWrite.All"
-$accesspackage = Get-MgEntitlementManagementAccessPackage -Filter "displayname eq 'Marketing Campaign'" -ExpandProperty assignmentpolicies
+$accesspackage = Get-MgEntitlementManagementAccessPackage -Filter "displayname eq 'Marketing Campaign'" -ExpandProperty "assignmentpolicies"
+if ($null -eq $accesspackage) { throw "no access package"}
$policy = $accesspackage.AssignmentPolicies[0]
$userid = "cdbdf152-82ce-479c-b5b8-df90f561d5c7"
$params = @{
Connect-MgGraph -Scopes "EntitlementManagement.ReadWrite.All,Directory.Read.All"
$members = @(Get-MgGroupMember -GroupId "a34abd69-6bf8-4abd-ab6b-78218b77dc15" -All)
$accesspackage = Get-MgEntitlementManagementAccessPackage -Filter "displayname eq 'Marketing Campaign'" -ExpandProperty "assignmentPolicies"
+if ($null -eq $accesspackage) { throw "no access package"}
$policy = $accesspackage.AssignmentPolicies[0]
$req = New-MgBetaEntitlementManagementAccessPackageAssignment -AccessPackageId $accesspackage.Id -AssignmentPolicyId $policy.Id -RequiredGroupMember $members
```
If you wish to add an assignment for a user who is not yet in your directory, yo
```powershell
Connect-MgGraph -Scopes "EntitlementManagement.ReadWrite.All"
$accesspackage = Get-MgEntitlementManagementAccessPackage -Filter "displayname eq 'Marketing Campaign'" -ExpandProperty "assignmentPolicies"
+if ($null -eq $accesspackage) { throw "no access package"}
$policy = $accesspackage.AssignmentPolicies[0]
$req = New-MgBetaEntitlementManagementAccessPackageAssignmentRequest -AccessPackageId $accesspackage.Id -AssignmentPolicyId $policy.Id -TargetEmail "sample@example.com"
```
active-directory Entitlement Management Access Package Requests https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-access-package-requests.md
If you have a set of users whose requests are in the "Partially Delivered" or "F
### View requests with Microsoft Graph

You can also retrieve requests for an access package using Microsoft Graph. A user in an appropriate role with an application that has the delegated `EntitlementManagement.Read.All` or `EntitlementManagement.ReadWrite.All` permission can call the API to [list accessPackageAssignmentRequests](/graph/api/entitlementmanagement-list-accesspackageassignmentrequests?view=graph-rest-beta&preserve-view=true). While an identity governance administrator can retrieve access package requests from multiple catalogs, if a user or application service principal is assigned only to catalog-specific delegated administrative roles, the request must supply a filter to indicate a specific access package, such as: `$expand=accessPackage&$filter=accessPackage/id eq '9bbe5f7d-f1e7-4eb1-a586-38cdf6f8b1ea'`. An application that has the `EntitlementManagement.Read.All` or `EntitlementManagement.ReadWrite.All` application permission can also use this API to retrieve requests across all catalogs.
+Microsoft Graph returns the results in pages. Each response includes a reference to the next page of results in the `@odata.nextLink` property. To read all results, continue to call Microsoft Graph with the URL in the `@odata.nextLink` property from each response until that property is no longer returned, as described in [paging Microsoft Graph data in your app](/graph/paging).
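+
+For illustration, a similar sketch (not part of the API reference) that reads all pages of requests, assuming `Invoke-MgGraphRequest` and the sample filter above:
+
+```powershell
+Connect-MgGraph -Scopes "EntitlementManagement.Read.All"
+$uri = "https://graph.microsoft.com/beta/identityGovernance/entitlementManagement/accessPackageAssignmentRequests?`$expand=accessPackage&`$filter=accessPackage/id eq '9bbe5f7d-f1e7-4eb1-a586-38cdf6f8b1ea'"
+$requests = @()
+do {
+    $page = Invoke-MgGraphRequest -Method GET -Uri $uri
+    $requests += $page.value
+    $uri = $page.'@odata.nextLink'   # $null after the last page
+} while ($uri)
+```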
+
## Remove request (Preview)

You can also remove a completed request that is no longer needed. To remove a request:
active-directory Entitlement Management Catalog Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-catalog-create.md
You can also add a resource to a catalog in PowerShell with the `New-MgEntitleme
Connect-MgGraph -Scopes "EntitlementManagement.ReadWrite.All,Group.ReadWrite.All"
$g = Get-MgGroup -Filter "displayName eq 'Marketing'"
+if ($null -eq $g) {throw "no group" }
$catalog = Get-MgEntitlementManagementCatalog -Filter "displayName eq 'Marketing'"
+if ($null -eq $catalog) { throw "no catalog" }
$params = @{
    requestType = "adminAdd"
    resource = @{
active-directory How To Lifecycle Workflow Sync Attributes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/how-to-lifecycle-workflow-sync-attributes.md
The following table shows the scheduling (trigger) relevant attributes and the m
> [!NOTE]
> Manually setting the employeeLeaveDateTime for cloud-only users requires special permissions. For more information, see [Configure the employeeLeaveDateTime property for a user](/graph/tutorial-lifecycle-workflows-set-employeeleavedatetime).
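
For illustration, a hedged Microsoft Graph PowerShell sketch of that update; the user ID and date are placeholders, and the `User-LifeCycleInfo.ReadWrite.All` scope comes from the linked tutorial:

```powershell
Connect-MgGraph -Scopes "User-LifeCycleInfo.ReadWrite.All"
$userId = "00000000-0000-0000-0000-000000000000"   # placeholder object ID
# employeeLeaveDateTime is settable only through the beta endpoint.
Invoke-MgGraphRequest -Method PATCH -Uri "https://graph.microsoft.com/beta/users/$userId" `
    -Body @{ employeeLeaveDateTime = "2023-12-31T23:59:59Z" }
```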
-This document explains how to set up synchronization from on-premises Microsoft Entra Connect cloud sync and Microsoft Entra Connect for the required attributes.
+This document explains how to set up synchronization of the required attributes from on-premises Active Directory by using either Microsoft Entra Connect cloud sync or Microsoft Entra Connect.
>[!NOTE]
-> There's no corresponding EmployeeHireDate or EmployeeLeaveDateTime attribute in Active Directory. If you're importing from on-premises AD, you'll need to identify an attribute in AD that can be used. This attribute must be a string.
+> There's no corresponding EmployeeHireDate or EmployeeLeaveDateTime attribute in Active Directory. If you're synchronizing from on-premises AD, you'll need to identify an attribute in AD that can be used. This attribute must be a string.
## Understanding EmployeeHireDate and EmployeeLeaveDateTime formatting
To update this mapping, do the following:
1. Add your source attribute(s) created as Type String, and select the checkbox for required.
:::image type="content" source="media/how-to-lifecycle-workflow-sync-attributes/edit-attribute-list.png" alt-text="Screenshot of source api list.":::
> [!NOTE]
- > The number, and name, of source attributes added will depend on which attributes you are syncing.
+ > The number, and name, of source attributes added will depend on which attributes you are syncing from Active Directory.
1. Select Save.
1. From there, map the HRM attributes to the added Active Directory attributes. To do this, add a new mapping using an expression.
active-directory Admin Consent Workflow Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/admin-consent-workflow-overview.md
As an admin, the following options exist for you to determine how users consent
- Disable user consent. For example, a high school may want to turn off user consent so that the school IT administration has full control over all the applications that are used in their tenant.
- Allow users to consent to the required permissions. It's NOT recommended to keep user consent open if you have sensitive data in your tenant.
- If you still want to retain admin-only consent for certain permissions but want to assist your end-users in onboarding their application, you can use the admin consent workflow to evaluate and respond to admin consent requests. This way, you can have a queue of all the requests for admin consent for your tenant and can track and respond to them directly through the Microsoft Entra admin center.
-To learn how to configure the admin consent workflow, see [configure-admin-consent-workflow.md](configure-admin-consent-workflow.md).
+To learn how to configure the admin consent workflow, see [Configure the admin consent workflow](configure-admin-consent-workflow.md).
## How the admin consent workflow works
active-directory Tutorial Vm Managed Identities Cosmos https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/tutorial-vm-managed-identities-cosmos.md
Create a Linux virtual machine with a user-assigned managed identity specified.
```powershell
New-AzVm `
    -Name "<Linux VM name>" `
- -image CentOS
+ -image CentOS85Gen2
    -ResourceGroupName "<Your resource group>" `
    -Location "East US" `
    -VirtualNetworkName "myVnet" `
active-directory Concept Pim For Groups https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/concept-pim-for-groups.md
Microsoft Entra role-assignable group feature is not part of Microsoft Entra Pri
## Relationship between role-assignable groups and PIM for Groups
-Groups can be role-assignable or non-role-assignable. The group can be enabled in PIM for Groups or not enabled in PIM for Groups. These are independent properties of the group. Any Microsoft Entra security group and any Microsoft 365 group (except dynamic groups and groups synchronized from on-premises environment) can be enabled in PIM for Groups. The group doesn't have to be role-assignable group to be enabled in PIM for Groups.
+Groups in Microsoft Entra ID can be classified as either role-assignable or non-role-assignable. Additionally, any group can be enabled or not enabled for use with Microsoft Entra Privileged Identity Management (PIM) for Groups. These are independent properties of the group. Any Microsoft Entra security group and any Microsoft 365 group (except dynamic groups and groups synchronized from an on-premises environment) can be enabled in PIM for Groups. The group doesn't have to be a role-assignable group to be enabled in PIM for Groups.
If you want to assign a Microsoft Entra role to a group, it has to be role-assignable. Even if you don't intend to assign a Microsoft Entra role to the group but the group provides access to sensitive resources, it is still recommended to consider creating the group as role-assignable. This is because of the extra protections role-assignable groups have. See ["What are Microsoft Entra role-assignable groups?"](#what-are-entra-id-role-assignable-groups) in the section above.
active-directory Concept Diagnostic Settings Logs Options https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/concept-diagnostic-settings-logs-options.md
- Title: Logs available for streaming to endpoints from Microsoft Entra ID
+ Title: Logs available for streaming from Microsoft Entra ID
description: Learn about the Microsoft Entra logs available for streaming to an endpoint for storage, analysis, or monitoring.
Previously updated : 08/09/2023 Last updated : 09/28/2023
+# Customer Intent: As an IT admin, I want to know what logs are available for streaming to an endpoint from Microsoft Entra ID so that I can choose the best option for my organization.
-# Learn about the identity logs you can stream to an endpoint
+# What are the identity logs you can stream to an endpoint?
-Using Diagnostic settings in Microsoft Entra ID, you can route activity logs to several endpoints for long term retention and data insights. You select the logs you want to route, then select the endpoint.
+Using Microsoft Entra diagnostic settings, you can route activity logs to several endpoints for long term retention and data insights. You select the logs you want to route, then select the endpoint.
-This article describes the logs that you can route to an endpoint from Microsoft Entra Diagnostic settings.
+This article describes the logs that you can route to an endpoint with Microsoft Entra diagnostic settings.
-## Prerequisites
+## Log streaming requirements and options
-Setting up an endpoint, such as an event hub or storage account, may require different roles and licenses. To create or edit a new Diagnostic setting, you need a user who's a **Security Administrator** or **Global Administrator** for the Microsoft Entra tenant.
+Setting up an endpoint, such as an event hub or storage account, may require different roles and licenses. To create or edit a new diagnostic setting, you need a user who's a **Security Administrator** or **Global Administrator** for the Microsoft Entra tenant.
-To help decide which log routing option is best for you, see [How to access activity logs](howto-access-activity-logs.md). The overall process and requirements for each endpoint type are covered in the following articles.
+To help decide which log routing option is best for you, see [How to access activity logs](howto-access-activity-logs.md). The overall process and requirements for each endpoint type are covered in the following articles:
- [Send logs to a Log Analytics workspace to integrate with Azure Monitor logs](howto-integrate-activity-logs-with-azure-monitor-logs.md)
- [Archive logs to a storage account](howto-archive-logs-to-storage-account.md)
To help decide which log routing option is best for you, see [How to access acti
## Activity log options
-The following logs can be sent to an endpoint. Some logs may be in public preview but still visible in the portal.
+The following logs can be routed to an endpoint for storage, analysis, or monitoring.
### Audit logs
The `EnrichedOffice365AuditLogs` logs are associated with the enriched logs you
### Microsoft Graph activity logs
-The `MicrosoftGraphActivityLogs` logs are associated with a feature that is still in private preview. The logs are visible in Microsoft Entra ID, but selecting these options won't add new logs to your workspace unless your organization was included in the private preview.
+The `MicrosoftGraphActivityLogs` give administrators full visibility into all HTTP requests accessing your tenant's resources through the Microsoft Graph API. You can use these logs to identify activities that a compromised user account conducted in your tenant, or to investigate problematic or unexpected behaviors for client applications, such as extreme call volumes. Route these logs to the same Log Analytics workspace as `SignInLogs` to cross-reference the details of token requests with sign-in events.
+
+The feature is currently in public preview. For more information, see [Access Microsoft Graph activity logs (preview)](/graph/microsoft-graph-activity-logs-overview).
### Network access traffic logs
active-directory Concept Sign Ins https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/concept-sign-ins.md
Previously updated : 08/31/2023 Last updated : 09/26/2023

# What are Microsoft Entra sign-in logs?
In addition to the default fields, the interactive sign-in log also shows:
**Non-interactive sign-ins on the interactive sign-in logs**
-Previously, some non-interactive sign-ins from Microsoft Exchange clients were included in the interactive user sign-in log for better visibility. This increased visibility was necessary before the non-interactive user sign-in logs were introduced in November 2020. However, it's important to note that some non-interactive sign-ins, such as those using FIDO2 keys, may still be marked as interactive due to the way the system was set up before the separate non-interactive logs were introduced. These sign-ins may display interactive details like client credential type and browser information, even though they are technically non-interactive sign-ins.
+Previously, some non-interactive sign-ins from Microsoft Exchange clients were included in the interactive user sign-in log for better visibility. This increased visibility was necessary before the non-interactive user sign-in logs were introduced in November 2020. However, it's important to note that some non-interactive sign-ins, such as those using FIDO2 keys, may still be marked as interactive due to the way the system was set up before the separate non-interactive logs were introduced. These sign-ins may display interactive details like client credential type and browser information, even though they're technically non-interactive sign-ins.
**Passthrough sign-ins**
-Microsoft Entra ID issues tokens for authentication and authorization. In some situations, a user who is signed in to the Contoso tenant may try to access resources in the Fabrikam tenant, where they don't have access. A no-authorization token, called a passthrough token, is issued to the Fabrikam tenant. The passthrough token doesn't allow the user to access any resources.
+Microsoft Entra ID issues tokens for authentication and authorization. In some situations, a user who is signed in to the Contoso tenant may try to access resources in the Fabrikam tenant, where they don't have access. A no-authorization token, called a passthrough token, is issued to the Fabrikam tenant. The passthrough token doesn't allow the user to access any resources.
When reviewing the logs for this situation, the sign-in logs for the home tenant (in this scenario, Contoso) don't show a sign-in attempt because the token wasn't evaluated against the home tenant's policies; the sign-in token was used only to display the appropriate failure message.
+**First-party, app-only service principal sign-ins**
+
+The service principal sign-in logs don't include first-party, app-only sign-in activity. This type of activity happens when first-party apps get tokens for an internal Microsoft job where there's no direction or context from a user. We exclude these logs so you're not paying for logs related to internal Microsoft tokens within your tenant.
+
+You may identify Microsoft Graph events that don't correlate to a service principal sign-in if you're routing `MicrosoftGraphActivityLogs` with `SignInLogs` to the same Log Analytics workspace. This integration allows you to cross-reference the token issued by the Microsoft Graph activity with the sign-in. The `UniqueTokenIdentifier` in the Microsoft Graph activity logs would be missing from the service principal sign-in logs.
+
### Non-interactive user sign-ins
-Non-interactive sign-ins are done *on behalf of a* user. These sign-ins were performed by a client app or OS components on behalf of a user and don't require the user to provide an authentication factor. Instead, Microsoft Entra ID recognizes when the user's token needs to be refreshed and does so behind the scenes, without interrupting the user's session. In general, the user perceives these sign-ins as happening in the background.
+Non-interactive sign-ins are done *on behalf of* a user. These delegated sign-ins were performed by a client app or OS components on behalf of a user and don't require the user to provide an authentication factor. Instead, Microsoft Entra ID recognizes when the user's token needs to be refreshed and does so behind the scenes, without interrupting the user's session. In general, the user perceives these sign-ins as happening in the background.
![Screenshot of the non-interactive user sign-ins log.](media/concept-sign-ins/sign-in-logs-user-noninteractive.png)
To make it easier to digest the data, non-interactive sign-in events are grouped
:::image type="content" source="media/concept-sign-ins/aggregate-sign-in.png" alt-text="Screenshot of an aggregate sign-in expanded to show all rows." lightbox="media/concept-sign-ins/aggregate-sign-in-expanded.png":::
-When Microsoft Entra ID logs multiple sign-ins that are identical other than time and date, those sign-ins are from the same entity and are aggregated into a single row. A row with multiple identical sign-ins (except for date and time issued) has a value greater than 1 in the *# sign-ins* column. These aggregated sign-ins may also appear to have the same time stamps. The **Time aggregate** filter can set to 1 hour, 6 hours, or 24 hours. You can expand the row to see all the different sign-ins and their different time stamps.
+When Microsoft Entra ID logs multiple sign-ins that are identical other than time and date, those sign-ins are from the same entity and are aggregated into a single row. A row with multiple identical sign-ins (except for date and time issued) has a value greater than one in the *# sign-ins* column. These aggregated sign-ins may also appear to have the same time stamps. The **Time aggregate** filter can be set to 1 hour, 6 hours, or 24 hours. You can expand the row to see all the different sign-ins and their different time stamps.
Sign-ins are aggregated in the non-interactive user logs when the following data matches:
active-directory Howto Manage Inactive User Accounts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/howto-manage-inactive-user-accounts.md
Previously updated : 08/24/2023 Last updated : 09/28/2023

# How To: Manage inactive user accounts
The following details relate to the `lastSignInDateTime` property.
- The last attempted sign-in of a user took place before April 2020.
- The affected user account was never used for a sign-in attempt.
-- The last sign-in date is associated with the user object. The value is retained until the next sign-in of the user.
+- The last sign-in date is associated with the user object. The value is retained until the next sign-in of the user. It may take up to 24 hours to update.
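+
+For illustration, a minimal sketch (not from this article) that lists each user's `lastSignInDateTime` with the Microsoft Graph PowerShell SDK; reading `signInActivity` requires the `AuditLog.Read.All` permission:
+
+```powershell
+Connect-MgGraph -Scopes "User.Read.All","AuditLog.Read.All"
+# Sort by last sign-in to surface the most inactive accounts first.
+Get-MgUser -All -Property "displayName,signInActivity" |
+    Select-Object DisplayName, @{ Name = "LastSignIn"; Expression = { $_.SignInActivity.LastSignInDateTime } } |
+    Sort-Object LastSignIn
+```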
## How to investigate a single user
If you need to view the latest sign-in activity for a user, you can view the use
![Screenshot of the user overview page with the sign-in activity tile highlighted.](media/howto-manage-inactive-user-accounts/last-sign-activity-tile.png)
-The last sign-in date and time shown on this tile may take up to 6 hours to update, which means the date and time may not be current. If you need to see the activity in near real time, select the **See all sign-ins** link on the **Sign-ins** tile to view all sign-in activity for that user.
+The last sign-in date and time shown on this tile may take up to 24 hours to update, which means the date and time may not be current. If you need to see the activity in near real time, select the **See all sign-ins** link on the **Sign-ins** tile to view all sign-in activity for that user.
## Next steps
active-directory Permissions Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/permissions-reference.md
This article lists the Microsoft Entra built-in roles you can assign to allow ma
> | [Teams Communications Support Specialist](#teams-communications-support-specialist) | Can troubleshoot communications issues within Teams using basic tools. | fcf91098-03e3-41a9-b5ba-6f0ec8188a12 | > | [Teams Devices Administrator](#teams-devices-administrator) | Can perform management related tasks on Teams certified devices. | 3d762c5a-1b6c-493f-843e-55a3b42923d4 | > | [Tenant Creator](#tenant-creator) | Create new Microsoft Entra or Azure AD B2C tenants. | 112ca1a2-15ad-4102-995e-45b0bc479a6a |
-> | [Usage Summary Reports Reader](#usage-summary-reports-reader) | Can see only tenant level aggregates in Microsoft 365 Usage Analytics and Productivity Score. | 75934031-6c7e-415a-99d7-48dbd49e875e |
+> | [Usage Summary Reports Reader](#usage-summary-reports-reader) | Read Usage reports and Adoption Score, but can't access user details. | 75934031-6c7e-415a-99d7-48dbd49e875e |
> | [User Administrator](#user-administrator) | Can manage all aspects of users and groups, including resetting passwords for limited admins.<br/>[![Privileged label icon.](./medi) | fe930be7-5e62-47db-91af-98c3a49a38b1 | > | [Virtual Visits Administrator](#virtual-visits-administrator) | Manage and share Virtual Visits information and metrics from admin centers or the Virtual Visits app. | e300d9e7-4a2b-4295-9eff-f1c78b36cc98 | > | [Viva Goals Administrator](#viva-goals-administrator) | Manage and configure all aspects of Microsoft Viva Goals. | 92b086b3-e367-4ef2-b869-1de128fb986e |
Assign the Tenant Creator role to users who need to do the following tasks:
## Usage Summary Reports Reader
-Users with this role can access tenant level aggregated data and associated insights in Microsoft 365 admin center for Usage and Productivity Score but cannot access any user level details or insights. In Microsoft 365 admin center for the two reports, we differentiate between tenant level aggregated data and user level details. This role gives an extra layer of protection on individual user identifiable data, which was requested by both customers and legal teams.
+Assign the Usage Summary Reports Reader role to users who need to do the following tasks in the Microsoft 365 admin center:
+
+- View the Usage reports and Adoption Score
+- Read organizational insights, but not personally identifiable information (PII) of users
+
+This role only allows users to view organizational-level data with the following exceptions:
+
+- Member users can view user management data and settings.
+- Guest users assigned this role can't view user management data and settings.
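+
+For illustration, a hedged sketch of assigning this role with the Microsoft Graph PowerShell SDK, using the role ID from the table earlier in this article; the principal ID is a placeholder:
+
+```powershell
+Connect-MgGraph -Scopes "RoleManagement.ReadWrite.Directory"
+# 75934031-6c7e-415a-99d7-48dbd49e875e is the Usage Summary Reports Reader role.
+New-MgRoleManagementDirectoryRoleAssignment -PrincipalId "00000000-0000-0000-0000-000000000000" `
+    -RoleDefinitionId "75934031-6c7e-415a-99d7-48dbd49e875e" -DirectoryScopeId "/"
+```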
> [!div class="mx-tableFixed"]
> | Actions | Description |
advisor Advisor Reference Reliability Recommendations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/advisor/advisor-reference-reliability-recommendations.md
Azure Advisor helps you ensure and improve the continuity of your business-criti
1. On the **Advisor** dashboard, select the **Reliability** tab.
-## FarmBeats
+## FarmBeats / Azure Data Manager for Agriculture (ADMA)
### Upgrade to the latest FarmBeats API version
We have identified calls to a FarmBeats API version that is scheduled for deprec
Learn more about [Azure FarmBeats - FarmBeatsApiVersion (Upgrade to the latest FarmBeats API version)](https://aka.ms/FarmBeatsPaaSAzureAdvisorFAQ).
-## API Management
+### Upgrade to the latest ADMA Java SDK version
-### Hostname certificate rotation failed
+We have identified calls to an Azure Data Manager for Agriculture (ADMA) Java SDK version that is scheduled for deprecation. We recommend switching to the latest SDK version to ensure uninterrupted access to ADMA, the latest features, and performance improvements.
-API Management service failed to refresh hostname certificate from Key Vault. Ensure that certificate exists in Key Vault and API Management service identity is granted secret read access. Otherwise, API Management service will not be able to retrieve certificate updates from Key Vault, which may lead to the service using stale certificate and runtime API traffic being blocked as a result.
+Learn more about [Azure FarmBeats - FarmBeatsJavaSdkVersion (Upgrade to the latest ADMA Java SDK version)](https://aka.ms/FarmBeatsPaaSAzureAdvisorFAQ).
+
+### Upgrade to the latest ADMA DotNet SDK version
+
+We have identified calls to an ADMA DotNet SDK version that is scheduled for deprecation. We recommend switching to the latest SDK version to ensure uninterrupted access to ADMA, the latest features, and performance improvements.
+
+Learn more about [Azure FarmBeats - FarmBeatsDotNetSdkVersion (Upgrade to the latest ADMA DotNet SDK version)](https://aka.ms/FarmBeatsPaaSAzureAdvisorFAQ).
+
+### Upgrade to the latest ADMA JavaScript SDK version
+
+We have identified calls to an ADMA JavaScript SDK version that is scheduled for deprecation. We recommend switching to the latest SDK version to ensure uninterrupted access to ADMA, the latest features, and performance improvements.
+
+Learn more about [Azure FarmBeats - FarmBeatsJavaScriptSdkVersion (Upgrade to the latest ADMA JavaScript SDK version)](https://aka.ms/FarmBeatsPaaSAzureAdvisorFAQ).
+
+### Upgrade to the latest ADMA Python SDK version
+
+We have identified calls to an ADMA Python SDK version that is scheduled for deprecation. We recommend switching to the latest SDK version to ensure uninterrupted access to ADMA, the latest features, and performance improvements.
+
+Learn more about [Azure FarmBeats - FarmBeatsPythonSdkVersion (Upgrade to the latest ADMA Python SDK version)](https://aka.ms/FarmBeatsPaaSAzureAdvisorFAQ).
+
+## API Management
-Learn more about [Api Management - HostnameCertRotationFail (Hostname certificate rotation failed)](https://aka.ms/apimdocs/customdomain).
### SSL/TLS renegotiation blocked
-SSL/TLS renegotiation attempt blocked. Renegotiation happens when a client certificate is requested over an already established connection. When it is blocked, reading 'context.Request.Certificate' in policy expressions will return 'null'. To support client certificate authentication scenarios, enable 'Negotiate client certificate' on listed hostnames. For browser-based clients, enabling this option might result in a certificate prompt being presented to the client.
+SSL/TLS renegotiation attempt blocked. Renegotiation happens when a client certificate is requested over an already established connection. When it is blocked, reading 'context.Request.Certificate' in policy expressions returns 'null'. To support client certificate authentication scenarios, enable 'Negotiate client certificate' on listed hostnames. For browser-based clients, enabling this option might result in a certificate prompt being presented to the client.
+
+Learn more about [Api Management - TlsRenegotiationBlocked (SSL/TLS renegotiation blocked)](/azure/api-management/api-management-howto-mutual-certificates-for-clients).
-Learn more about [Api Management - TlsRenegotiationBlocked (SSL/TLS renegotiation blocked)](../api-management/api-management-howto-mutual-certificates-for-clients.md).
+### Hostname certificate rotation failed
+
+API Management service failed to refresh hostname certificate from Key Vault. Ensure that the certificate exists in Key Vault and the API Management service identity is granted secret read access. Otherwise, the API Management service cannot retrieve certificate updates from Key Vault, which may lead to the service using a stale certificate and runtime API traffic being blocked as a result.
+
+Learn more about [Api Management - HostnameCertRotationFail (Hostname certificate rotation failed)](https://aka.ms/apimdocs/customdomain).
## App
Learn more about [Virtual machine (classic) - EnableBackup (Enable Backups on yo
### Upgrade the standard disks attached to your premium-capable VM to premium disks
-We have identified that you are using standard disks with your premium-capable Virtual Machines and we recommend you consider upgrading the standard disks to premium disks. For any Single Instance Virtual Machine using premium storage for all Operating System Disks and Data Disks, we guarantee you will have Virtual Machine Connectivity of at least 99.9%. Consider these factors when making your upgrade decision. The first is that upgrading requires a VM reboot and this process takes 3-5 minutes to complete. The second is if the VMs in the list are mission-critical production VMs, evaluate the improved availability against the cost of premium disks.
+We have identified that you are using standard disks with your premium-capable Virtual Machines and we recommend you consider upgrading the standard disks to premium disks. For any Single Instance Virtual Machine using premium storage for all Operating System Disks and Data Disks, we guarantee Virtual Machine Connectivity of at least 99.9%. Consider these factors when making your upgrade decision. The first is that upgrading requires a VM reboot and this process takes 3-5 minutes to complete. The second is if the VMs in the list are mission-critical production VMs, evaluate the improved availability against the cost of premium disks.
Learn more about [Virtual machine - MigrateStandardStorageAccountToPremium (Upgrade the standard disks attached to your premium-capable VM to premium disks)](https://aka.ms/aa_storagestandardtopremium_learnmore).
Learn more about [Virtual machine - CheckPointPlatformServicingKnownIssueA (Chec
### Access to mandatory URLs missing for your Azure Virtual Desktop environment
-In order for a session host to deploy and register to Azure Virtual Desktop properly, you need to add a set of URLs to allowed list in case your virtual machine runs in restricted environment. After visiting the "Learn More" link, you will be able to see the minimum list of URLs you need to unblock to have a successful deployment and functional session host. For specific URL(s) missing from allowed list, you may also search Application event log for event 3702.
+In order for a session host to deploy and register to Azure Virtual Desktop properly, you need to add a set of URLs to the allowed list in case your virtual machine runs in a restricted environment. After visiting the "Learn More" link, you see the minimum list of URLs you need to unblock to have a successful deployment and functional session host. For specific URL(s) missing from the allowed list, you may also search the Application event log for event 3702.
Learn more about [Virtual machine - SessionHostNeedsAssistanceForUrlCheck (Access to mandatory URLs missing for your Azure Virtual Desktop environment)](../virtual-desktop/safe-url-list.md).
Learn more about [Azure Cosmos DB account - CosmosDBMongoSelfServeUpgrade (Upgra
Based on their names and configuration, we have detected the Azure Cosmos DB accounts below as being potentially used for production workloads. These accounts currently run in a single Azure region. You can increase their availability by configuring them to span at least two Azure regions.

> [!NOTE]
-> Additional regions will incur extra costs.
+> Additional regions incur extra costs.
Learn more about [Azure Cosmos DB account - CosmosDBSingleRegionProdAccounts (Add a second region to your production workloads on Azure Cosmos DB)](../cosmos-db/high-availability.md).
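
For illustration, a hedged Az PowerShell sketch of adding a second region; the account, resource group, and region names are placeholders, and the `-Location` list must include the existing region:

```powershell
# Adds West US alongside an existing East US region (placeholder names).
Update-AzCosmosDBAccountRegion -ResourceGroupName "myResourceGroup" `
    -Name "mycosmosaccount" -Location @("East US", "West US")
```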
Learn more about [Azure Cosmos DB account - CosmosDBMaxGlobalLSNReachedV4 (Upgra
### Upgrade your Azure Fluid Relay client library
-You have recently invoked the Azure Fluid Relay service with an old client library. Your Azure Fluid Relay client library should now be upgraded to the latest version to ensure your application remains operational. Upgrading will provide the most up-to-date functionality, as well as enhancements in performance and stability. For more information on the latest version to use and how to upgrade, please refer to the article.
+You have recently invoked the Azure Fluid Relay service with an old client library. Your Azure Fluid Relay client library should now be upgraded to the latest version to ensure your application remains operational. Upgrading provides the most up-to-date functionality, as well as enhancements in performance and stability. For more information on the latest version to use and how to upgrade, please refer to the article.
Learn more about [FluidRelay Server - UpgradeClientLibrary (Upgrade your Azure Fluid Relay client library)](https://github.com/microsoft/FluidFramework).
Learn more about [FluidRelay Server - UpgradeClientLibrary (Upgrade your Azure F
### Deprecation of Kafka 1.1 in HDInsight 4.0 Kafka cluster
-Starting July 1, 2020, customers will not be able to create new Kafka clusters with Kafka 1.1 on HDInsight 4.0. Existing clusters will run as is without support from Microsoft. Consider moving to Kafka 2.1 on HDInsight 4.0 by June 30 2020 to avoid potential system/support interruption.
+Starting July 1, 2020, you can't create new Kafka clusters with Kafka 1.1 on HDInsight 4.0. Existing clusters run as is without support from Microsoft. Consider moving to Kafka 2.1 on HDInsight 4.0 by June 30, 2020 to avoid potential system/support interruption.
Learn more about [HDInsight cluster - KafkaVersionRetirement (Deprecation of Kafka 1.1 in HDInsight 4.0 Kafka cluster)](https://aka.ms/hdiretirekafka).

### Deprecation of Older Spark Versions in HDInsight Spark cluster
-Starting July 1, 2020, customers will not be able to create new Spark clusters with Spark 2.1 and 2.2 on HDInsight 3.6, and Spark 2.3 on HDInsight 4.0. Existing clusters will run as is without support from Microsoft.
+Starting July 1, 2020, you can't create new Spark clusters with Spark 2.1 and 2.2 on HDInsight 3.6, and Spark 2.3 on HDInsight 4.0. Existing clusters run as is without support from Microsoft.
Learn more about [HDInsight cluster - SparkVersionRetirement (Deprecation of Older Spark Versions in HDInsight Spark cluster)](https://aka.ms/hdiretirespark).

### Enable critical updates to be applied to your HDInsight clusters
-HDInsight service is applying an important certificate related update to your cluster. However, one or more policies in your subscription are preventing HDInsight service from creating or modifying network resources (Load balancer, Network Interface and Public IP address) associated with your clusters and applying this update. Take actions to allow HDInsight service to create or modify network resources (Load balancer, Network interface and Public IP address) associated with your clusters before Jan 13, 2021 05:00 PM UTC. The HDInsight team will be performing updates between Jan 13, 2021 05:00 PM UTC and Jan 16, 2021 05:00 PM UTC. Failure to apply this update may result in your clusters becoming unhealthy and unusable.
+HDInsight service is applying an important certificate related update to your cluster. However, one or more policies in your subscription are preventing HDInsight service from creating or modifying network resources (Load balancer, Network Interface and Public IP address) associated with your clusters and applying this update. Take actions to allow HDInsight service to create or modify network resources (Load balancer, Network interface and Public IP address) associated with your clusters before Jan 13, 2021 05:00 PM UTC. The HDInsight team is performing updates between Jan 13, 2021 05:00 PM UTC and Jan 16, 2021 05:00 PM UTC. Failure to apply this update may result in your clusters becoming unhealthy and unusable.
Learn more about [HDInsight cluster - GCSCertRotation (Enable critical updates to be applied to your HDInsight clusters)](../hdinsight/hdinsight-hadoop-provision-linux-clusters.md).
Learn more about [HDInsight cluster - GCSCertRotationR3DropRecreate (Drop and re
### Apply critical updates to your HDInsight clusters
-The HDInsight service has attempted to apply a critical certificate update on all your running clusters. However, one or more policies in your subscription are preventing HDInsight service from creating or modifying network resources (Load balancer, Network Interface and Public IP address) associated with your clusters and applying this update. Remove or update your policy assignment to allow HDInsight service to create or modify network resources (Load balancer, Network interface and Public IP address) associated with your clusters before Jan 21, 2021 05:00 PM UTC. The HDInsight team will be performing updates between Jan 21, 2021 05:00 PM UTC and Jan 23, 2021 05:00 PM UTC. To verify the policy update, you can try to create network resources (Load balancer, Network interface and Public IP address) in the same resource group and Subnet where your cluster is in. Failure to apply this update may result in your clusters becoming unhealthy and unusable. You can also drop and recreate your cluster before Jan 25th, 2021 to prevent the cluster from becoming unhealthy and unusable. The HDInsight service will send another notification if we failed to apply the update to your clusters.
+The HDInsight service has attempted to apply a critical certificate update on all your running clusters. However, one or more policies in your subscription are preventing HDInsight service from creating or modifying network resources (Load balancer, Network Interface and Public IP address) associated with your clusters and applying this update. Remove or update your policy assignment to allow HDInsight service to create or modify network resources (Load balancer, Network interface and Public IP address) associated with your clusters before Jan 21, 2021 05:00 PM UTC. The HDInsight team is performing updates between Jan 21, 2021 05:00 PM UTC and Jan 23, 2021 05:00 PM UTC. To verify the policy update, you can try to create network resources (Load balancer, Network interface and Public IP address) in the same resource group and Subnet where your cluster is in. Failure to apply this update may result in your clusters becoming unhealthy and unusable. You can also drop and recreate your cluster before Jan 25th, 2021 to prevent the cluster from becoming unhealthy and unusable. The HDInsight service sends another notification if we failed to apply the update to your clusters.
Learn more about [HDInsight cluster - GCSCertRotationR3PlanPatch (Apply critical updates to your HDInsight clusters)](../hdinsight/hdinsight-hadoop-provision-linux-clusters.md).

### Action required: Migrate your A8-A11 HDInsight cluster before 1 March 2021
-You're receiving this notice because you have one or more active A8, A9, A10 or A11 HDInsight cluster. The A8-A11 virtual machines (VMs) will be retired in all regions on 1 March 2021. After that date, all clusters using A8-A11 will be deallocated. Migrate your affected clusters to another HDInsight supported VM (https://azure.microsoft.com/pricing/details/hdinsight/) before that date. For more details, see 'Learn More' link or contact us at askhdinsight@microsoft.com
+You're receiving this notice because you have one or more active A8, A9, A10 or A11 HDInsight clusters. The A8-A11 virtual machines (VMs) are retired in all regions on 1 March 2021. After that date, all clusters using A8-A11 are deallocated. Migrate your affected clusters to another HDInsight supported VM (https://azure.microsoft.com/pricing/details/hdinsight/) before that date. For more details, see the 'Learn More' link or contact us at askhdinsight@microsoft.com
Learn more about [HDInsight cluster - VM Deprecation (Action required: Migrate your A8-A11 HDInsight cluster before 1 March 2021)](https://azure.microsoft.com/updates/a8-a11-azure-virtual-machine-sizes-will-be-retired-on-march-1-2021/).
Learn more about [Media Service - AccountQuotaLimit (Increase Media Services quo
### Upgrade your SKU or add more instances to ensure fault tolerance
-Deploying two or more medium or large sized instances will ensure business continuity during outages caused by planned or unplanned maintenance.
+Deploying two or more medium or large sized instances ensures business continuity during outages caused by planned or unplanned maintenance.
Learn more about [Application gateway - AppGateway (Upgrade your SKU or add more instances to ensure fault tolerance)](https://aka.ms/aa_gatewayrec_learnmore).
Learn more about [Traffic Manager profile - GeneralProfile (Add at least one mor
### Add an endpoint configured to "All (World)"
-For geographic routing, traffic is routed to endpoints based on defined regions. When a region fails, there is no pre-defined failover. Having an endpoint where the Regional Grouping is configured to "All (World)" for geographic profiles will avoid traffic black holing and guarantee service remains available.
+For geographic routing, traffic is routed to endpoints based on defined regions. When a region fails, there is no pre-defined failover. Having an endpoint where the Regional Grouping is configured to "All (World)" for geographic profiles avoids traffic black holing and guarantees that service remains available.
Learn more about [Traffic Manager profile - GeographicProfile (Add an endpoint configured to "All (World)")](https://aka.ms/Rf7vc5).

### Add or move one endpoint to another Azure region
-All endpoints associated to this proximity profile are in the same region. Users from other regions may experience long latency when attempting to connect. Adding or moving an endpoint to another region will improve overall performance for proximity routing and provide better availability in case all endpoints in one region fail.
+All endpoints associated to this proximity profile are in the same region. Users from other regions may experience long latency when attempting to connect. Adding or moving an endpoint to another region improves overall performance for proximity routing and provides better availability in case all endpoints in one region fail.
Learn more about [Traffic Manager profile - ProximityProfile (Add or move one endpoint to another Azure region)](https://aka.ms/Ldkkdb).
Learn more about [ExpressRoute circuit - ExpressRouteGatewayE2EMonitoring (Imple
### Avoid hostname override to ensure site integrity
-Try to avoid overriding the hostname when configuring Application Gateway. Having a different domain on the frontend of Application Gateway than the one which is used to access the backend can potentially lead to cookies or redirect urls being broken. Note that this might not be the case in all situations and that certain categories of backends (like REST API's) in general are less sensitive to this. Make sure the backend is able to deal with this or update the Application Gateway configuration so the hostname does not need to be overwritten towards the backend. When used with App Service, attach a custom domain name to the Web App and avoid use of the `*.azurewebsites.net` host name towards the backend.
+Try to avoid overriding the hostname when configuring Application Gateway. Having a different domain on the frontend of Application Gateway than the one which is used to access the backend can potentially lead to cookies or redirect urls being broken. Note that this might not be the case in all situations and that certain categories of backends (like REST APIs) in general are less sensitive to this. Make sure the backend is able to deal with this or update the Application Gateway configuration so the hostname does not need to be overwritten towards the backend. When used with App Service, attach a custom domain name to the Web App and avoid use of the `*.azurewebsites.net` host name towards the backend.
Learn more about [Application gateway - AppGatewayHostOverride (Avoid hostname override to ensure site integrity)](https://aka.ms/appgw-advisor-usecustomdomain).
You appear to have ExpressRoute circuits peered in at least two different locati
Learn more about [ExpressRoute circuit - UseGlobalReachForDR (Use ExpressRoute Global Reach to improve your design for disaster recovery)](../expressroute/about-upgrade-circuit-bandwidth.md).
-### Azure WAF RuleSet CRS 3.1/3.2 has been updated with log4j2 vulnerability rule
+### Azure WAF RuleSet CRS 3.1/3.2 has been updated with Log4j 2 vulnerability rule
-In response to log4j2 vulnerability (CVE-2021-44228), Azure Web Application Firewall (WAF) RuleSet CRS 3.1/3.2 has been updated on your Application Gateway to help provide additional protection from this vulnerability. The rules are available under Rule 944240 and no action is needed to enable this.
+In response to Log4j 2 vulnerability (CVE-2021-44228), Azure Web Application Firewall (WAF) RuleSet CRS 3.1/3.2 has been updated on your Application Gateway to help provide additional protection from this vulnerability. The rules are available under Rule 944240 and no action is needed to enable this.
Learn more about [Application gateway - AppGwLog4JCVEPatchNotification (Azure WAF RuleSet CRS 3.1/3.2 has been updated with log4j2 vulnerability rule)](https://aka.ms/log4jcve).
-### Additional protection to mitigate Log4j2 vulnerability (CVE-2021-44228)
+### Additional protection to mitigate Log4j 2 vulnerability (CVE-2021-44228)
-To mitigate the impact of Log4j2 vulnerability, we recommend these steps:
+To mitigate the impact of Log4j 2 vulnerability, we recommend these steps:
-1) Upgrade Log4j2 to version 2.15.0 on your backend servers. If upgrade isn't possible, follow the system property guidance link below.
+1) Upgrade Log4j 2 to version 2.15.0 on your backend servers. If upgrade isn't possible, follow the system property guidance link below.
2) Take advantage of WAF Core rule sets (CRS) by upgrading to the WAF SKU.

Learn more about [Application gateway - AppGwLog4JCVEGenericNotification (Additional protection to mitigate Log4j2 vulnerability (CVE-2021-44228))](https://aka.ms/log4jcve).
Learn more about [Virtual network - natGateway (Use NAT gateway for outbound con
### Enable Active-Active gateways for redundancy
-In active-active configuration, both instances of the VPN gateway will establish S2S VPN tunnels to your on-premises VPN device. When a planned maintenance or unplanned event happens to one gateway instance, traffic will be switched over to the other active IPsec tunnel automatically.
+In active-active configuration, both instances of the VPN gateway establish S2S VPN tunnels to your on-premises VPN device. When a planned maintenance or unplanned event happens to one gateway instance, traffic is switched over to the other active IPsec tunnel automatically.
Learn more about [Virtual network gateway - VNetGatewayActiveActive (Enable Active-Active gateways for redundancy)](https://aka.ms/aa_vpnha_learnmore).
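
For illustration, a hedged Az PowerShell sketch of converting an existing gateway to active-active; all resource names are placeholders, and a second public IP address and gateway IP configuration are required:

```powershell
$rg  = "myResourceGroup"   # placeholder names throughout
$gw  = Get-AzVirtualNetworkGateway -Name "myVpnGateway" -ResourceGroupName $rg
$pip = New-AzPublicIpAddress -Name "myGwIp2" -ResourceGroupName $rg `
    -Location $gw.Location -AllocationMethod Static -Sku Standard
$vnet   = Get-AzVirtualNetwork -Name "myVnet" -ResourceGroupName $rg
$subnet = Get-AzVirtualNetworkSubnetConfig -Name "GatewaySubnet" -VirtualNetwork $vnet
# Add a second IP configuration, then enable the active-active feature.
Add-AzVirtualNetworkGatewayIpConfig -VirtualNetworkGateway $gw -Name "ipconfig2" `
    -SubnetId $subnet.Id -PublicIpAddressId $pip.Id
Set-AzVirtualNetworkGateway -VirtualNetworkGateway $gw -EnableActiveActiveFeature
```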
Learn more about [Recovery Services vault - Enable CRR (Enable Cross Region Rest
### You are close to exceeding storage quota of 2GB. Create a Standard search service.
-You are close to exceeding storage quota of 2GB. Create a Standard search service. Indexing operations will stop working when storage quota is exceeded.
+You are close to exceeding storage quota of 2GB. Create a Standard search service. Indexing operations stop working when storage quota is exceeded.
Learn more about [Search service - BasicServiceStorageQuota90percent (You are close to exceeding storage quota of 2GB. Create a Standard search service.)](https://aka.ms/azs/search-limits-quotas-capacity).

### You are close to exceeding storage quota of 50MB. Create a Basic or Standard search service.
-You are close to exceeding storage quota of 50MB. Create a Basic or Standard search service. Indexing operations will stop working when storage quota is exceeded.
+You are close to exceeding storage quota of 50MB. Create a Basic or Standard search service. Indexing operations stop working when storage quota is exceeded.
Learn more about [Search service - FreeServiceStorageQuota90percent (You are close to exceeding storage quota of 50MB. Create a Basic or Standard search service.)](https://aka.ms/azs/search-limits-quotas-capacity).

### You are close to exceeding your available storage quota. Add additional partitions if you need more storage.
-You are close to exceeding your available storage quota. Add additional partitions if you need more storage. After exceeding storage quota, you can still query, but indexing operations will no longer work.
+You are close to exceeding your available storage quota. Add additional partitions if you need more storage. After exceeding storage quota, you can still query, but indexing operations no longer work.
Learn more about [Search service - StandardServiceStorageQuota90percent (You are close to exceeding your available storage quota. Add additional partitions if you need more storage.)](https://aka.ms/azs/search-limits-quotas-capacity).
ai-services Cognitive Services And Machine Learning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/cognitive-services-and-machine-learning.md
Last updated 10/28/2021
Azure AI services provides machine learning capabilities to solve general problems such as analyzing text for emotional sentiment or analyzing images to recognize objects or faces. You don't need special machine learning or data science knowledge to use these services.
-[Azure AI services](./what-are-ai-services.md) is a group of services, each supporting different, generalized prediction capabilities. The services are divided into different categories to help you find the right service.
-
-|Service category|Purpose|
-|--|--|
-|[Decision](https://azure.microsoft.com/services/cognitive-services/directory/decision/)|Build apps that surface recommendations for informed and efficient decision-making.|
-|[Language](https://azure.microsoft.com/services/cognitive-services/directory/lang/)|Allow your apps to process natural language with pre-built scripts, evaluate sentiment and learn how to recognize what users want.|
-|[Search](https://azure.microsoft.com/services/cognitive-services/directory/search/)|Add Bing Search APIs to your apps and harness the ability to comb billions of webpages, images, videos, and news with a single API call.|
-|[Speech](https://azure.microsoft.com/services/cognitive-services/directory/speech/)|Convert speech into text and text into natural-sounding speech. Translate from one language to another and enable speaker verification and recognition.|
-|[Vision](https://azure.microsoft.com/services/cognitive-services/directory/vision/)|Recognize, identify, caption, index, and moderate your pictures, videos, and digital ink content.|
+[Azure AI services](./what-are-ai-services.md) is a group of services, each supporting different, generalized prediction capabilities.
Use Azure AI services when you:
The following data categorizes each service by which kind of data it allows or r
The services can be used in any application that can make REST API or SDK calls. Examples of applications include web sites, bots, virtual or mixed reality, desktop and mobile applications.
-## How is Azure Cognitive Search related to Azure AI services?
-
-[Azure Cognitive Search](../search/search-what-is-azure-search.md) is a separate cloud search service that optionally uses Azure AI services to add image and natural language processing to indexing workloads. Azure AI services is exposed in Azure Cognitive Search through [built-in skills](../search/cognitive-search-predefined-skills.md) that wrap individual APIs. You can use a free resource for walkthroughs, but plan on creating and attaching a [billable resource](../search/cognitive-search-attach-cognitive-services.md) for larger volumes.
- ## How can you use Azure AI services? Each service provides information about your data. You can combine services together to chain solutions such as converting speech (audio) to text, translating the text into many languages, then using the translated languages to get answers from a knowledge base. While Azure AI services can be used to create intelligent solutions on their own, they can also be combined with traditional machine learning projects to supplement models or accelerate the development process.
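As a hedged illustration of such chaining, the following sketch transcribes a short audio file and then translates the result. The endpoint shapes are the public Speech and Translator REST APIs; the keys, region, and `sample.wav` file are placeholders, and `jq` is assumed for JSON parsing:

```bash
# 1) Transcribe a short WAV file with the Speech to text REST API for short audio.
TEXT=$(curl -s -X POST \
  "https://eastus.stt.speech.microsoft.com/speech/recognition/conversation/cognitiveservices/v1?language=en-US" \
  -H "Ocp-Apim-Subscription-Key: $SPEECH_KEY" \
  -H "Content-Type: audio/wav" \
  --data-binary @sample.wav | jq -r '.DisplayText')

# 2) Translate the recognized text to French with the Translator REST API.
curl -s -X POST \
  "https://api.cognitive.microsofttranslator.com/translate?api-version=3.0&to=fr" \
  -H "Ocp-Apim-Subscription-Key: $TRANSLATOR_KEY" \
  -H "Ocp-Apim-Subscription-Region: eastus" \
  -H "Content-Type: application/json" \
  -d "[{\"Text\": \"$TEXT\"}]"
```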
ai-services Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/concepts/models.md
Azure OpenAI Service is powered by a diverse set of models with different capabi
GPT-4 can solve difficult problems with greater accuracy than any of OpenAI's previous models. Like GPT-3.5 Turbo, GPT-4 is optimized for chat and works well for traditional completions tasks. Use the Chat Completions API to use GPT-4. To learn more about how to interact with GPT-4 and the Chat Completions API check out our [in-depth how-to](../how-to/chatgpt.md).
-To request access to GPT-4, Azure OpenAI customers can [apply by filling out this form](https://aka.ms/oai/get-gpt4)
- - `gpt-4` - `gpt-4-32k`
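As a sketch of calling GPT-4 through the Chat Completions REST API (the resource name `my-openai` and the deployment name `gpt-4` are placeholders; `2023-05-15` is the GA API version):

```bash
curl -s "https://my-openai.openai.azure.com/openai/deployments/gpt-4/chat/completions?api-version=2023-05-15" \
  -H "api-key: $AZURE_OPENAI_KEY" \
  -H "Content-Type: application/json" \
  -d '{
        "messages": [
          {"role": "system", "content": "You are a helpful assistant."},
          {"role": "user", "content": "Summarize what the Chat Completions API does."}
        ]
      }'
```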
You can also use the Whisper model via Azure AI Speech [batch transcription](../
### GPT-4 models
+GPT-4 and GPT-4-32k are now available to all Azure OpenAI Service customers. Availability varies by region. If you don't see GPT-4 in your region, please check back later.
+ These models can only be used with the Chat Completion API. | Model ID | Base model Regions | Fine-Tuning Regions | Max Request (tokens) | Training Data (up to) | | | | | | |
-| `gpt-4` <sup>1,</sup><sup>2</sup> (0314) | | N/A | 8,192 | September 2021 |
-| `gpt-4-32k` <sup>1,</sup><sup>2</sup> (0314) | | N/A | 32,768 | September 2021 |
-| `gpt-4` <sup>1</sup><sup>3</sup> (0613) | Australia East, Canada East, East US, East US 2, France Central, Japan East, Sweden Central, Switzerland North, UK South | N/A | 8,192 | September 2021 |
-| `gpt-4-32k` <sup>1</sup><sup>3</sup> (0613) | Australia East, Canada East, East US, East US 2, France Central, Japan East, Sweden Central, Switzerland North, UK South | N/A | 32,768 | September 2021 |
+| `gpt-4` <sup>2</sup> (0314) | East US<sup>1</sup>, France Central<sup>1</sup> | N/A | 8,192 | September 2021 |
+| `gpt-4-32k` <sup>2</sup> (0314) | East US<sup>1</sup>, France Central<sup>1</sup> | N/A | 32,768 | September 2021 |
+| `gpt-4` (0613) | Australia East<sup>1</sup>, Canada East, East US<sup>1</sup>, East US 2<sup>1</sup>, France Central<sup>1</sup>, Japan East<sup>1</sup>, Sweden Central, Switzerland North, UK South<sup>1</sup> | N/A | 8,192 | September 2021 |
+| `gpt-4-32k` (0613) | Australia East<sup>1</sup>, Canada East, East US<sup>1</sup>, East US 2<sup>1</sup>, France Central<sup>1</sup>, Japan East<sup>1</sup>, Sweden Central, Switzerland North, UK South<sup>1</sup> | N/A | 32,768 | September 2021 |
-<sup>1</sup> The model is [only available by request](https://aka.ms/oai/get-gpt4).<br>
+<sup>1</sup> Due to high demand, availability is limited in this region.<br>
<sup>2</sup> Version `0314` of gpt-4 and gpt-4-32k will be retired no earlier than July 5, 2024. See [model updates](#model-updates) for model upgrade behavior.<br>
-<sup>3</sup> We are rolling out availability of new regions to customers gradually to ensure a smooth experience. In East US and France Central, customers with existing deployments of GPT-4 can create additional deployments of GPT-4 version 0613. For customers new to GPT-4 on Azure OpenAI, please use one of the other available regions.
### GPT-3.5 models
ai-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/overview.md
Prompt construction can be difficult. In practice, the prompt acts to configure
The service provides users access to several different models. Each model provides a different capability and price point.
-GPT-4 models are the latest available models. Due to high demand access to this model series is currently only available by request. To request access, existing Azure OpenAI customers can [apply by filling out this form](https://aka.ms/oai/get-gpt4)
- The DALL-E models, currently in preview, generate images from text prompts that the user provides. The Whisper models, currently in preview, can be used to transcribe and translate speech to text.
ai-services Embeddings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/tutorials/embeddings.md
In this tutorial, you learn how to:
## Prerequisites * An Azure subscription - [Create one for free](https://azure.microsoft.com/free/cognitive-services?azure-portal=true)
-* Access granted to Azure OpenAI in the desired Azure subscription
+* Access granted to Azure OpenAI in the desired Azure subscription.
Currently, access to this service is granted only by application. You can apply for access to Azure OpenAI by completing the form at <a href="https://aka.ms/oai/access" target="_blank">https://aka.ms/oai/access</a>. If you have an issue, open an issue on this repo to contact us. * <a href="https://www.python.org/" target="_blank">Python 3.7.1 or later</a> * The following Python libraries: openai, num2words, matplotlib, plotly, scipy, scikit-learn, pandas, tiktoken.
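One way to install the listed libraries (a sketch; pin versions as your project requires):

```bash
pip install openai num2words matplotlib plotly scipy scikit-learn pandas tiktoken
```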
ai-services Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/whats-new.md
keywords:
## September 2023
+### GPT-4
+GPT-4 and GPT-4-32k are now available to all Azure OpenAI Service customers. Customers no longer need to request access to use GPT-4 and GPT-4-32k. Availability may be limited by region. Check the [models page](concepts/models.md) for the latest information on model availability in each region.
+ ### GPT-3.5 Turbo Instruct Azure OpenAI Service now supports the GPT-3.5 Turbo Instruct model. This model has performance comparable to `text-davinci-003` and is available to use with the Completions API. Check the [models page](concepts/models.md) for the latest information on model availability in each region.
Azure OpenAI Service now supports speech to text APIs powered by OpenAI's Whispe
- General availability support for: - Chat Completion API version `2023-05-15`. - GPT-35-Turbo models.
- - GPT-4 model series. Due to high demand access to this model series is currently only available by request. To request access, existing Azure OpenAI customers can [apply by filling out this form](https://aka.ms/oai/get-gpt4)
+ - GPT-4 model series.
If you are currently using the `2023-03-15-preview` API, we recommend migrating to the GA `2023-05-15` API. If you are currently using API version `2022-12-01` this API remains GA, but does not include the latest Chat Completion capabilities.
ai-services Batch Transcription Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/batch-transcription-create.md
To use a Whisper model for batch transcription, you also need to set the `model`
Whisper models via batch transcription are supported in the East US, Southeast Asia, and West Europe regions. ::: zone pivot="rest-api"
-You can make a [Models_ListBaseModels](https://github.com/Azure/azure-rest-api-specs/tree/main/specification/cognitiveservices/data-plane/Speech/SpeechToText/preview/v3.2-preview.1) request to get available base models for all locales.
+You can make a [Models_ListBaseModels](https://westus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-2-preview1/operations/Models_ListBaseModels) request to get available base models for all locales.
Make an HTTP GET request as shown in the following example for the `eastus` region. Replace `YourSubscriptionKey` with your Speech resource key. Replace `eastus` if you're using a different region.
Make an HTTP GET request as shown in the following example for the `eastus` regi
curl -v -X GET "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.2-preview.1/models/base" -H "Ocp-Apim-Subscription-Key: YourSubscriptionKey" ```
+By default, only the 100 oldest base models are returned, so you can use the `skip` and `top` query parameters to page through the results. For example, the following request returns the next 100 base models after the first 100.
+
+```azurecli-interactive
+curl -v -X GET "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.2-preview.1/models/base?skip=100&top=100" -H "Ocp-Apim-Subscription-Key: YourSubscriptionKey"
+```
+ ::: zone-end ::: zone pivot="speech-cli"
ai-services Migrate V3 1 To V3 2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/migrate-v3-1-to-v3-2.md
Added token count and token error properties to the `EvaluationProperties` prope
- `tokenInsertionCount2`: The number of recognized tokens by model2 that are insertions. - `tokenSubstitutionCount2`: The number of recognized words by model2 that are substitutions.
-### Model copy
-
-Added the new `"/operations/models/copy/{id}"` operation. Used for copy models scenario.
-
-Added the new `"/models/{id}:copy"` operation. Schema in the new copy operation: `"$ref": "#/definitions/ModelCopyAuthorization"` Deprecated the `"/models/{id}:copyto"` operation. Schema in the deprecated copy operation: `"$ref": "#/definitions/ModelCopy"`
-
-Added the new `"/models:authorizecopy"` operation returns `"$ref": "#/definitions/ModelCopyAuthorization"`. This returned entity can be used in the new `"/models/{id}:copy"` operation.
-
-New entity definitions related to model copy authorization:
-- `ModelCopyAuthorization`-- `ModelCopyAuthorizationDefinition`: The Azure Resource ID of the source speech resource.-
-```json
-"ModelCopyAuthorization": {
- "title": "ModelCopyAuthorization",
- "required": [
- "expirationDateTime",
- "id",
- "sourceResourceId",
- "targetResourceEndpoint",
- "targetResourceId",
- "targetResourceRegion"
- ],
- "type": "object",
- "properties": {
- "targetResourceRegion": {
- "description": "The region (aka location) of the target speech resource (e.g., westus2).",
- "minLength": 1,
- "type": "string"
- },
- "targetResourceId": {
- "description": "The Azure Resource ID of the target speech resource.",
- "minLength": 1,
- "type": "string"
- },
- "targetResourceEndpoint": {
- "description": "The endpoint (base url) of the target resource (with custom domain name when it is used).",
- "minLength": 1,
- "type": "string"
- },
- "sourceResourceId": {
- "description": "The Azure Resource ID of the source speech resource.",
- "minLength": 1,
- "type": "string"
- },
- "expirationDateTime": {
- "format": "date-time",
- "description": "The expiration date of this copy authorization.",
- "type": "string"
- },
- "id": {
- "description": "The ID of this copy authorization.",
- "minLength": 1,
- "type": "string"
- }
- }
-},
-```
-
-```json
-"ModelCopyAuthorizationDefinition": {
- "title": "ModelCopyAuthorizationDefinition",
- "required": [
- "sourceResourceId"
- ],
- "type": "object",
- "properties": {
- "sourceResourceId": {
- "description": "The Azure Resource ID of the source speech resource.",
- "minLength": 1,
- "type": "string"
- }
- }
-},
-```
-
-### CustomModelLinks copy properties
-
-New `copy` property
-copyTo URI: The location to the obsolete model copy action. See operation \"Models_CopyTo\" for more details.
-copy URI: The location to the model copy action. See operation \"Models_Copy\" for more details.
-
-```json
-"CustomModelLinks": {
- "title": "CustomModelLinks",
- "type": "object",
- "properties": {
- "copyTo": {
- "format": "uri",
- "description": "The location to the obsolete model copy action. See operation \"Models_CopyTo\" for more details.",
- "type": "string",
- "readOnly": true
- },
- "copy": {
- "format": "uri",
- "description": "The location to the model copy action. See operation \"Models_Copy\" for more details.",
- "type": "string",
- "readOnly": true
- },
- "files": {
- "format": "uri",
- "description": "The location to get all files of this entity. See operation \"Models_ListFiles\" for more details.",
- "type": "string",
- "readOnly": true
- },
- "manifest": {
- "format": "uri",
- "description": "The location to get a manifest for this model to be used in the on-prem container. See operation \"Models_GetCustomModelManifest\" for more details.",
- "type": "string",
- "readOnly": true
- }
- },
- "readOnly": true
-},
-```
- ## Operation IDs You must update the base path in your code from `/speechtotext/v3.1` to `/speechtotext/v3.2-preview.1`. For example, to get base models in the `eastus` region, use `https://eastus.api.cognitive.microsoft.com/speechtotext/v3.2-preview.1/models/base` instead of `https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/models/base`.
ai-services Releasenotes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/releasenotes.md
Azure AI Speech is updated on an ongoing basis. To stay up-to-date with recent d
## Recent highlights
+* Azure AI Speech now supports OpenAI's Whisper model via the batch transcription API. To learn more, check out the [Create a batch transcription](./batch-transcription-create.md#using-whisper-models) guide.
* [Speech to text REST API version 3.2](./migrate-v3-1-to-v3-2.md) is available in public preview. * Speech SDK 1.32.1 was released in September 2023. * [Real-time diarization](./get-started-stt-diarization.md) is in public preview.
-* Some Speech Studio [scenarios](speech-studio-overview.md#speech-studio-scenarios) are available to try without an Azure subscription.
-* Text to speech [Batch synthesis API](./batch-synthesis.md) is available in public preview.
## Release notes
ai-services Document Translation Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/translator/document-translation/quickstarts/document-translation-sdk.md
Title: "Document Translation C#/.NET or Python client library"
-description: Use the Translator C#/.NET or Python client library (SDK) for cloud-based batch document translation service and process
+description: Use the Document Translator C#/.NET or Python client library (SDK) for cloud-based batch document translation service and process
-- Previously updated : 07/18/2023++ Last updated : 09/28/2023 zone_pivot_groups: programming-languages-document-sdk
aks Supported Kubernetes Versions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/supported-kubernetes-versions.md
For the past release history, see [Kubernetes history](https://github.com/kubern
| 1.24 | Apr 2022 | May 2022 | Jul 2022 | Jul 2023 | Until 1.28 GA | | 1.25 | Aug 2022 | Oct 2022 | Dec 2022 | Dec 2023 | Until 1.29 GA | | 1.26 | Dec 2022 | Feb 2023 | Apr 2023 | Mar 2024 | Until 1.30 GA |
-| 1.27* | Apr 2023 | Jun 2023 | Jul 2023 | Jul 2024, LTS until Jul 2026 | Until 1.31 GA |
+| 1.27* | Apr 2023 | Jun 2023 | Jul 2023 | Jul 2024, LTS until Jul 2025 | Until 1.31 GA |
| 1.28 | Aug 2023 | Sep 2023 | Oct 2023 || Until 1.32 GA| *\* Indicates the version is designated for Long Term Support*
aks Vertical Pod Autoscaler Api Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/vertical-pod-autoscaler-api-reference.md
+
+ Title: Vertical Pod Autoscaler API reference in Azure Kubernetes Service (AKS)
+description: Learn about the Vertical Pod Autoscaler API reference for Azure Kubernetes Service (AKS).
++ Last updated : 09/26/2023++
+# Vertical Pod Autoscaler API reference
+
+This article provides the API reference for the Vertical Pod Autoscaler feature of Azure Kubernetes Service.
+
+This reference is based on version 0.13.0 of the AKS implementation of VPA.
+
+## VerticalPodAutoscaler
+
+|Name |Object |Description |
+|-|-|-|
+|metadata |ObjectMeta | Standard [object metadata][object-metadata-ref].|
+|spec |VerticalPodAutoscalerSpec |The desired behavior of the Vertical Pod Autoscaler.|
+|status |VerticalPodAutoscalerStatus |The most recently observed status of the Vertical Pod Autoscaler. |
+
+## VerticalPodAutoscalerSpec
+
+|Name |Object |Description |
+|-|-|-|
+|targetRef |CrossVersionObjectReference | Reference to the controller managing the set of pods for the autoscaler to control. For example, a Deployment or a StatefulSet. You can point a Vertical Pod Autoscaler at any controller that has a [Scale][scale-ref] subresource. Typically, the Vertical Pod Autoscaler retrieves the pod set from the controller's ScaleStatus. |
+|updatePolicy |PodUpdatePolicy |Specifies whether recommended updates are applied when a pod is started and whether recommended updates are applied during the life of a pod. |
+|resourcePolicy |PodResourcePolicy |Specifies policies for how CPU and memory requests are adjusted for individual containers. The resource policy can be used to set constraints on the recommendations for individual containers. If not specified, the autoscaler computes recommended resources for all containers in the pod, without additional constraints.|
+|recommenders |VerticalPodAutoscalerRecommenderSelector |The recommender responsible for generating recommendations for the VPA object. Leave empty to use the default recommender. Otherwise, the list can contain exactly one entry for a user-provided alternative recommender. |
+
+## VerticalPodAutoscalerList
+
+|Name |Object |Description |
+|-|-|-|
+|metadata |ObjectMeta |Standard [object metadata][object-metadata-ref]. |
+|items |VerticalPodAutoscaler (array) |A list of Vertical Pod Autoscaler objects. |
+
+## PodUpdatePolicy
+
+|Name |Object |Description |
+|-|-|-|
+|updateMode |string |A string that specifies whether recommended updates are applied when a pod is started and whether recommended updates are applied during the life of a pod. Possible values are `Off`, `Initial`, `Recreate`, and `Auto`. The default is `Auto` if you don't specify a value. |
+|minReplicas |int32 |The minimal number of replicas that need to be alive for the Updater to attempt pod eviction (pending other checks like Pod Disruption Budget). Only positive values are allowed. Defaults to the global `--min-replicas` flag, which is set to `2`. |
+
+## PodResourcePolicy
+
+|Name |Object |Description |
+|-|-|-|
+|containerPolicies |ContainerResourcePolicy |An array of resource policies for individual containers. There can be at most one entry for every named container, and optionally a single wildcard entry with `containerName = '*'`, which handles all containers that don't have individual policies. |
+
+## ContainerResourcePolicy
+
+|Name |Object |Description |
+|-|-|-|
+|containerName |string |A string that specifies the name of the container that the policy applies to. If not specified, the policy serves as the default policy. |
+|mode |ContainerScalingMode |Specifies whether recommended updates are applied to the container when it is started and whether recommended updates are applied during the life of the container. Possible values are `Off` and `Auto`. The default is `Auto` if you don't specify a value. |
+|minAllowed |ResourceList |Specifies the minimum CPU request and memory request allowed for the container. By default, there is no minimum applied. |
+|maxAllowed |ResourceList |Specifies the maximum CPU request and memory request allowed for the container. By default, there is no maximum applied. |
+|controlledResources |[]ResourceName |Specifies the type of recommendations that are computed (and possibly applied) by the Vertical Pod Autoscaler. If empty, the default of [ResourceCPU, ResourceMemory] is used. |
+
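+The following sketch ties several of these fields together in a single `VerticalPodAutoscaler`; the deployment name `my-app` and the resource values are hypothetical:
+
+```bash
+kubectl apply -f - <<EOF
+apiVersion: autoscaling.k8s.io/v1
+kind: VerticalPodAutoscaler
+metadata:
+  name: my-app-vpa
+spec:
+  targetRef:
+    apiVersion: apps/v1
+    kind: Deployment
+    name: my-app
+  updatePolicy:
+    updateMode: "Auto"
+  resourcePolicy:
+    containerPolicies:
+    - containerName: "*"
+      minAllowed:
+        cpu: 100m
+        memory: 64Mi
+      maxAllowed:
+        cpu: "1"
+        memory: 512Mi
+      controlledResources: ["cpu", "memory"]
+EOF
+```
+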
+## VerticalPodAutoscalerRecommenderSelector
+
+|Name |Object |Description |
+|-|-|-|
+|name |string |A string that specifies the name of the recommender responsible for generating recommendations for this object. |
+
+## VerticalPodAutoscalerStatus
+
+|Name |Object |Description |
+|-|-|-|
+|recommendation |RecommendedPodResources |The most recently recommended CPU and memory requests. |
+|conditions |VerticalPodAutoscalerCondition | An array that describes the current state of the Vertical Pod Autoscaler. |
+
+## RecommendedPodResources
+
+|Name |Object |Description |
+|-|-|-|
+|containerRecommendations |RecommendedContainerResources |An array of resource recommendations for individual containers. |
+
+## RecommendedContainerResources
+
+|Name |Object |Description |
+|-|-|-|
+|containerName |string| A string that specifies the name of the container that the recommendation applies to. |
+|target |ResourceList |The recommended CPU request and memory request for the container. |
+|lowerBound |ResourceList |The minimum recommended CPU request and memory request for the container. This amount is not guaranteed to be sufficient for the application to be stable. Running with smaller CPU and memory requests is likely to have a significant impact on performance or availability. |
+|upperBound |ResourceList |The maximum recommended CPU request and memory request for the container. CPU and memory requests higher than these values are likely to be wasted. |
+|uncappedTarget |ResourceList |The most recent resource recommendation computed by the autoscaler, based on actual resource usage, not taking into account the **Container Resource Policy**. If actual resource usage causes the target to violate the **Container Resource Policy**, this might be different from the bounded recommendation. This field does not affect actual resource assignment. It is used only as a status indication. |
+
+## VerticalPodAutoscalerCondition
+
+|Name |Object |Description |
+|-|-|-|
+|type |VerticalPodAutoscalerConditionType |The type of condition being described. Possible values are `RecommendationProvided`, `LowConfidence`, `NoPodsMatched`, and `FetchingHistory`. |
+|status |ConditionStatus |The status of the condition. Possible values are `True`, `False`, and `Unknown`. |
+|lastTransitionTime |Time |The last time the condition made a transition from one status to another. |
+|reason |string |The reason for the last transition from one status to another. |
+|message |string |A human-readable string that gives details about the last transition from one status to another. |
+
+## Next steps
+
+See [Vertical Pod Autoscaler][vertical-pod-autoscaler] to understand how to improve cluster resource utilization and free up CPU and memory for other pods.
+
+<!-- EXTERNAL LINKS -->
+[object-metadata-ref]: https://github.com/kubernetes/community/blob/master/contributors/devel/sig-architecture/api-conventions.md#metadata
+[scale-ref]: https://v1-25.docs.kubernetes.io/docs/reference/generated/kubernetes-api/v1.25/#scalespec-v1-autoscaling
+
+<!-- INTERNAL LINKS -->
+[vertical-pod-autoscaler]: vertical-pod-autoscaler.md
aks Vertical Pod Autoscaler https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/vertical-pod-autoscaler.md
Title: Vertical Pod Autoscaling (preview) in Azure Kubernetes Service (AKS)
+ Title: Vertical Pod Autoscaling in Azure Kubernetes Service (AKS)
description: Learn how to vertically autoscale your pod on an Azure Kubernetes Service (AKS) cluster. - Previously updated : 03/17/2023+ Last updated : 09/28/2023
-# Vertical Pod Autoscaling (preview) in Azure Kubernetes Service (AKS)
+# Vertical Pod Autoscaling in Azure Kubernetes Service (AKS)
-This article provides an overview of Vertical Pod Autoscaler (VPA) (preview) in Azure Kubernetes Service (AKS), which is based on the open source [Kubernetes](https://github.com/kubernetes/autoscaler/tree/master/vertical-pod-autoscaler) version. When configured, it automatically sets resource requests and limits on containers per workload based on past usage. VPA makes certain pods are scheduled onto nodes that have the required CPU and memory resources.
+This article provides an overview of Vertical Pod Autoscaler (VPA) in Azure Kubernetes Service (AKS), which is based on the open source [Kubernetes](https://github.com/kubernetes/autoscaler/tree/master/vertical-pod-autoscaler) version. When configured, it automatically sets resource requests and limits on containers per workload based on past usage. VPA frees up CPU and memory for other pods and helps you make effective use of your AKS cluster.
+
+Vertical Pod autoscaling provides recommendations for resource usage over time. To manage sudden increases in resource usage, use the [Horizontal Pod Autoscaler][horizontal-pod-autoscaling], which scales the number of pod replicas as needed.
## Benefits
Vertical Pod Autoscaler provides the following benefits:
* It analyzes and adjusts processor and memory resources to *right size* your applications. VPA isn't only responsible for scaling up, but also for scaling down based on their resource use over time.
-* A Pod is evicted if it needs to change its resource requests if its scaling mode is set to *auto* or *recreate*.
+* A pod is evicted when its resource requests need to change, if its scaling mode is set to *auto* or *recreate*.
* Set CPU and memory constraints for individual containers by specifying a resource policy.
Vertical Pod Autoscaler provides the following benefits:
## Limitations
-* Vertical Pod autoscaling supports a maximum of 500 `VerticalPodAutoscaler` objects per cluster.
-* With this preview release, you can't change the `controlledValue` and `updateMode` of `managedCluster` object.
+* Vertical Pod autoscaling supports a maximum of 1,000 pods associated with `VerticalPodAutoscaler` objects per cluster.
+
+* VPA might recommend more resources than are available in the cluster, which prevents the pod from being assigned to a node and running, because the node doesn't have sufficient resources. You can overcome this limitation by setting the *LimitRange* to the maximum available resources per namespace, which ensures pods don't ask for more resources than specified (see the sketch after this list). Additionally, you can set maximum allowed resource recommendations per pod in a `VerticalPodAutoscaler` object. Be aware that VPA can't fully overcome an insufficient node resource issue. The limit range is fixed, but node resource usage changes dynamically.
+
+* We don't recommend using Vertical Pod Autoscaler with [Horizontal Pod Autoscaler][horizontal-pod-autoscaler-overview], which scales based on the same CPU and memory usage metrics.
+
+* VPA Recommender only stores up to eight days of historical data.
+
+* VPA does not support JVM-based workloads due to limited visibility into actual memory usage of the workload.
+
+* Running your own implementation of VPA alongside this managed implementation of VPA isn't recommended or supported. Having an extra or customized recommender is supported.
+
+* AKS Windows containers are not supported.
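+
+As an example of the *LimitRange* approach mentioned in the limitations above, the following sketch caps container resources per namespace; `my-namespace` and the values are placeholders:
+
+```bash
+kubectl apply -f - <<EOF
+apiVersion: v1
+kind: LimitRange
+metadata:
+  name: vpa-limit-range
+  namespace: my-namespace
+spec:
+  limits:
+  - type: Container
+    max:
+      cpu: "2"
+      memory: 2Gi
+EOF
+```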
## Before you begin * AKS cluster is running Kubernetes version 1.24 and higher.
-* The Azure CLI version 2.0.64 or later installed and configured. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][install-azure-cli].
+* The Azure CLI version 2.52.0 or later installed and configured. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][install-azure-cli].
* `kubectl` should be connected to the cluster you want to install VPA.
-## API Object
+## VPA overview
-The Vertical Pod Autoscaler is an API resource in the Kubernetes autoscaling API group. The version supported in this preview release is 0.11 can be found in the [Kubernetes autoscaler repo][github-autoscaler-repo-v011].
+### API object
-## Install the aks-preview Azure CLI extension
+The Vertical Pod Autoscaler is an API resource in the Kubernetes autoscaling API group. The version supported is 0.11 and higher, and can be found in the [Kubernetes autoscaler repo][github-autoscaler-repo-v011].
+The VPA object consists of three components:
-To install the aks-preview extension, run the following command:
+- **Recommender** - it monitors the current and past resource consumption and, based on it, provides recommended values for the containers' CPU and memory requests/limits. The **Recommender** monitors the metric history, Out of Memory (OOM) events, and the VPA deployment spec, and suggests fair requests. Limits are raised and lowered in proportion to the requests it recommends.
-```azurecli-interactive
-az extension add --name aks-preview
-```
+- **Updater** - it checks which of the managed pods have correct resources set and, if not, kills them so that they can be recreated by their controllers with the updated requests.
-Run the following command to update to the latest version of the extension released:
+- **VPA Admission controller** - it sets the correct resource requests on new pods (either created or recreated by their controller due to the Updater's activity).
-```azurecli-interactive
-az extension update --name aks-preview
-```
+### VPA admission controller
-## Register the 'AKS-VPAPreview' feature flag
+VPA admission controller is a binary that registers itself as a Mutating Admission Webhook. With each pod created, it gets a request from the apiserver, evaluates whether there's a matching VPA configuration, and, if so, uses the current recommendation to set resource requests in the pod.
-Register the `AKS-VPAPreview` feature flag by using the [az feature register][az-feature-register] command, as shown in the following example:
+A standalone job called `overlay-vpa-cert-webhook-check` runs outside of the VPA admission controller. It creates and renews the certificates, and registers the VPA admission controller as a `MutatingWebhookConfiguration`.
-```azurecli-interactive
-az feature register --namespace "Microsoft.ContainerService" --name "AKS-VPAPreview"
-```
+For high availability, AKS supports two admission controller replicas.
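+
+To verify that the webhook is registered, you can list the mutating webhook configurations; this is a quick sanity check rather than an official diagnostic:
+
+```bash
+kubectl get mutatingwebhookconfigurations | grep -i vpa
+```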
-It takes a few minutes for the status to show *Registered*. Verify the registration status by using the [az feature show][az-feature-show] command:
+### VPA object operation modes
-```azurecli-interactive
-az feature show --namespace "Microsoft.ContainerService" --name "AKS-VPAPreview"
-```
+You insert a Vertical Pod Autoscaler resource for each controller whose resource requirements you want computed automatically. This is most commonly a *deployment*. There are four modes in which VPAs operate:
-When the status reflects *Registered*, refresh the registration of the *Microsoft.ContainerService* resource provider by using the [az provider register][az-provider-register] command:
+* `Auto` - VPA assigns resource requests during pod creation and updates existing pods using the preferred update mechanism. Currently, `Auto` is equivalent to `Recreate` and is the default mode. Once restart-free ("in-place") update of pod requests is available, it may be used as the preferred update mechanism by the `Auto` mode. When using `Recreate` mode, VPA evicts a pod if it needs to change its resource requests. Eviction may cause the pods to be restarted all at once, which can cause application inconsistencies. You can limit restarts and maintain consistency in this situation by using a [PodDisruptionBudget][pod-disruption-budget].
+* `Recreate` - VPA assigns resource requests during pod creation and also updates existing pods by evicting them when the requested resources differ significantly from the new recommendation (respecting the Pod Disruption Budget, if defined). Use this mode rarely, and only if you need to ensure that the pods are restarted whenever the resource request changes. Otherwise, prefer the `Auto` mode, which may take advantage of restart-free updates once they're available.
+* `Initial` - VPA only assigns resource requests during pod creation and never changes them afterward.
+* `Off` - VPA doesn't automatically change the resource requirements of the pods. The recommendations are calculated and can be inspected in the VPA object.
-```azurecli-interactive
-az provider register --namespace Microsoft.ContainerService
-```
+## Deployment pattern during application development
+
+If you're unfamiliar with VPA, a common deployment pattern is to perform the following steps during application development: identify the application's unique resource utilization characteristics, test VPA to verify it's functioning properly, and test alongside other Kubernetes components to optimize resource utilization of the cluster.
+
+1. Set `updateMode = Off` in your production cluster and run VPA in recommendation mode so you can test and gain familiarity with VPA (see the sketch after these steps). `updateMode = Off` avoids introducing a misconfiguration that can cause an outage.
+
+2. Establish observability first by collecting actual resource utilization telemetry over a given period of time. This helps you understand the behavior of your workloads and spot symptoms or issues in container and pod resources influenced by the workloads running on them.
+
+3. Get familiar with the monitoring data to understand the performance characteristics. Based on this insight, set the desired requests/limits accordingly and apply them in the next deployment or upgrade.
+
+4. Set `updateMode` value to `Auto`, `Recreate`, or `Initial` depending on your requirements.
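+
+For step 1, a minimal recommendation-only `VerticalPodAutoscaler` might look like the following sketch; the deployment name `my-app` is a placeholder:
+
+```bash
+kubectl apply -f - <<EOF
+apiVersion: autoscaling.k8s.io/v1
+kind: VerticalPodAutoscaler
+metadata:
+  name: my-app-vpa
+spec:
+  targetRef:
+    apiVersion: apps/v1
+    kind: Deployment
+    name: my-app
+  updatePolicy:
+    updateMode: "Off"
+EOF
+```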
## Deploy, upgrade, or disable VPA on a cluster
vpa-updater-56f9bfc96f-jgq2g 1/1 Running 0 41m
## Test your Vertical Pod Autoscaler installation
-The following steps create a deployment with two pods, each running a single container that requests 100 millicores and tries to utilize slightly above 500 millicores. Also created is a VPA config pointing at the deployment. The VPA observes the behavior of the pods, and after about five minutes, they're updated with a higher CPU request.
+The following steps create a deployment with two pods, each running a single container that requests 100 millicores and tries to utilize slightly above 500 millicores. A VPA config is also created, pointing at the deployment. The VPA observes the behavior of the pods, and after about five minutes, they're updated with a higher CPU request.
1. Create a file named `hamster.yaml` and copy in the following manifest of the Vertical Pod Autoscaler example from the [kubernetes/autoscaler][kubernetes-autoscaler-github-repo] GitHub repository.
The following steps create a deployment with two pods, each running a single con
Environment: <none> ```
-## Set Pod Autoscaler requests automatically
+## Set Pod Autoscaler requests
-Vertical Pod autoscaling uses the `VerticalPodAutoscaler` object to automatically set resource requests on Pods when the updateMode is set to **Auto** or **Recreate**.
+Vertical Pod autoscaling uses the `VerticalPodAutoscaler` object to automatically set resource requests on pods when the updateMode is set to **Auto**. You can set a different value depending on your requirements and testing. In this example, updateMode is set to `Recreate`.
1. Enable VPA for your cluster by running the following command. Replace cluster name `myAKSCluster` with the name of your AKS cluster and replace `myResourceGroup` with the name of the resource group the cluster is hosted in.
Vertical Pod autoscaling uses the `VerticalPodAutoscaler` object to automaticall
args: ["-c", "while true; do timeout 0.5s yes >/dev/null; sleep 0.5s; done"] ```
- This manifest describes a deployment that has two Pods. Each Pod has one container that requests 100 milliCPU and 50 MiB of memory.
+ This manifest describes a deployment that has two pods. Each pod has one container that requests 100 milliCPU and 50 MiB of memory.
3. Create the pod with the [kubectl create][kubectl-create] command, as shown in the following example:
Vertical Pod autoscaling uses the `VerticalPodAutoscaler` object to automaticall
kind: Deployment name: vpa-auto-deployment updatePolicy:
- updateMode: "Auto"
+ updateMode: "Recreate"
```
- The `targetRef.name` value specifies that any Pod that is controlled by a deployment named `vpa-auto-deployment` belongs to this `VerticalPodAutoscaler`. The `updateMode` value of `Auto` means that the Vertical Pod Autoscaler controller can delete a Pod, adjust the CPU and memory requests, and then start a new Pod.
+ The `targetRef.name` value specifies that any pod that's controlled by a deployment named `vpa-auto-deployment` belongs to this `VerticalPodAutoscaler`. The `updateMode` value of `Recreate` means that the Vertical Pod Autoscaler controller can delete a pod, adjust the CPU and memory requests, and then create a new pod.
6. Apply the manifest to the cluster using the [kubectl apply][kubectl-apply] command:
Vertical Pod autoscaling uses the `VerticalPodAutoscaler` object to automaticall
kubectl create -f azure-vpa-auto.yaml ```
-7. Wait a few minutes, and view the running Pods again by running the following [kubectl get][kubectl-get] command:
+7. Wait a few minutes, and view the running pods again by running the following [kubectl get][kubectl-get] command:
```bash kubectl get pods
Vertical Pod autoscaling uses the `VerticalPodAutoscaler` object to automaticall
vpa-auto-deployment-54465fb978-vbj68 1/1 Running 0 109s ```
-8. Get detailed information about one of your running Pods by using the [Kubectl get][kubectl-get] command. Replace `podName` with the name of one of your Pods that you retrieved in the previous step.
+8. Get detailed information about one of your running pods by using the [Kubectl get][kubectl-get] command. Replace `podName` with the name of one of your pods that you retrieved in the previous step.
```bash kubectl get pod podName --output yaml
Vertical Pod autoscaling uses the `VerticalPodAutoscaler` object to automaticall
The results show the `target` attribute specifies that for the container to run optimally, it doesn't need to change the CPU or the memory target. Your results may vary where the target CPU and memory recommendation are higher.
- The Vertical Pod Autoscaler uses the `lowerBound` and `upperBound` attributes to decide whether to delete a Pod and replace it with a new Pod. If a Pod has requests less than the lower bound or greater than the upper bound, the Vertical Pod Autoscaler deletes the Pod and replaces it with a Pod that meets the target attribute.
+ The Vertical Pod Autoscaler uses the `lowerBound` and `upperBound` attributes to decide whether to delete a pod and replace it with a new pod. If a pod has requests less than the lower bound or greater than the upper bound, the Vertical Pod Autoscaler deletes the pod and replaces it with a pod that meets the target attribute.
+
+## Extra Recommender for Vertical Pod Autoscaler
+
+In the VPA, one of the core components is the Recommender. It provides recommendations for resource usage based on real-time resource consumption. AKS deploys a recommender when a cluster enables VPA. You can deploy a customized recommender or an extra recommender with the same image as the default one. The benefit of a customized recommender is that you can customize your recommendation logic. With an extra recommender, you can partition VPAs across multiple recommenders when there are many VPA objects.
+
+The following example is an extra recommender that you apply to your existing AKS cluster. You then configure the VPA object to use the extra recommender.
+
+1. Create a file named `extra_recommender.yaml` and copy in the following manifest:
+
+ ```yml
+ apiVersion: apps/v1
+ kind: Deployment
+ metadata:
+ name: extra-recommender
+ namespace: kube-system
+ spec:
+ replicas: 1
+ selector:
+ matchLabels:
+ app: extra-recommender
+ template:
+ metadata:
+ labels:
+ app: extra-recommender
+ spec:
+ serviceAccountName: vpa-recommender
+ securityContext:
+ runAsNonRoot: true
+ runAsUser: 65534 # nobody
+ containers:
+ - name: recommender
+ image: registry.k8s.io/autoscaling/vpa-recommender:0.13.0
+ imagePullPolicy: Always
+ args:
+ - --recommender-name=extra-recommender
+ resources:
+ limits:
+ cpu: 200m
+ memory: 1000Mi
+ requests:
+ cpu: 50m
+ memory: 500Mi
+ ports:
+ - name: prometheus
+ containerPort: 8942
+ ```
+
+2. Deploy the `extra_recommender.yaml` Vertical Pod Autoscaler example using the [kubectl apply][kubectl-apply] command and specify the name of your YAML manifest.
+
+ ```bash
+ kubectl apply -f extra_recommender.yaml
+ ```
+
+ After a few moments, the command completes and returns confirmation that the deployment was created.
+
+3. Create a file named `hamster_extra_recommender.yaml` and copy in the following manifest:
+
+ ```yml
+ apiVersion: "autoscaling.k8s.io/v1"
+ kind: VerticalPodAutoscaler
+ metadata:
+ name: hamster-vpa
+ spec:
+ recommenders:
+ - name: 'extra-recommender'
+ targetRef:
+ apiVersion: "apps/v1"
+ kind: Deployment
+ name: hamster
+ updatePolicy:
+ updateMode: "Auto"
+ resourcePolicy:
+ containerPolicies:
+ - containerName: '*'
+ minAllowed:
+ cpu: 100m
+ memory: 50Mi
+ maxAllowed:
+ cpu: 1
+ memory: 500Mi
+ controlledResources: ["cpu", "memory"]
+ ---
+ apiVersion: apps/v1
+ kind: Deployment
+ metadata:
+ name: hamster
+ spec:
+ selector:
+ matchLabels:
+ app: hamster
+ replicas: 2
+ template:
+ metadata:
+ labels:
+ app: hamster
+ spec:
+ securityContext:
+ runAsNonRoot: true
+ runAsUser: 65534 # nobody
+ containers:
+ - name: hamster
+ image: k8s.gcr.io/ubuntu-slim:0.1
+ resources:
+ requests:
+ cpu: 100m
+ memory: 50Mi
+ command: ["/bin/sh"]
+ args:
+ - "-c"
+ - "while true; do timeout 0.5s yes >; sleep 0.5s; done"
+ ```
+
+ If `memory` is not specified in `controlledResources`, the Recommender doesn't respond to OOM events. In this case, you're only setting CPU in `controlledResources`. `controlledValues` allows you to choose whether to update the container's resource requests with the `RequestsOnly` option, or both resource requests and limits with the `RequestsAndLimits` option. The default value is `RequestsAndLimits`. If you use the `RequestsAndLimits` option, **requests** are computed based on actual usage, and **limits** are calculated based on the current pod's request-to-limit ratio.
+
+ For example, if you start with a pod that requests 2 CPUs and limits to 4 CPUs, VPA always sets the limit to be twice as much as requests. The same principle applies to memory. When you use the `RequestsAndLimits` mode, it can serve as a blueprint for your initial application resource requests and limits.
+
+You can simplify the VPA object by using `Auto` mode and computing recommendations for both CPU and memory.
+
+4. Deploy the `hamster_extra_recommender.yaml` example using the [kubectl apply][kubectl-apply] command and specify the name of your YAML manifest.
+
+ ```bash
+ kubectl apply -f hamster_extra_recommender.yaml
+ ```
+
+5. Wait for the vpa-updater to launch a new hamster pod, which should take a few minutes. You can monitor the pods using the [kubectl get][kubectl-get] command.
+
+ ```bash
+ kubectl get --watch pods -l app=hamster
+ ```
+
+6. When a new hamster pod is started, describe the pod by running the [kubectl describe][kubectl-describe] command and view the updated CPU and memory reservations.
+
+ ```bash
+ kubectl describe pod hamster-<exampleID>
+ ```
+
+ The example output is a snippet of the information describing the pod:
+
+ ```output
+ State: Running
+ Started: Wed, 28 Sep 2022 15:09:51 -0400
+ Ready: True
+ Restart Count: 0
+ Requests:
+ cpu: 587m
+ memory: 262144k
+ Environment: <none>
+ ```
+
+7. To view updated recommendations from VPA, run the [kubectl describe][kubectl-describe] command to describe the hamster-vpa resource information.
+
+ ```bash
+ kubectl describe vpa/hamster-vpa
+ ```
+
+ The example output is a snippet of the information about the resource utilization:
+
+ ```output
+ Spec:
+ recommenders:
+ Name: extra-recommender
+ ```
+
+## Troubleshooting
+
+To diagnose problems with a VPA installation, perform the following steps.
+
+1. Check if all system components are running using the following command:
+
+ ```bash
+ kubectl --namespace=kube-system get pods | grep vpa
+ ```
+
+   The output should list three pods: recommender, updater, and admission-controller, all showing a status of `Running`.
+
+2. Confirm if the system components log any errors. For each of the pods returned by the previous command, run the following command:
+
+ ```bash
+ kubectl --namespace=kube-system logs [pod name] | grep -e '^E[0-9]\{4\}'
+ ```
+
+3. Confirm that the custom resource definition was created by running the following command:
+
+ ```bash
+ kubectl get customresourcedefinition | grep verticalpodautoscalers
+ ```
## Next steps
-This article showed you how to automatically scale resource utilization, such as CPU and memory, of cluster nodes to match application requirements. You can also use the horizontal pod autoscaler to automatically adjust the number of pods that run your application. For steps on using the horizontal pod autoscaler, see [Scale applications in AKS][scale-applications-in-aks].
+This article showed you how to automatically scale resource utilization, such as CPU and memory, of cluster nodes to match application requirements.
+
+* You can also use the horizontal pod autoscaler to automatically adjust the number of pods that run your application. For steps on using the horizontal pod autoscaler, see [Scale applications in AKS][scale-applications-in-aks].
+
+* See the Vertical Pod Autoscaler [API reference][vertical-pod-autoscaler-api-reference] to learn more about the definitions for related VPA objects.
<!-- EXTERNAL LINKS --> [kubernetes-autoscaler-github-repo]: https://github.com/kubernetes/autoscaler/blob/master/vertical-pod-autoscaler/examples/hamster.yaml
This article showed you how to automatically scale resource utilization, such as
[kubectl-get]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#get [kubectl-describe]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#describe [github-autoscaler-repo-v011]: https://github.com/kubernetes/autoscaler/blob/vpa-release-0.11/vertical-pod-autoscaler/pkg/apis/autoscaling.k8s.io/v1/types.go
+[pod-disruption-budget]: https://kubernetes.io/docs/concepts/workloads/pods/disruptions/
<!-- INTERNAL LINKS --> [get-started-with-aks]: /azure/architecture/reference-architectures/containers/aks-start-here
This article showed you how to automatically scale resource utilization, such as
[az-provider-register]: /cli/azure/provider#az-provider-register [az-feature-register]: /cli/azure/feature#az-feature-register [az-feature-show]: /cli/azure/feature#az-feature-show
+[horizontal-pod-autoscaler-overview]: concepts-scale.md#horizontal-pod-autoscaler
+[vertical-pod-autoscaler-api-reference]: vertical-pod-autoscaler-api-reference.md
aks Windows Aks Partner Solutions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/windows-aks-partner-solutions.md
Storage enables standardized and seamless storage interactions, ensuring high ap
![Logo of NetApp.](./media/windows-aks-partner-solutions/netapp.png)
-Astra provides dynamic storage provisioning for stateful workloads on Azure Kubernetes Service (AKS). It also provides data protection using snapshots and clones. Provision SMB volumes through the Kubernetes control plane, making storage seamless and on-demand for all your Windows AKS workloads.
+[Astra](https://www.netapp.com/cloud-services/astra/) provides dynamic storage provisioning for stateful workloads on Azure Kubernetes Service (AKS). It also provides data protection using snapshots and clones. Provision SMB volumes through the Kubernetes control plane, making storage seamless and on-demand for all your Windows AKS workloads.
Follow the steps provided in [this blog](https://techcommunity.microsoft.com/t5/azure-architecture-blog/azure-netapp-files-smb-volumes-for-azure-kubernetes-services/ba-p/3052900) post to dynamically provision SMB volumes for Windows AKS workloads.
app-service App Gateway With Service Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/networking/app-gateway-with-service-endpoints.md
na Previously updated : 08/04/2021 Last updated : 09/29/2023 ms.devlang: azurecli # Application Gateway integration
-There are three variations of App Service that require slightly different configuration of the integration with Azure Application Gateway. The variations include regular App Service - also known as multi-tenant, Internal Load Balancer (ILB) App Service Environment and External App Service Environment. This article will walk through how to configure it with App Service (multi-tenant) using service endpoint to secure traffic. The article will also discuss considerations around using private endpoint and integrating with ILB, and External App Service Environment. Finally the article has considerations on scm/kudu site.
+There are three variations of App Service that require slightly different configuration of the integration with Azure Application Gateway. The variations include regular App Service - also known as multitenant, Internal Load Balancer (ILB) App Service Environment, and External App Service Environment. This article walks through how to configure it with App Service (multitenant) using service endpoints to secure traffic. The article also discusses considerations around using private endpoint and integrating with ILB and External App Service Environment. Finally, the article covers considerations for the scm/kudu site.
-## Integration with App Service (multi-tenant)
-App Service (multi-tenant) has a public internet facing endpoint. Using [service endpoints](../../virtual-network/virtual-network-service-endpoints-overview.md) you can allow traffic only from a specific subnet within an Azure Virtual Network and block everything else. In the following scenario, we'll use this functionality to ensure that an App Service instance can only receive traffic from a specific Application Gateway instance.
+## Integration with App Service (multitenant)
+App Service (multitenant) has a public internet-facing endpoint. Using [service endpoints](../../virtual-network/virtual-network-service-endpoints-overview.md) you can allow traffic only from a specific subnet within an Azure Virtual Network and block everything else. In the following scenario, we use this functionality to ensure that an App Service instance can only receive traffic from a specific Application Gateway instance.
:::image type="content" source="./media/app-gateway-with-service-endpoints/service-endpoints-appgw.png" alt-text="Diagram shows the Internet flowing to an Application Gateway in an Azure Virtual Network and flowing from there through a firewall icon to instances of apps in App Service.":::
-There are two parts to this configuration besides creating the App Service and the Application Gateway. The first part is enabling service endpoints in the subnet of the Virtual Network where the Application Gateway is deployed. Service endpoints will ensure all network traffic leaving the subnet towards the App Service will be tagged with the specific subnet ID. The second part is to set an access restriction of the specific web app to ensure that only traffic tagged with this specific subnet ID is allowed. You can configure it using different tools depending on preference.
+There are two parts to this configuration besides creating the App Service and the Application Gateway. The first part is enabling service endpoints in the subnet of the Virtual Network where the Application Gateway is deployed. Service endpoints ensure all network traffic leaving the subnet towards the App Service is tagged with the specific subnet ID. The second part is to set an access restriction of the specific web app to ensure that only traffic tagged with this specific subnet ID is allowed. You can configure it using different tools depending on preference.
## Using Azure portal
-With Azure portal, you follow four steps to provision and configure the setup. If you have existing resources, you can skip the first steps.
+With Azure portal, you follow four steps to create and configure the setup. If you have existing resources, you can skip the first steps.
1. Create an App Service using one of the Quickstarts in the App Service documentation, for example [.NET Core Quickstart](../quickstart-dotnetcore.md) 2. Create an Application Gateway using the [portal Quickstart](../../application-gateway/quick-create-portal.md), but skip the Add backend targets section. 3. Configure [App Service as a backend in Application Gateway](../../application-gateway/configure-web-app.md), but skip the Restrict access section. 4. Finally create the [access restriction using service endpoints](../../app-service/app-service-ip-restrictions.md#set-a-service-endpoint-based-rule).
-You can now access the App Service through Application Gateway, but if you try to access the App Service directly, you should receive a 403 HTTP error indicating that the web site is stopped.
+You can now access the App Service through Application Gateway. If you try to access the App Service directly, you should receive a 403 HTTP error indicating that the web site is stopped.
:::image type="content" source="./media/app-gateway-with-service-endpoints/website-403-forbidden.png" alt-text="Screenshot shows the text of an Error 403 - Forbidden."::: ## Using Azure Resource Manager template
-The [Resource Manager deployment template][template-app-gateway-app-service-complete] will provision a complete scenario. The scenario consists of an App Service instance locked down with service endpoints and access restriction to only receive traffic from Application Gateway. The template includes many Smart Defaults and unique postfixes added to the resource names for it to be simple. To override them, you'll have to clone the repo or download the template and edit it.
+The [Resource Manager deployment template][template-app-gateway-app-service-complete] creates a complete scenario. The scenario consists of an App Service instance locked down with service endpoints and access restriction to only receive traffic from Application Gateway. The template includes smart defaults and unique postfixes added to the resource names to keep it simple. To override them, you have to clone the repo or download the template and edit it.
-To apply the template you can use the Deploy to Azure button found in the description of the template, or you can use appropriate PowerShell/CLI.
+To apply the template, you can use the Deploy to Azure button found in the description of the template, or you can use appropriate PowerShell/CLI.
## Using Azure CLI
-The [Azure CLI sample](../../app-service/scripts/cli-integrate-app-service-with-application-gateway.md) will provision an App Service locked down with service endpoints and access restriction to only receive traffic from Application Gateway. If you only need to isolate traffic to an existing App Service from an existing Application Gateway, the following command is sufficient.
+The [Azure CLI sample](../../app-service/scripts/cli-integrate-app-service-with-application-gateway.md) creates an App Service locked down with service endpoints and access restriction to only receive traffic from Application Gateway. If you only need to isolate traffic to an existing App Service from an existing Application Gateway, the following command is sufficient.
```azurecli-interactive az webapp config access-restriction add --resource-group myRG --name myWebApp --rule-name AppGwSubnet --priority 200 --subnet mySubNetName --vnet-name myVnetName ```
-In the default configuration, the command will ensure both setup of the service endpoint configuration in the subnet and the access restriction in the App Service.
+In the default configuration, the command ensures both setup of the service endpoint configuration in the subnet and the access restriction in the App Service.
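To confirm the resulting rule, you can list the access restrictions on the app; a verification sketch using the same placeholder names:

```azurecli-interactive
az webapp config access-restriction show --resource-group myRG --name myWebApp
```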
## Considerations when using private endpoint
-As an alternative to service endpoint, you can use private endpoint to secure traffic between Application Gateway and App Service (multi-tenant). You will need to ensure that Application Gateway can DNS resolve the private IP of the App Service apps or alternatively that you use the private IP in the backend pool and override the host name in the http settings.
+As an alternative to service endpoint, you can use private endpoint to secure traffic between Application Gateway and App Service (multitenant). You need to ensure that Application Gateway can DNS resolve the private IP of the App Service apps. Alternatively you can use the private IP in the backend pool and override the host name in the http settings.
:::image type="content" source="./media/app-gateway-with-service-endpoints/private-endpoint-appgw.png" alt-text="Diagram shows the traffic flowing to an Application Gateway in an Azure Virtual Network and flowing from there through a private endpoint to instances of apps in App Service.":::
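As a sketch of the backend pool approach, assuming hypothetical pool and HTTP settings names (`myBackendPool`, `myHttpSettings`) and a placeholder private IP:
```azurecli-interactive
# Point the backend pool at the app's private endpoint IP (placeholder address)
az network application-gateway address-pool update --resource-group myRG --gateway-name myAppGw --name myBackendPool --servers 10.0.1.4
# Override the host name so App Service receives the host name it expects
az network application-gateway http-settings update --resource-group myRG --gateway-name myAppGw --name myHttpSettings --host-name myWebApp.azurewebsites.net
```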
-Application Gateway will cache the DNS lookup results, so if you use FQDNs and rely on DNS lookup to get the private IP address, then you may need to restart the Application Gateway if the DNS update or link to Azure private DNS zone was done after configuring the backend pool. To restart the Application Gateway, you must start and stop the instance. You can do this with Azure CLI:
+Application Gateway caches the DNS lookup results. If you use FQDNs and rely on DNS lookup to get the private IP address, you may need to restart the Application Gateway if the DNS update or the link to the Azure private DNS zone happened after you configured the backend pool. To restart the Application Gateway, stop and start the instance, as shown here with Azure CLI:
```azurecli-interactive
az network application-gateway stop --resource-group myRG --name myAppGw
az network application-gateway start --resource-group myRG --name myAppGw
```
## Considerations for ILB ASE
ILB App Service Environment isn't exposed to the internet, and traffic between the instance and an Application Gateway is therefore already isolated to the virtual network. The following [how-to guide](../environment/integrate-with-application-gateway.md) configures an ILB App Service Environment and integrates it with an Application Gateway by using the Azure portal.
-If you want to ensure that only traffic from the Application Gateway subnet is reaching the App Service Environment, you can configure a Network security group (NSG) which affect all web apps in the App Service Environment. For the NSG, you are able to specify the subnet IP range and optionally the ports (80/443). Make sure you don't override the [required NSG rules](../environment/network-info.md#network-security-groups) for App Service Environment to function correctly.
+If you want to ensure that only traffic from the Application Gateway subnet reaches the App Service Environment, you can configure a network security group (NSG) that affects all web apps in the App Service Environment. For the NSG, you can specify the subnet IP range and optionally the ports (80/443). Make sure you don't override the [required NSG rules](../environment/network-info.md#network-security-groups) that App Service Environment needs to function correctly.
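For example, assuming the Application Gateway subnet is `10.0.0.0/24` and a hypothetical NSG name, a rule like the following allows only that subnet to reach ports 80 and 443:
```azurecli-interactive
az network nsg rule create --resource-group myRG --nsg-name myAseNsg --name AllowAppGwSubnet --priority 300 --direction Inbound --access Allow --protocol Tcp --source-address-prefixes 10.0.0.0/24 --destination-port-ranges 80 443
```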
-To isolate traffic to an individual web app you'll need to use ip-based access restrictions as service endpoints will not work for ASE. The IP address should be the private IP of the Application Gateway instance.
+To isolate traffic to an individual web app, you need to use IP-based access restrictions, because service endpoints don't work with App Service Environment. The IP address should be the private IP of the Application Gateway instance.
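A minimal sketch, assuming the gateway's private frontend IP is `10.0.0.10`:
```azurecli-interactive
az webapp config access-restriction add --resource-group myRG --name myWebApp --rule-name AppGwPrivateIp --priority 300 --ip-address 10.0.0.10/32
```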
## Considerations for External ASE
-External App Service Environment has a public facing load balancer like multi-tenant App Service. Service endpoints don't work for App Service Environment, and that's why you'll have to use ip-based access restrictions using the public IP of the Application Gateway instance. To create an External App Service Environment using the Azure portal, you can follow this [Quickstart](../environment/create-external-ase.md)
+External App Service Environment has a public-facing load balancer like multitenant App Service. Service endpoints don't work for App Service Environment, so you have to use IP-based access restrictions with the public IP of the Application Gateway instance. To create an External App Service Environment by using the Azure portal, follow this [quickstart](../environment/create-external-ase.md).
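As a sketch, assuming a hypothetical public IP resource name for the gateway frontend:
```azurecli-interactive
# Look up the gateway's public frontend IP, then restrict the app to it
APPGW_IP=$(az network public-ip show --resource-group myRG --name myAppGwPublicIp --query ipAddress -o tsv)
az webapp config access-restriction add --resource-group myRG --name myWebApp --rule-name AppGwPublicIp --priority 300 --ip-address $APPGW_IP/32
```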
[template-app-gateway-app-service-complete]: https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.web/web-app-with-app-gateway-v2/ "Azure Resource Manager template for complete scenario"
If you want to use the same access restrictions as the main site, you can inheri
```azurecli-interactive
az webapp config access-restriction set --resource-group myRG --name myWebApp --use-same-restrictions-for-scm-site
```
-If you want to set individual access restrictions for the scm site, you can add access restrictions using the --scm-site flag like shown below.
+If you want to set individual access restrictions for the SCM site, you can add access restrictions by using the `--scm-site` flag, as shown here.
```azurecli-interactive
az webapp config access-restriction add --resource-group myRG --name myWebApp --scm-site --rule-name KudoAccess --priority 200 --ip-address 208.130.0.0/16
```
+## Considerations when using default domain
+Configuring Application Gateway to override the host name and use the default domain of App Service (typically `azurewebsites.net`) is the easiest way to configure the integration, and it doesn't require configuring a custom domain and certificate in App Service. [This article](/azure/architecture/best-practices/host-name-preservation) discusses the general considerations when overriding the original host name. In App Service, there are two scenarios where you need to pay attention to this configuration.
+
+### Authentication
+When you're using [the authentication feature](../overview-authentication-authorization.md) in App Service (also known as Easy Auth), your app typically redirects to the sign-in page. Because App Service doesn't know the original host name of the request, the redirect is done on the default domain name and usually results in an error. To work around the default redirect, you can configure authentication to inspect a forwarded header and adapt the redirect domain to the original domain. Application Gateway uses a header called `X-Original-Host`.
+By using [file-based configuration](../configure-authentication-file-based.md) for authentication, you can configure App Service to adapt to the original host name. Add this configuration to your configuration file:
+
+```json
+{
+ ...
+ "httpSettings": {
+ "forwardProxy": {
+ "convention": "Custom",
+ "customHostHeaderName": "X-Original-Host"
+ }
+ }
+ ...
+}
+```
+
+### ARR affinity
+In multi-instance deployments, [ARR affinity](../configure-common.md?tabs=portal#configure-general-settings) ensures that client requests are routed to the same instance for the life of the session. ARR affinity doesn't work with host name overrides. For session affinity to work, you have to configure an identical custom domain and certificate in App Service and in Application Gateway and not override the host name.
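If your app doesn't depend on session affinity, a simpler alternative (sketch) is to turn ARR affinity off:
```azurecli-interactive
az webapp update --resource-group myRG --name myWebApp --client-affinity-enabled false
```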
## Next steps
For more information on the App Service Environment, see [App Service Environment documentation](../environment/index.yml). To further secure your web app, see the [Azure Web Application Firewall documentation](../../web-application-firewall/ag/ag-overview.md) for information about Web Application Firewall on Application Gateway.
+See also this tutorial on [deploying a secure, resilient site with a custom domain](https://azure.github.io/AppService/2021/03/26/Secure-resilient-site-with-custom-domain) on App Service, using either Azure Front Door or Application Gateway.
app-service Private Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/networking/private-endpoint.md
description: Connect privately to an App Service apps using Azure private endpoi
ms.assetid: 2dceac28-1ba6-4904-a15d-9e91d5ee162c Previously updated : 02/09/2023 Last updated : 09/29/2023
For pricing details, see [Azure Private Link pricing](https://azure.microsoft.co
* FTP access is provided through the inbound public IP address. Private endpoint doesn't support FTP access to the app.
* IP-Based SSL isn't supported with private endpoints.
* Apps that you configure with private endpoints cannot use [service endpoint-based access restriction rules](../overview-access-restrictions.md#access-restriction-rules-based-on-service-endpoints).
+* Private endpoint naming must follow the rules defined for resources of type `Microsoft.Network/privateEndpoints`. For the naming rules, see [Microsoft.Network naming rules](../../azure-resource-manager/management/resource-name-rules.md#microsoftnetwork).
We're regularly improving the Azure Private Link feature and private endpoints. Check [this article](../../private-link/private-endpoint-overview.md#limitations) for up-to-date information about limitations.
app-service Troubleshoot Diagnostic Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/troubleshoot-diagnostic-logs.md
In your application code, you use the usual logging facilities to send log messa
```
By default, ASP.NET Core uses the [Microsoft.Extensions.Logging.AzureAppServices](https://www.nuget.org/packages/Microsoft.Extensions.Logging.AzureAppServices) logging provider. For more information, see [ASP.NET Core logging in Azure](/aspnet/core/fundamentals/logging/). For information about WebJobs SDK logging, see [Get started with the Azure WebJobs SDK](./webjobs-sdk-get-started.md#enable-console-logging)
-- Python applications can use the [OpenCensus package](../azure-monitor/app/opencensus-python.md) to send logs to the application diagnostics log.
+- Python applications can use the [OpenCensus package](/previous-versions/azure/azure-monitor/app/opencensus-python) to send logs to the application diagnostics log.
## Stream logs
application-gateway Quickstart Deploy Application Gateway For Containers Alb Controller https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/for-containers/quickstart-deploy-application-gateway-for-containers-alb-controller.md
You need to complete the following tasks prior to deploying Application Gateway
helm upgrade alb-controller oci://mcr.microsoft.com/application-lb/charts/alb-controller \
    --namespace <helm-resource-namespace> \
    --version 0.5.024542 \
+ --set albController.namespace=<alb-controller-namespace> \
    --set albController.podIdentity.clientID=$(az identity show -g $RESOURCE_GROUP -n azure-alb-identity --query clientId -o tsv)
```
application-gateway Quick Create Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/quick-create-cli.md
for i in `seq 1 2`; do
    --resource-group myResourceGroupAG \
    --name myVM$i \
    --nics myNic$i \
- --image UbuntuLTS \
+ --image Ubuntu2204 \
    --admin-username azureuser \
    --generate-ssh-keys \
    --custom-data cloud-init.txt
application-gateway Redirect Http To Https Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/redirect-http-to-https-cli.md
In this example, you create a Virtual Machine Scale Set named *myvmss* that prov
az vmss create \
    --name myvmss \
    --resource-group myResourceGroupAG \
- --image UbuntuLTS \
+ --image Ubuntu2204 \
    --admin-username azureuser \
    --admin-password Azure123456! \
    --instance-count 2 \
application-gateway Redirect Internal Site Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/redirect-internal-site-cli.md
In this example, you create a virtual machine scale set that supports the backen
az vmss create \
    --name myvmss \
    --resource-group myResourceGroupAG \
- --image UbuntuLTS \
+ --image Ubuntu2204 \
    --admin-username azureuser \
    --admin-password Azure123456! \
    --instance-count 2 \
application-gateway Tutorial Manage Web Traffic Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/tutorial-manage-web-traffic-cli.md
In this example, you create a Virtual Machine Scale Set that provides servers fo
az vmss create \
    --name myvmss \
    --resource-group myResourceGroupAG \
- --image UbuntuLTS \
+ --image Ubuntu2204 \
    --admin-username azureuser \
    --admin-password Azure123456! \
    --instance-count 2 \
application-gateway Tutorial Multiple Sites Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/tutorial-multiple-sites-cli.md
for i in `seq 1 2`; do
az vmss create \
    --name myvmss$i \
    --resource-group myResourceGroupAG \
- --image UbuntuLTS \
+ --image Ubuntu2204 \
    --admin-username azureuser \
    --admin-password Azure123456! \
    --instance-count 2 \
application-gateway Tutorial Ssl Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/tutorial-ssl-cli.md
In this example, you create a Virtual Machine Scale Set that provides servers fo
az vmss create \
    --name myvmss \
    --resource-group myResourceGroupAG \
- --image UbuntuLTS \
+ --image Ubuntu2204 \
    --admin-username azureuser \
    --admin-password Azure123456! \
    --instance-count 2 \
application-gateway Tutorial Url Redirect Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/tutorial-url-redirect-cli.md
for i in `seq 1 3`; do
az vmss create \
    --name myvmss$i \
    --resource-group myResourceGroupAG \
- --image UbuntuLTS \
+ --image Ubuntu2204 \
    --admin-username <azure-user> \
    --admin-password <password> \
    --instance-count 2 \
application-gateway Tutorial Url Route Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/tutorial-url-route-cli.md
for i in `seq 1 3`; do
az vmss create \
    --name myvmss$i \
    --resource-group myResourceGroupAG \
- --image UbuntuLTS \
+ --image Ubuntu2204 \
    --admin-username azureuser \
    --admin-password Azure123456! \
    --instance-count 2 \
azure-arc Extensions Release https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/extensions-release.md
Title: "Available extensions for Azure Arc-enabled Kubernetes clusters" Previously updated : 09/26/2023 Last updated : 09/29/2023 description: "See which extensions are currently available for Azure Arc-enabled Kubernetes clusters and view release notes."
The currently supported versions of the `microsoft.flux` extension are described
### 1.7.7 (September 2023)
-> [!NOTE]
-> We have started to roll out this release across regions. We'll remove this note once version 1.7.6 is available to all supported regions.
- Flux version: [Release v2.0.1](https://github.com/fluxcd/flux2/releases/tag/v2.0.1)
- source-controller: v1.0.1
azure-arc Network Requirements https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/network-requirements.md
Title: Azure Arc-enabled Kubernetes network requirements description: Learn about the networking requirements to connect Kubernetes clusters to Azure Arc. Previously updated : 08/15/2023 Last updated : 09/28/2023
azure-functions Functions Bindings Service Bus Output https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-service-bus-output.md
This article supports both programming models.
# [Isolated worker model](#tab/isolated-process)
-The following example shows a [C# function](dotnet-isolated-process-guide.md) that receives a Service Bus queue message, logs the message, and sends a message to different Service Bus queue:
+This code defines and initializes the `ILogger`:
+This example shows a [C# function](dotnet-isolated-process-guide.md) that receives a message and writes it to a second queue:
+ # [In-process model](#tab/in-process)
azure-functions Functions Bindings Service Bus Trigger https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-service-bus-trigger.md
This article supports both programming models.
# [Isolated worker model](#tab/isolated-process)
-The following example shows a [C# function](dotnet-isolated-process-guide.md) that receives a Service Bus queue message, logs the message, and sends a message to different Service Bus queue:
+This code defines and initializes the `ILogger`:
+This example shows a [C# function](dotnet-isolated-process-guide.md) that receives a single Service Bus queue message and writes it to the logs:
++
+This example shows a [C# function](dotnet-isolated-process-guide.md) that receives multiple Service Bus queue messages in a single batch and writes each to the logs:
+ # [In-process model](#tab/in-process)
azure-functions Storage Considerations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/storage-considerations.md
Storage accounts created as part of the function app create flow in the Azure po
+ When creating your function app in the portal, you're only allowed to choose an existing storage account in the same region as the function app you're creating. This is a performance optimization and not a strict limitation. To learn more, see [Storage account location](#storage-account-location).
++ When creating your function app on a plan with [availability zone support](../reliability/reliability-functions.md#availability-zone-support) enabled, only [zone-redundant storage accounts](../storage/common/storage-redundancy.md#zone-redundant-storage) are supported.
## Storage account guidance
Every function app requires a storage account to operate. When that account is deleted, your function app won't run. To troubleshoot storage-related issues, see [How to troubleshoot storage-related issues](functions-recover-storage-account.md). The following other considerations apply to the Storage account used by function apps.
azure-maps Migrate From Bing Maps Web App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/migrate-from-bing-maps-web-app.md
Web apps that use Bing Maps often use the Bing Maps V8 JavaScript SDK. The Azure
> * Show traffic data
> * Add a ground overlay
-If migrating an existing web application, check to see if it's using an open-source map control library such as Cesium, Leaflet, and OpenLayers. If it's and you would prefer to continue to use that library, you can connect it to the Azure Maps tile services ([road tiles] \| [satellite tiles]). The following links provide details on how to use Azure Maps in commonly used open-source map control libraries.
+If migrating an existing web application, check to see if it's using an open-source map control library such as Cesium, Leaflet, or OpenLayers. If it is, and you'd prefer to continue using that library, you can connect it to the Azure Maps [Render] services ([road tiles] | [satellite tiles]). The following links provide details on how to use Azure Maps in commonly used open-source map control libraries.
* [Cesium] - A 3D map control for the web. <!--[Cesium code samples] \|--> [Cesium plugin]
* [Leaflet] – Lightweight 2D map control for the web. [Leaflet code samples] \| [Leaflet plugin]
Azure Maps more [open-source modules for the web SDK] that extend its capabiliti
The following are some of the key differences between the Bing Maps and Azure Maps Web SDKs to be aware of:
-* In addition to providing a hosted endpoint for accessing the Azure Maps Web SDK, an npm package is available for embedding the Web SDK into apps if preferred. For more information, see [Use the Azure Maps map control] in the Web SDK documentation. This package also includes TypeScript definitions.
+* In addition to providing a hosted endpoint for accessing the Azure Maps Web SDK, an npm package is available for embedding the Web SDK into apps if preferred. For more information, see [Use the Azure Maps map control]. This package also includes TypeScript definitions.
* Bing Maps provides two hosted branches of its SDK: Release and Experimental. The Experimental branch may receive multiple updates a day when new development is taking place. Azure Maps only hosts a release branch; however, experimental features are created as custom modules in the open-source Azure Maps code samples project. Bing Maps used to have a frozen branch as well that was updated less frequently, thus reducing the risk of breaking changes due to a release. In Azure Maps, you can use the npm module and point to any previous minor version release.
> [!TIP]
Loading a map in both SDKs follows the same set of steps:
**Key differences**
-* Bing maps require an account key specified in the script reference of the API or as a map option. Authentication credentials for Azure Maps are specified as options of the map class as either [Shared Key authentication] or [Azure Active Directory].
+* Bing Maps requires an account key specified in the script reference of the API or as a map option. Authentication credentials for Azure Maps are specified as options of the map class as either [Shared Key authentication] or [Azure AD].
* Bing Maps takes in a callback function in the script reference of the API that is used to call an initialization function to load the map. With Azure Maps, the onload event of the page should be used. * When using an ID to reference the `div` element that the map is rendered in, Bing Maps uses an HTML selector (`#myMap`), whereas Azure Maps only uses the ID value (`myMap`). * Coordinates in Azure Maps are defined as Position objects that can be specified as a simple number array in the format `[longitude, latitude]`.
Microsoft.Maps.loadModule('Microsoft.Maps.Traffic', function () {
**After: Azure Maps**
-Azure Maps provides several different options for displaying traffic. Traffic incidents, such as road closures and accidents can be displayed as icons on the map. Traffic flow, color coded roads, can be overlaid on the map and the colors can be modified to be based relative to the posted speed limit, relative to the normal expected delay, or absolute delay. Incident data in Azure Maps is updated every minute and flow data every 2 minutes.
+Azure Maps provides several different options for displaying traffic. Traffic incidents, such as road closures and accidents, can be displayed as icons on the map. Traffic flow (color-coded roads) can be overlaid on the map, and the colors can be modified relative to the posted speed limit, relative to the normal expected delay, or absolute delay. Incident data in Azure Maps is updated every minute and flow data every 2 minutes.
```javascript
map.setTraffic({
Learn more about migrating from Bing Maps to Azure Maps.
[atlas.io.read function]: /javascript/api/azure-maps-spatial-io/atlas.io#read-stringarraybufferblob--spatialdatareadoptions-
[atlas.layer.ImageLayer.getCoordinatesFromEdges]: /javascript/api/azure-maps-control/atlas.layer.imagelayer#getcoordinatesfromedges-number--number--number--number--number-
[atlas.Shape]: /javascript/api/azure-maps-control/atlas.shape
-[Azure Active Directory]: azure-maps-authentication.md#azure-ad-authentication
+[Azure AD]: azure-maps-authentication.md#azure-ad-authentication
[Azure Maps account]: quick-demo-map-app.md#create-an-azure-maps-account
[Azure Maps Glossary]: glossary.md
[Azure Maps React Component]: https://github.com/WiredSolutions/react-azure-maps
Learn more about migrating from Bing Maps to Azure Maps.
[Popup with Media Content]: https://samples.azuremaps.com/?sample=popup-with-media-content
[Popups on Shapes]: https://samples.azuremaps.com/?sample=popups-on-shapes
[Pushpin clustering]: #pushpin-clustering
+[Render]: /rest/api/maps/render-v2
[Reusing Popup with Multiple Pins]: https://samples.azuremaps.com/?sample=reusing-popup-with-multiple-pins
[road tiles]: /rest/api/maps/render-v2/get-map-tile
-[satellite tiles]: /rest/api/maps/render/getmapimagerytile
+[satellite tiles]: /rest/api/maps/render-v2/get-map-static-image
[Setting the map view]: #setting-the-map-view
[Shared Key authentication]: azure-maps-authentication.md#shared-key-authentication
[Show traffic data]: #show-traffic-data
azure-maps Migrate From Bing Maps Web Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/migrate-from-bing-maps-web-services.md
The following table provides the Azure Maps service APIs that provide similar fu
| Bing Maps service API | Azure Maps service API |
|--|--|
| Autosuggest | [Search] |
-| Directions (including truck) | [Route directions] |
-| Distance Matrix | [Route Matrix] |
-| Imagery ΓÇô Static Map | [Render] |
-| Isochrones | [Route Range] |
-| Local Insights | [Search] + [Route Range] |
+| Directions (including truck) | [Get Route Directions] |
+| Distance Matrix | [Post Route Matrix] |
+| Imagery – Static Map | [Get Map Static Image] |
+| Isochrones | [Get Route Range] |
+| Local Insights | [Search] + [Get Route Range] |
| Local Search | [Search] |
| Location Recognition (POIs) | [Search] |
| Locations (forward/reverse geocoding) | [Search] |
-| Snap to Road | [POST Route directions] |
+| Snap to Road | [Post Route Directions] |
| Spatial Data Services (SDS) | [Search] + [Route] + other Azure Services |
-| Time Zone | [Time Zone] |
-| Traffic Incidents | [Traffic Incident Details] |
+| Time Zone | [Timezone] |
+| Traffic Incidents | [Get Traffic Incident Detail] |
The following service APIs aren't currently available in Azure Maps:
Azure Maps also has these REST web
* [Azure Maps Creator] – Create a custom private digital twin of buildings and spaces.
* [Spatial operations] – Offload complex spatial calculations and operations, such as geofencing, to a service.
-* [Map Tiles] ΓÇô Access road and imagery tiles from Azure Maps as raster and vector tiles.
-* [Batch routing] ΓÇô Allows up to 1,000 route requests to be made in a single batch over a period of time. Routes are calculated in parallel on the server for faster processing.
+* [Get Map Tile] – Access road and imagery tiles from Azure Maps as raster and vector tiles.
+* [Post Route Directions Batch] – Allows up to 1,000 route requests to be made in a single batch over a period of time. Routes are calculated in parallel on the server for faster processing.
* [Traffic] Flow – Access real-time traffic flow data as both raster and vector tiles.
* [Geolocation API] – Get the location of an IP address.
* [Weather services] – Gain access to real-time and forecast weather data.
If you don't have an Azure subscription, create a [free account] before you begi
* A [subscription key]
> [!NOTE]
-> For more information on authentication in Azure Maps, see [manage authentication in Azure Maps].
+> For more information on authentication in Azure Maps, see [Manage authentication in Azure Maps].
## Geocoding addresses
Geocoding is the process of converting an address (like `"1 Microsoft way, Redmo
Azure Maps provides several methods for geocoding addresses:
-* [Free-form address geocoding]: Specify a single address string (like `"1 Microsoft way, Redmond, WA"`) and process the request immediately. This service is recommended if you need to geocode individual addresses quickly.
-* [Structured address geocoding]: Specify the parts of a single address, such as the street name, city, country/region, and postal code and process the request immediately. This service is recommended if you need to geocode individual addresses quickly and the data is already parsed into its individual address parts.
-* [Batch address geocoding]: Create a request containing up to 10,000 addresses and have them processed over a period of time. All the addresses are geocoded in parallel on the server and when completed the full result set can be downloaded. This service is recommended for geocoding large data sets.
-* [Fuzzy search]: This API combines address geocoding with point of interest search. This API takes in a free-form string that can be an address, place, landmark, point of interest, or point of interest category and process the request immediately. This API is recommended for applications where users can search for addresses or points of interest from the same textbox.
-* [Fuzzy batch search]: Create a request containing up to 10,000 addresses, places, landmarks, or point of interests and have them processed over a period of time. All the data is processed in parallel on the server and when completed the full result set can be downloaded.
+* [Get Search Address]: Free-form address geocoding is used to specify a single address string (like `"1 Microsoft way, Redmond, WA"`) and process the request immediately. This service is recommended if you need to geocode individual addresses quickly (see the example request after this list).
+* [Get Search Address Structured]: Structured address geocoding is used to specify the parts of a single address, such as the street name, city, country/region, and postal code, and process the request immediately. This service is recommended if you need to geocode individual addresses quickly and the data is already parsed into its individual address parts.
+* [Post Search Address Batch]: Use batch address geocoding to create a request containing up to 10,000 addresses and have them processed over a period of time. All the addresses are geocoded in parallel on the server, and when completed the full result set can be downloaded. This service is recommended for geocoding large data sets.
+* [Get Search Fuzzy]: The fuzzy search API combines address geocoding with point of interest search. This API takes in a free-form string that can be an address, place, landmark, point of interest, or point of interest category and processes the request immediately. This API is recommended for applications where users can search for addresses or points of interest from the same textbox.
+* [Post Search Fuzzy Batch]: Use the fuzzy batch search to create a request containing up to 10,000 addresses, places, landmarks, or points of interest and have them processed over a period of time. All the data is processed in parallel on the server, and when completed the full result set can be downloaded.
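For example, a free-form [Get Search Address] request (a sketch with a placeholder subscription key) looks like this:
```http
GET https://atlas.microsoft.com/search/address/json?api-version=1.0&query=1%20Microsoft%20Way%2C%20Redmond%2C%20WA&subscription-key={Your-Azure-Maps-Subscription-key}
```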
The following tables cross-reference the Bing Maps API parameters with the comparable API parameters in Azure Maps for structured and free-form address geocoding.
Reverse geocoding is the process of converting geographic coordinates (like long
Azure Maps provides several reverse geocoding methods:
-* [Address reverse geocoder]: Specify a single geographic coordinate to get its approximate address and process the request immediately.
-* [Cross street reverse geocoder]: Specify a single geographic coordinate to get nearby cross street information (for example, 1st & main) and process the request immediately.
-* [Batch address reverse geocoder]: Create a request containing up to 10,000 coordinates and have them processed over a period of time. All the data is processed in parallel on the server and when completed the full result set can be downloaded.
+* [Get Search Address Reverse]: Specify a single geographic coordinate to get its approximate address and process the request immediately.
+* [Get Search Address Reverse Cross Street]: Specify a single geographic coordinate to get nearby cross street information (for example, 1st & main) and process the request immediately.
+* [Post Search Address Reverse Batch]: Create a request containing up to 10,000 coordinates and have them processed over a period of time. All the data is processed in parallel on the server and when completed the full result set can be downloaded.
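For example, a [Get Search Address Reverse] request takes a `latitude,longitude` query (a sketch with placeholder coordinates and key):
```http
GET https://atlas.microsoft.com/search/address/reverse/json?api-version=1.0&query=47.59120,-122.33456&subscription-key={Your-Azure-Maps-Subscription-key}
```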
The following table cross-references the Bing Maps API parameters with the comparable API parameters in Azure Maps.
The following table cross references the Bing Maps entity type values to the equ
Several of the Azure Maps search APIs support a predictive mode that can be used for autosuggest scenarios. The Azure Maps [fuzzy search] API is the most like the Bing Maps Autosuggest API. The following APIs also support predictive mode; add `&typeahead=true` to the query:
-* [Free-form address geocoding]: Specify a single address string (like `"1 Microsoft way, Redmond, WA"`) and process the request immediately. This service is recommended if you need to geocode individual addresses quickly.
-* [Fuzzy search]: This API combines address geocoding with point of interest search. This API takes in a free-form string that can be an address, place, landmark, point of interest, or point of interest category and process the request immediately. This API is recommended for applications where users can search for addresses or points of interest from the same textbox.
-* [POI search]: Search for points of interests by name. For example, `"starbucks"`.
-* [POI category search]: Search for points of interests by category. For example, "restaurant".
+* [Get Search Address]: A free-form address geocoding API used to specify a single address string (like `"1 Microsoft way, Redmond, WA"`) and process the request immediately. This service is recommended if you need to geocode individual addresses quickly.
+* [Get Search Fuzzy]: The fuzzy search API combines address geocoding with point of interest search. This API takes in a free-form string that can be an address, place, landmark, point of interest, or point of interest category and processes the request immediately. This API is recommended for applications where users can search for addresses or points of interest from the same textbox.
+* [Get Search POI]: The point of interest (POI) search is used to search for points of interest by name. For example, `"starbucks"`.
+* [Get Search POI Category]: The point of interest (POI) category search is used to search for points of interest by category. For example, "restaurant".
## Calculate routes and directions
Azure Maps can be used to calculate routes and directions. Azure Maps has many o
The Azure Maps routing service provides the following APIs for calculating routes:
-* [Calculate route]: Calculate a route and have the request processed immediately. This API supports both `GET` and `POST` requests. `POST` requests are recommended when specifying a large number of waypoints or when using lots of the route options to ensure that the URL request doesnΓÇÖt become too long and cause issues.
-* [Batch route]: Create a request containing up to 1,000 route request and have them processed over a period of time. All the data is processed in parallel on the server and when completed the full result set can be downloaded.
+* [Get Route Directions]: Calculate a route and have the request processed immediately. This API supports both `GET` and `POST` requests. `POST` requests are recommended when specifying a large number of waypoints or when using lots of the route options to ensure that the URL request doesn't become too long and cause issues.
+* [Post Route Directions Batch]: Create a request containing up to 1,000 route requests and have them processed over a period of time. All the data is processed in parallel on the server, and when completed the full result set can be downloaded.
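For example, a simple [Get Route Directions] request between two points (a sketch with placeholder coordinates and key) looks like this:
```http
GET https://atlas.microsoft.com/route/directions/json?api-version=1.0&query=47.64452,-122.13054:47.61097,-122.20050&subscription-key={Your-Azure-Maps-Subscription-key}
```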
The following table cross-references the Bing Maps API parameters with the comparable API parameters in Azure Maps.
There are several ways to snap coordinates to roads in Azure Maps.
**Using the route direction API to snap coordinates**
-Azure Maps can snap coordinates to roads by using the [route directions] API. This service can be used to reconstruct a logical route between a set of coordinates and is comparable to the Bing Maps Snap to Road API.
+Azure Maps can snap coordinates to roads by using the [Get Route Directions] API. This service can be used to reconstruct a logical route between a set of coordinates and is comparable to the Bing Maps Snap to Road API.
There are two different ways to use the route directions API to snap coordinates to roads.
The Azure Maps vector tiles contain the raw road geometry data that can be used
## Retrieve a map image (Static Map)
-Azure Maps provides an API for rendering the static map images with data overlaid. The Azure Maps [Map image render] API is comparable to the static map API in Bing Maps.
+Azure Maps provides an API for rendering the static map images with data overlaid. The Azure Maps [Get Map Static Image] API is comparable to the static map API in Bing Maps.
> [!NOTE]
> Azure Maps requires the center, all pushpins and path locations to be coordinates in `longitude,latitude` format whereas Bing Maps uses the `latitude,longitude` format. Addresses will need to be geocoded first.
For more information, see [Render custom data on a raster map].
In addition to being able to generate a static map image, the Azure Maps render service also enables direct access to map tiles in raster (PNG) and vector format:
-* [Map tiles] ΓÇô Retrieve raster (PNG) and vector tiles for the base maps (roads, boundaries, background).
-* [Map imagery tile] ΓÇô Retrieve aerial and satellite imagery tiles.
+* [Get Map Tile] – Retrieve raster (PNG) and vector tiles for the base maps (roads, boundaries, background), as well as aerial and satellite imagery tiles.
### Pushpin URL parameter format comparison
For example, in Azure Maps, a blue line with 50% opacity and a thickness of four
Azure Maps provides an API for calculating the travel times and distances between a set of locations as a distance matrix. The Azure Maps distance matrix API is comparable to the distance matrix API in Bing Maps:
-* [Route matrix]: Asynchronously calculates travel times and distances for a set of origins and destinations. Up to 700 cells per request is supported (the number of origins multiplied by the number of destinations). With that constraint in mind, examples of possible matrix dimensions are: `700x1`, `50x10`, `10x10`, `28x25`, `10x70`.
+* [Post Route Matrix]: Asynchronously calculates travel times and distances for a set of origins and destinations. Up to 700 cells per request are supported (the number of origins multiplied by the number of destinations). With that constraint in mind, examples of possible matrix dimensions are: `700x1`, `50x10`, `10x10`, `28x25`, `10x70`.
> [!NOTE]
> A request to the distance matrix API can only be made using a `POST` request with the origin and destination information in the body of the request. Additionally, Azure Maps requires all origins and destinations to be coordinates. Addresses will need to be geocoded first.
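A minimal [Post Route Matrix] request (a sketch with placeholder coordinates, in `[longitude, latitude]` order, and a placeholder key) looks like this:
```http
POST https://atlas.microsoft.com/route/matrix/json?api-version=1.0&subscription-key={Your-Azure-Maps-Subscription-key}

{
    "origins": {
        "type": "MultiPoint",
        "coordinates": [[4.85106, 52.36006], [4.85056, 52.36187]]
    },
    "destinations": {
        "type": "MultiPoint",
        "coordinates": [[4.85003, 52.36241]]
    }
}
```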
The following table cross-references the Bing Maps API parameters with the compa
Point of interest data can be searched in Bing Maps by using the following APIs:
-* **Local search**: Searches for points of interest that are nearby (radial search), by name, or by entity type (category). The Azure Maps [POI search] and [POI category search] APIs are most like this API.
+* **Local search**: Searches for points of interest that are nearby (radial search), by name, or by entity type (category). The Azure Maps [Get Search POI] and [Get Search POI Category] APIs are most like this API.
* **Location recognition**: Searches for points of interest that are within a certain distance of a location. The Azure Maps [nearby search] API is most like this API.
* **Local insights**: Searches for points of interest that are within a specified maximum driving time or distance from a specific coordinate. This is achievable with Azure Maps by first calculating an isochrone and then passing it into the [Search within geometry] API.
Azure Maps provides several search APIs for points of interest:
-* [POI search]: Search for points of interests by name. For example, `"starbucks"`.
-* [POI category search]: Search for points of interests by category. For example, "restaurant".
-* [Search within geometry]: Searches for points of interests that are within a certain distance of a location.
-* [Fuzzy search]: This API combines address geocoding with point of interest search. This API takes in a free-form string that can be an address, place, landmark, point of interest, or point of interest category and process the request immediately. This API is recommended for applications where users can search for addresses or points of interest from the same textbox.
-* [Search within geometry]: Search for points of interests that are within a specified geometry (polygon).
-* [Search along route]: Search for points of interests that are along a specified route path.
-* [Fuzzy batch search]: Create a request containing up to 10,000 addresses, places, landmarks, or point of interests and have them processed over a period of time. All the data is processed in parallel on the server and when completed the full result set can be downloaded.
+* [Get Search POI]: The point of interest (POI) search is used to search for points of interest by name. For example, `"starbucks"`.
+* [Get Search POI Category]: The point of interest (POI) category search is used to search for points of interest by category. For example, "restaurant".
+* [Post Search Inside Geometry]: Searches for points of interest that are within a certain distance of a location or within a specified geometry (polygon).
+* [Get Search Fuzzy]: The fuzzy search API combines address geocoding with point of interest search. This API takes in a free-form string that can be an address, place, landmark, point of interest, or point of interest category and processes the request immediately. This API is recommended for applications where users can search for addresses or points of interest from the same textbox.
+* [Post Search Along Route]: Search for points of interest that are along a specified route path.
+* [Post Search Fuzzy Batch]: Use the fuzzy batch search to create a request containing up to 10,000 addresses, places, landmarks, or points of interest and have them processed over a period of time. All the data is processed in parallel on the server, and when completed the full result set can be downloaded.
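For instance, a [Get Search POI] request for `"starbucks"` near a coordinate (a sketch with a placeholder key):
```http
GET https://atlas.microsoft.com/search/poi/json?api-version=1.0&query=starbucks&lat=47.6062&lon=-122.3321&subscription-key={Your-Azure-Maps-Subscription-key}
```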
For more information on searching in Azure Maps, see [Best practices for Azure Maps Search service].
Bing Maps provides traffic flow and incident data in its interactive map control
Traffic data is also integrated into the Azure Maps interactive map controls. Azure Maps also provides the following traffic service APIs:
-* [Traffic flow segments]: Provides information about the speeds and travel times of the road fragment closest to the given coordinates.
-* [Traffic flow tiles]: Provides raster and vector tiles containing traffic flow data. These
+* [Get Traffic Flow Segment]: Provides information about the speeds and travel times of the road fragment closest to the given coordinates.
+* [Get Traffic Flow Tile]: Provides raster and vector tiles containing traffic flow data. These
can be used with the Azure Maps controls or in third-party map controls such as Leaflet. The vector tiles can also be used for advanced data analysis.
-* [Traffic incident details]: Provides traffic incident details that are within a bounding box, zoom level, and traffic model.
-* [Traffic incident tiles]: Provides raster and vector tiles containing traffic incident data.
-* [Traffic incident viewport]: Retrieves the legal and technical information for the viewport described in the request, such as the traffic model ID.
+* [Get Traffic Incident Detail]: Provides traffic incident details that are within a bounding box, zoom level, and traffic model.
+* [Get Traffic Incident Tile]: Provides raster and vector tiles containing traffic incident data.
+* [Get Traffic Incident Viewport]: Retrieves the legal and technical information for the viewport described in the request, such as the traffic model ID.
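For example, a [Get Traffic Flow Segment] request (a sketch with placeholder coordinates and key) looks like this:
```http
GET https://atlas.microsoft.com/traffic/flow/segment/json?api-version=1.0&style=absolute&zoom=10&query=52.41072,4.84239&subscription-key={Your-Azure-Maps-Subscription-key}
```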
The following table cross-references the Bing Maps traffic API parameters with the comparable traffic incident details API parameters in Azure Maps.
The following table cross-references the Bing Maps traffic API parameters with t
Azure Maps provides an API for retrieving the time zone a coordinate is in. The Azure Maps time zone API is comparable to the time zone API in Bing Maps.
-* [Time zone by coordinate]: Specify a coordinate and get the details for the time zone it falls in.
+* [Get Timezone By Coordinates]: Specify a coordinate and get the details for the time zone it falls in.
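For example (a sketch with placeholder coordinates and key):
```http
GET https://atlas.microsoft.com/timezone/byCoordinates/json?api-version=1.0&query=47.6062,-122.3321&subscription-key={Your-Azure-Maps-Subscription-key}
```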
The following table cross-references the Bing Maps API parameters with the comparable API parameters in Azure Maps.
The following table cross-references the Bing Maps API parameters with the compa
In addition, the Azure Maps platform provides many other time zone APIs to help with conversions between time zone names and IDs:
-* [Time zone by ID]: Returns current, historical, and future time zone information for the specified IANA time zone ID.
-* [Time zone Enum IANA]: Returns a full list of IANA time zone IDs. Updates to the IANA service are reflected in the system within one day.
-* [Time zone Enum Windows]: Returns a full list of Windows Time Zone IDs.
-* [Time zone IANA version]: Returns the current IANA version number used by Azure Maps.
-* [Time zone Windows to IANA]: Returns a corresponding IANA ID, given a valid Windows Time Zone ID. Multiple IANA IDs may be returned for a single Windows ID.
+* [Get Timezone By ID]: Returns current, historical, and future time zone information for the specified IANA time zone ID.
+* [Get Timezone Enum IANA]: Returns a full list of IANA time zone IDs. Updates to the IANA service are reflected in the system within one day.
+* [Get Timezone Enum Windows]: Returns a full list of Windows Time Zone IDs.
+* [Get Timezone IANA Version]: Returns the current IANA version number used by Azure Maps.
+* [Get Timezone Windows To IANA]: Returns a corresponding IANA ID, given a valid Windows Time Zone ID. Multiple IANA IDs may be returned for a single Windows ID.
## Spatial Data Services (SDS)
Another option for geocoding a large number of addresses with Azure Maps is to make
>
> Gen1 pricing tier is now deprecated and will be retired on 9/15/26. Gen2 pricing tier replaces Gen1 (both S0 and S1). If your Azure Maps account has Gen1 pricing tier selected, you can switch to Gen2 pricing tier before it's retired, otherwise it will automatically be updated. For more information on the Gen1 pricing tier retirement, see [Manage the pricing tier of your Azure Maps account].
-* [Free-form address geocoding]: Specify a single address string (like `"1 Microsoft way, Redmond, WA"`) and process the request immediately. This service is recommended if you need to geocode individual addresses quickly.
-* [Structured address geocoding]: Specify the parts of a single address, such as the street name, city, country/region, and postal code and process the request immediately. This service is recommended if you need to geocode individual addresses quickly and the data is already parsed into its individual address parts.
-* [Batch address geocoding]: Create a request containing up to 10,000 addresses and have them processed over a period of time. All the addresses are geocoded in parallel on the server and when completed the full result set can be downloaded. This service is recommended for geocoding large data sets.
-* [Fuzzy search]: This API combines address geocoding with point of interest search. This API takes in a free-form string that can be an address, place, landmark, point of interest, or point of interest category and process the request immediately. This API is recommended for applications where users can search for addresses or points of interest from the same textbox.
-* **[Fuzzy batch search]**: Create a request containing up to 10,000 addresses, places, landmarks, or point of interests and have them processed over a period of time. All the data is processed in parallel on the server and when completed the full result set can be downloaded.
+* [Get Search Address]: Free-form address geocoding is used to specify a single address string (like `"1 Microsoft way, Redmond, WA"`) and process the request immediately. This service is recommended if you need to geocode individual addresses quickly.
+* [Get Search Address Structured]: Structured address geocoding is used to specify the parts of a single address, such as the street name, city, country/region, and postal code and process the request immediately. This service is recommended if you need to geocode individual addresses quickly and the data is already parsed into its individual address parts.
+* [Post Search Address Batch]: Use batch address geocoding to create a request containing up to 10,000 addresses and have them processed over a period of time. All the addresses are geocoded in parallel on the server and when completed the full result set can be downloaded. This service is recommended for geocoding large data sets.
+* [Get Search Fuzzy]: The fuzzy search API combines address geocoding with point of interest search. This API takes in a free-form string that can be an address, place, landmark, point of interest, or point of interest category and processes the request immediately. This API is recommended for applications where users can search for addresses or points of interest from the same textbox.
+* [Post Search Fuzzy Batch]: Use the fuzzy batch search to create a request containing up to 10,000 addresses, places, landmarks, or points of interest and have them processed over a period of time. All the data is processed in parallel on the server, and when completed the full result set can be downloaded.
### Get administrative boundary data
To recap:
1. Pass a query for the boundary you want to receive into one of the following search APIs.
- * [Free-form address geocoding]
- * [Structured address geocoding]
- * [Batch address geocoding]
- * [Fuzzy search]
- * [Fuzzy batch search]
+ * [Get Search Address] (Free-form address geocoding)
+ * [Get Search Address Structured] (Structured address geocoding)
+ * [Post Search Address Batch] (Batch address geocoding)
+ * [Get Search Fuzzy] (Fuzzy search)
+ * [Post Search Fuzzy Batch] (Fuzzy batch search)
-1. If the desired result(s) has a geometry ID(s), pass it into the [Search Polygon API].
+1. If the desired result has a geometry ID, pass it into the [Get Search Polygon] API.
### Host and query spatial business data
No resources to be cleaned up.
Learn more about the Azure Maps REST services.
> [!div class="nextstepaction"]
-> [Best practices for using the search service](how-to-use-best-practices-for-search.md)
+> [Best practices for Azure Maps Search service]
-[Address reverse geocoder]: /rest/api/maps/search/getsearchaddressreverse
[Authentication with Azure Maps]: azure-maps-authentication.md
[Azure Cosmos DB geospatial capabilities overview]: ../cosmos-db/sql-query-geospatial-intro.md
[Azure Maps account]: quick-demo-map-app.md#create-an-azure-maps-account
Learn more about the Azure Maps REST services.
[Azure SQL Spatial ΓÇô Query nearest neighbor]: /sql/relational-databases/spatial/query-spatial-data-for-nearest-neighbor
[Azure SQL Spatial Data Types overview]: /sql/relational-databases/spatial/spatial-data-types-overview
[Basic snap to road logic]: https://samples.azuremaps.com/?sample=basic-snap-to-road-logic
-[Batch address geocoding]: /rest/api/maps/search/postsearchaddressbatchpreview
-[Batch address reverse geocoder]: /rest/api/maps/search/postsearchaddressreversebatchpreview
-[Batch route]: /rest/api/maps/route/postroutedirectionsbatchpreview
-[Batch routing]: /rest/api/maps/route/postroutedirectionsbatchpreview
[Best practices for Azure Maps Route service]: how-to-use-best-practices-for-routing.md
[Best practices for Azure Maps Search service]: how-to-use-best-practices-for-search.md
-[Calculate route]: /rest/api/maps/route/getroutedirections
-[Cross street reverse geocoder]: /rest/api/maps/search/getsearchaddressreversecrossstreet
[free account]: https://azure.microsoft.com/free/
-[Free-form address geocoding]: /rest/api/maps/search/getsearchaddress
-[Fuzzy batch search]: /rest/api/maps/search/postsearchfuzzybatchpreview
-[Fuzzy search]: /rest/api/maps/search/getsearchfuzzy
+[fuzzy search]: /rest/api/maps/search/get-search-fuzzy
[Geolocation API]: /rest/api/maps/geolocation/get-ip-to-location
+[Get Map Static Image]: /rest/api/maps/render-v2/get-map-static-image
+[Get Map Tile]: /rest/api/maps/render-v2/get-map-tile
+[Get Route Directions]: /rest/api/maps/route/get-route-directions
+[Get Route Range]: /rest/api/maps/route/get-route-range
+[Get Search Address Reverse Cross Street]: /rest/api/maps/search/get-search-address-reverse-cross-street
+[Get Search Address Reverse]: /rest/api/maps/search/get-search-address-reverse
+[Get Search Address Structured]: /rest/api/maps/search/get-search-address-structured
+[Get Search Address]: /rest/api/maps/search/get-search-address
+[Get Search Fuzzy]: /rest/api/maps/search/get-search-fuzzy
+[Get Search POI Category]: /rest/api/maps/search/get-search-poi-category
+[Get Search POI]: /rest/api/maps/search/get-search-poi
+[Get Search Polygon]: /rest/api/maps/search/get-search-polygon
+[Get Timezone By Coordinates]: /rest/api/maps/timezone/get-timezone-by-coordinates
+[Get Timezone By ID]: /rest/api/maps/timezone/get-timezone-by-id
+[Get Timezone Enum IANA]: /rest/api/maps/timezone/get-timezone-enum-iana
+[Get Timezone Enum Windows]: /rest/api/maps/timezone/get-timezone-enum-windows
+[Get Timezone IANA Version]: /rest/api/maps/timezone/get-timezone-iana-version
+[Get Timezone Windows To IANA]: /rest/api/maps/timezone/get-timezone-windows-to-iana
+[Get Traffic Flow Segment]: /rest/api/maps/traffic/get-traffic-flow-segment
+[Get Traffic Flow Tile]: /rest/api/maps/traffic/get-traffic-flow-tile
+[Get Traffic Incident Detail]: /rest/api/maps/traffic/get-traffic-incident-detail
+[Get Traffic Incident Tile]: /rest/api/maps/traffic/get-traffic-incident-tile
+[Get Traffic Incident Viewport]: /rest/api/maps/traffic/get-traffic-incident-viewport
[Localization support in Azure Maps]: supported-languages.md
-[manage authentication in Azure Maps]: how-to-manage-authentication.md
+[Manage authentication in Azure Maps]: how-to-manage-authentication.md
[Manage the pricing tier of your Azure Maps account]: how-to-manage-pricing-tier.md
-[Map image render]: /rest/api/maps/render/getmapimagerytile
-[Map imagery tile]: /rest/api/maps/render/getmapimagerytile
-[Map Tiles]: /rest/api/maps/render-v2/get-map-tile
[nearby search]: /rest/api/maps/search/getsearchnearby
[NetTopologySuite]: https://github.com/NetTopologySuite/NetTopologySuite
-[POI category search]: /rest/api/maps/search/get-search-poi-category
-[POI search]: /rest/api/maps/search/get-search-poi
-[POST Route directions]: /rest/api/maps/route/postroutedirections
+[Post Route Directions Batch]: /rest/api/maps/route/post-route-directions-batch
+[Post Route Directions]: /rest/api/maps/route/post-route-directions
+[Post Route Matrix]: /rest/api/maps/route/post-route-matrix
+[Post Search Address Batch]: /rest/api/maps/search/post-search-address-batch
+[Post Search Address Reverse Batch]: /rest/api/maps/search/post-search-address-reverse-batch
+[Post Search Along Route]: /rest/api/maps/search/post-search-along-route
+[Post Search Fuzzy Batch]: /rest/api/maps/search/post-search-fuzzy-batch
+[Post Search Inside Geometry]: /rest/api/maps/search/post-search-inside-geometry
[quadtree tile pyramid math]: zoom-levels-and-tile-grid.md
[Render custom data on a raster map]: how-to-render-custom-data.md
-[Render]: /rest/api/maps/render-v2/get-map-static-image
-[Route directions]: /rest/api/maps/route/getroutedirections
-[Route Matrix]: /rest/api/maps/route/postroutematrixpreview
-[Route Range]: /rest/api/maps/route/getrouterange
[Route]: /rest/api/maps/route
-[Search along route]: /rest/api/maps/search/postsearchalongroute
[Search for a location using Azure Maps Search services]: how-to-search-for-address.md
-[Search Polygon API]: /rest/api/maps/search/getsearchpolygon
-[Search within geometry]: /rest/api/maps/search/postsearchinsidegeometry
+[Search within geometry]: /rest/api/maps/search/post-search-inside-geometry
[Search]: /rest/api/maps/search
[Snap points to logical route path]: https://samples.azuremaps.com/?sample=snap-points-to-logical-route-path
[Spatial operations]: /rest/api/maps/spatial
-[Structured address geocoding]: /rest/api/maps/search/getsearchaddressstructured
[subscription key]: quick-demo-map-app.md#get-the-subscription-key-for-your-account
[Supported map styles]: supported-map-styles.md
-[Time zone by coordinate]: /rest/api/maps/timezone/gettimezonebycoordinates
-[Time zone by ID]: /rest/api/maps/timezone/gettimezonebyid
-[Time zone Enum IANA]: /rest/api/maps/timezone/gettimezoneenumiana
-[Time zone Enum Windows]: /rest/api/maps/timezone/gettimezoneenumwindows
-[Time zone IANA version]: /rest/api/maps/timezone/gettimezoneianaversion
-[Time zone Windows to IANA]: /rest/api/maps/timezone/gettimezonewindowstoiana
-[Time Zone]: /rest/api/maps/timezone
-[Traffic flow segments]: /rest/api/maps/traffic/gettrafficflowsegment
-[Traffic flow tiles]: /rest/api/maps/traffic/gettrafficflowtile
-[Traffic incident details]: /rest/api/maps/traffic/gettrafficincidentdetail
-[Traffic incident tiles]: /rest/api/maps/traffic/gettrafficincidenttile
-[Traffic incident viewport]: /rest/api/maps/traffic/gettrafficincidentviewport
+[Timezone]: /rest/api/maps/timezone
[Traffic]: /rest/api/maps/traffic
[turf js]: https://turfjs.org
[Weather services]: /rest/api/maps/weather
azure-maps Migrate From Google Maps Web App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/migrate-from-google-maps-web-app.md
Title: 'Tutorial - Migrate a web app from Google Maps to Microsoft Azure Maps'
description: Tutorial on how to migrate a web app from Google Maps to Microsoft Azure Maps Previously updated : 12/07/2020 Last updated : 09/28/2023
Also:
> * Best practices to improve performance and user experience.
> * Tips on how to enhance your application by using more advanced features available in Azure Maps.
-If migrating an existing web application, check to see if it's using an open-source map control library. Examples of open-source map control library are: Cesium, Leaflet, and OpenLayers. You can still migrate your application, even if it uses an open-source map control library, and you don't want to use the Azure Maps Web SDK. In such case, connect your application to the Azure Maps tile services ([road tiles]
-\| [satellite tiles]). The following points detail on how to use Azure Maps in some commonly used open-source map control libraries.
+If migrating an existing web application, check to see if it's using an open-source map control library. Examples of open-source map control libraries are Cesium, Leaflet, and OpenLayers. You can still migrate your application even if it uses an open-source map control library and you don't want to use the Azure Maps Web SDK. In such cases, connect your application to the Azure Maps [Render] services ([road tiles] | [satellite tiles]). The following points detail how to use Azure Maps in some commonly used open-source map control libraries.
* Cesium - A 3D map control for the web. [Cesium documentation].
* Leaflet - Lightweight 2D map control for the web. [Leaflet code sample] \| [Leaflet documentation].
If migrating an existing web application, check to see if it's using an open-sou
If developing using a JavaScript framework, one of the following open-source projects may be useful:
-* [ng-azure-maps] - Angular 10 wrapper around Azure maps.
+* [ng-azure-maps] - Angular 10 wrapper around Azure Maps.
* [AzureMapsControl.Components] - An Azure Maps Blazor component.
* [Azure Maps React Component] - A React wrapper for the Azure Maps control.
* [Vue Azure Maps] - An Azure Maps component for Vue applications.
If you don't have an Azure subscription, create a [free account] before you begi
* A [subscription key]

> [!NOTE]
-> For more information on authentication in Azure Maps, see [manage authentication in Azure Maps].
+> For more information on authentication in Azure Maps, see [Manage authentication in Azure Maps].
## Key features support
The following are some key differences between the Google Maps and Azure Maps We
* You first need to create an instance of the Map class in Azure Maps. Wait for the map's `ready` or `load` event to fire before programmatically interacting with the map. This order ensures that all the map resources have been loaded and are ready to be accessed.
* Both platforms use a similar tiling system for the base maps. The tiles in Google Maps are 256 pixels in dimension; however, the tiles in Azure Maps are 512 pixels in dimension. To get the same map view in Azure Maps as in Google Maps, subtract one from the Google Maps zoom level.
* Coordinates in Google Maps are referred to as `latitude,longitude`, while Azure Maps uses `longitude,latitude`. The Azure Maps format is aligned with the standard `[x, y]`, which is followed by most GIS platforms.
-* Shapes in the Azure Maps Web SDK are based on the GeoJSON schema. Helper classes are exposed through the [*atlas.data* namespace]. There's also the [*atlas.Shape*] class. Use this class to wrap GeoJSON objects, to make it easy to update and maintain the data bindable way.
+* Shapes in the Azure Maps Web SDK are based on the GeoJSON schema. Helper classes are exposed through the [atlas.data] namespace. There's also the [atlas.Shape] class. Use this class to wrap GeoJSON objects, making them easy to update and maintain in a data-bindable way.
* Coordinates in Azure Maps are defined as Position objects. A coordinate is specified as a number array in the format `[longitude,latitude]`. Or, it's specified using `new atlas.data.Position(longitude, latitude)`.

> [!TIP]
> The Position class has a static helper method for importing coordinates that are in "latitude, longitude" format. The [atlas.data.Position.fromLatLng] method can often be replaced with the `new google.maps.LatLng` method in Google Maps code.
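As a quick illustration of the coordinate handling described above, here's a minimal sketch. It assumes the Web SDK's `atlas` namespace is loaded; treat the exact `fromLatLng` overloads as an assumption based on the azure-maps-control reference.

```javascript
// All three values represent the same location, in [longitude, latitude] order.
var asArray = [-73.985, 40.747];                                   // plain number array
var asPosition = new atlas.data.Position(-73.985, 40.747);         // Position constructor
var fromLatLng = atlas.data.Position.fromLatLng(40.747, -73.985);  // converts "latitude, longitude" input
```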
Both SDKs have the same steps to load a map:
**Some key differences**
-* Google maps requires an account key to be specified in the script reference of the API. Authentication credentials for Azure Maps are specified as options of the map class. This credential can be a subscription key or Azure Active Directory information.
+* Google Maps requires an account key to be specified in the script reference of the API. Authentication credentials for Azure Maps are specified as options of the map class. This credential can be a subscription key or Azure Active Directory information.
* Google Maps accepts a callback function in the script reference of the API, which is used to call an initialization function to load the map. With Azure Maps, the onload event of the page should be used.
* When referencing the `div` element in which the map renders, the `Map` class in Azure Maps only requires the `id` value while Google Maps requires a `HTMLElement` object.
* Coordinates in Azure Maps are defined as Position objects, which can be specified as a simple number array in the format `[longitude, latitude]`.
Both SDKs have the same steps to load a map:
* Azure Maps doesn't add any navigation controls to the map canvas. So, by default, a map doesn't have zoom buttons and map style buttons. But, there are control options for adding a map style picker, zoom buttons, compass or rotation control, and a pitch control.
* An event handler is added in Azure Maps to monitor the `ready` event of the map instance. This event fires when the map has finished loading the WebGL context and all the needed resources. Add any code you want to run after the map completes loading, to this event handler.
-The basic examples below uses Google Maps to load a map centered over New York at coordinates. The longitude: -73.985, latitude: 40.747, and the map is at zoom level of 12.
+The following examples use Google Maps to load a map centered over New York at coordinates longitude: -73.985, latitude: 40.747, and at a zoom level of 12.
#### Before: Google Maps
Running this code in a browser displays a map that looks like the following imag
For more information on how to set up and use the Azure Maps map control in a web app, see [Use the Azure Maps map control].

> [!NOTE]
-> Unlike Google Maps, Azure Maps does not require an initial center and a zoom level to load the map. If this information is not provided when loading the map, Azure maps will try to determine city of the user. It will center and zoom the map there.
+> Unlike Google Maps, Azure Maps does not require an initial center and a zoom level to load the map. If this information is not provided when loading the map, Azure Maps will try to determine the user's city. It will center and zoom the map there.
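As a rough sketch of the Azure Maps side of this comparison, the following shows a map load with an explicit center and zoom; the element ID and subscription key are placeholders.

```javascript
// Assumes an HTML element <div id="myMap"></div> and the Azure Maps Web SDK.
var map = new atlas.Map('myMap', {
    center: [-73.985, 40.747],  // [longitude, latitude]
    zoom: 11,                   // one level lower than the Google Maps zoom of 12
    authOptions: {
        authType: 'subscriptionKey',
        subscriptionKey: '<Your Azure Maps Key>'
    }
});

// Wait for the ready event before interacting with the map.
map.events.add('ready', function () {
    // Add sources, layers, and controls here.
});
```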
**More resources:**
map.events.add('click', marker, function () {
Google Maps supports loading and dynamically styling GeoJSON data via the `google.maps.Data` class. The functionality of this class aligns more with the data-driven styling of Azure Maps. But, there's a key difference. With Google Maps, you specify a callback function. The business logic for styling each feature is processed individually in the UI thread. But in Azure Maps, layers support specifying data-driven expressions as styling options. These expressions are processed at render time on a separate thread. The Azure Maps approach improves rendering performance. This advantage is noticed when larger data sets need to be rendered quickly.
-The following examples load a GeoJSON feed of all earthquakes over the last seven days from the USGS. Earthquakes data renders as scaled circles on the map. The color and scale of each circle is based on the magnitude of each earthquake, which is stored in the `"mag"` property of each feature in the data set. If the magnitude is greater than or equal to five, the circle is red. If it's greater or equal to three, but less than five, the circle is orange. If it's less than three, the circle is green. The radius of each circle will be the exponential of the magnitude multiplied by 0.1.
+The following examples load a GeoJSON feed of all earthquakes over the last seven days from the USGS. Earthquakes data renders as scaled circles on the map. The color and scale of each circle is based on the magnitude of each earthquake, which is stored in the `"mag"` property of each feature in the data set. If the magnitude is greater than or equal to five, the circle is red. If it's greater than or equal to three, but less than five, the circle is orange. If it's less than three, the circle is green. The radius of each circle is the exponential of the magnitude multiplied by 0.1.
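Before the full before/after listings, here's a minimal sketch of how the magnitude rules above might be expressed as data-driven expressions on an Azure Maps bubble layer. It assumes `datasource` is an `atlas.source.DataSource` already populated with the USGS feed; the expression shapes follow the style-expression spec the Web SDK uses.

```javascript
map.layers.add(new atlas.layer.BubbleLayer(datasource, null, {
    // Radius: exponential of the "mag" property, multiplied by 0.1.
    radius: ['*', 0.1, ['^', ['e'], ['get', 'mag']]],

    // Color by magnitude: green below 3, orange from 3 up to 5, red at 5 and above.
    color: [
        'step', ['get', 'mag'],
        'green',
        3, 'orange',
        5, 'red'
    ]
}));
```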
#### Before: Google Maps
GeoJSON is the base data type in Azure Maps. Import it into a data source using
### Marker clustering
-When visualizing many data points on the map, points may overlap each other. Overlapping makes the map looks cluttered, and the map becomes difficult to read and use. Clustering point data is the process of combining data points that are near each other and representing them on the map as a single clustered data point. As the user zooms into the map, the clusters break apart into their individual data points. Cluster data points to improve user experience and map performance.
+When visualizing many data points on the map, points may overlap each other. Overlapping makes the map look cluttered, and the map becomes difficult to read and use. Clustering point data is the process of combining data points that are near each other and representing them on the map as a single clustered data point. As the user zooms into the map, the clusters break apart into their individual data points. Cluster data points to improve user experience and map performance.
In the following examples, the code loads a GeoJSON feed of earthquake data from the past week and adds it to the map. Clusters are rendered as scaled and colored circles. The scale and color of the circles depend on the number of points they contain.
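As a minimal sketch of the Azure Maps approach, clustering is enabled through options on the data source; the radius and zoom values here are illustrative.

```javascript
var datasource = new atlas.source.DataSource(null, {
    cluster: true,
    clusterRadius: 45,   // pixel radius within which nearby points are combined
    clusterMaxZoom: 15   // stop clustering beyond this zoom level
});
map.sources.add(datasource);

// Public USGS feed of earthquakes from the past week.
datasource.importDataFromUrl('https://earthquake.usgs.gov/earthquakes/feed/v1.0/summary/all_week.geojson');
```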
map.layers.add(new atlas.layer.TileLayer({
### Show traffic data
-Traffic data can be overlaid both Azure and Google maps.
+Traffic data can be overlaid on both Azure and Google Maps.
#### Before: Google Maps
If you select one of the traffic icons in Azure Maps, more information is displa
### Add a ground overlay
-Both Azure and Google maps support overlaying georeferenced images on the map. Georeferenced images move and scale as you pan and zoom the map. In Google Maps, georeferenced images are known as ground overlays while in Azure Maps they're referred to as image layers. They're great for building floor plans, overlaying old maps, or imagery from a drone.
+Both Azure and Google Maps support overlaying georeferenced images on the map. Georeferenced images move and scale as you pan and zoom the map. In Google Maps, georeferenced images are known as ground overlays while in Azure Maps they're referred to as image layers. They're great for building floor plans, overlaying old maps, or imagery from a drone.
#### Before: Google Maps
Use the `atlas.layer.ImageLayer` class to overlay georeferenced images. This cla
### Add KML data to the map
-Both Azure and Google maps can import and render KML, KMZ and GeoRSS data on the map. Azure Maps also supports GPX, GML, spatial CSV files, GeoJSON, Well Known Text (WKT), Web-Mapping Services (WMS), Web-Mapping Tile Services (WMTS), and Web Feature Services (WFS). Azure Maps reads the files locally into memory and in most cases can handle larger KML files.
+Both Azure and Google Maps can import and render KML, KMZ and GeoRSS data on the map. Azure Maps also supports GPX, GML, spatial CSV files, GeoJSON, Well Known Text (WKT), Web-Mapping Services (WMS), Web-Mapping Tile Services (WMTS), and Web Feature Services (WFS). Azure Maps reads the files locally into memory and in most cases can handle larger KML files.
#### Before: Google Maps
Learn more about migrating to Azure Maps:
> [!div class="nextstepaction"]
> [Migrate a web service]
-[*atlas.data* namespace]: /javascript/api/azure-maps-control/atlas.data
-[*atlas.Shape*]: /javascript/api/azure-maps-control/atlas.shape
+[atlas.data]: /javascript/api/azure-maps-control/atlas.data
+[atlas.Shape]: /javascript/api/azure-maps-control/atlas.shape
[`atlas.layer.ImageLayer.getCoordinatesFromEdges`]: /javascript/api/azure-maps-control/atlas.layer.imagelayer#getcoordinatesfromedges-number--number--number--number--number-
[Add a Bubble layer]: map-add-bubble-layer.md
[Add a circle to the map]: map-add-shape.md#add-a-circle-to-the-map
Learn more about migrating to Azure Maps:
[Load a map]: #load-a-map
[Localization support in Azure Maps]: supported-languages.md
[Localizing the map]: #localizing-the-map
-[manage authentication in Azure Maps]: how-to-manage-authentication.md
+[Manage authentication in Azure Maps]: how-to-manage-authentication.md
[Marker clustering]: #marker-clustering
[Migrate a web service]: migrate-from-google-maps-web-services.md
[ng-azure-maps]: https://github.com/arnaudleclerc/ng-azure-maps
Learn more about migrating to Azure Maps:
[Popup options]: /javascript/api/azure-maps-control/atlas.popupoptions
[Popup with Media Content]: https://samples.azuremaps.com/?sample=popup-with-media-content
[Popups on Shapes]: https://samples.azuremaps.com/?sample=popups-on-shapes
+[Render]: /rest/api/maps/render-v2
[Reusing Popup with Multiple Pins]: https://samples.azuremaps.com/?sample=reusing-popup-with-multiple-pins
[road tiles]: /rest/api/maps/render-v2/get-map-tile
-[satellite tiles]: /rest/api/maps/render/getmapimagerytile
+[satellite tiles]: /rest/api/maps/render-v2/get-map-static-image
[Search Autosuggest with JQuery UI]: https://samples.azuremaps.com/?sample=search-autosuggest-and-jquery-ui
[Search for points of interest]: map-search-location.md
[Setting the map view]: #setting-the-map-view
azure-maps Migrate From Google Maps Web Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/migrate-from-google-maps-web-services.md
Title: 'Tutorial - Migrate web services from Google Maps | Microsoft Azure Maps'
description: Tutorial on how to migrate web services from Google Maps to Microsoft Azure Maps Previously updated : 06/23/2021 Last updated : 09/28/2023
The table shows the Azure Maps service APIs, which have a similar functionality
| Google Maps service API | Azure Maps service API |
|--|--|
| Directions | [Route] |
-| Distance Matrix | [Route Matrix] |
+| Distance Matrix | [Post Route Matrix] |
| Geocoding | [Search] |
| Places Search | [Search] |
| Place Autocomplete | [Search] |
| Snap to Road | See [Calculate routes and directions] section. |
| Speed Limits | See [Reverse geocode a coordinate] section. |
| Static Map | [Render] |
-| Time Zone | [Time Zone] |
+| Time Zone | [Timezone] |
The following service APIs aren't currently available in Azure Maps:

* Geolocation - Azure Maps does have a service called Geolocation, but it provides IP address to location information and doesn't currently support cell tower or WiFi triangulation.
* Places details and photos - Phone numbers and website URLs are available in the Azure Maps search API.
* Map URLs
-* Nearest Roads - This is achievable using the Web SDK as demonstrated in the [Basic snap to road logic] sample, but is not currently available as a service.
+* Nearest Roads - Achievable using the Web SDK as demonstrated in the [Basic snap to road logic] sample, but isn't currently available as a service.
* Static street view

Azure Maps has several other REST web services that may be of interest:
If you don't have an Azure subscription, create a [free account] before you begi
* A [subscription key]

> [!NOTE]
-> For more information on authentication in Azure Maps, see [manage authentication in Azure Maps].
+> For more information on authentication in Azure Maps, see [Manage authentication in Azure Maps].
## Geocoding addresses
Geocoding is the process of converting an address into a coordinate. For example
Azure Maps provides several methods for geocoding addresses:
-* **[Free-form address geocoding]**: Specify a single address string and process the request immediately. "1 Microsoft way, Redmond, WA" is an example of a single address string. This API is recommended if you need to geocode individual addresses quickly.
-* **[Structured address geocoding]**: Specify the parts of a single address, such as the street name, city, country/region, and postal code and process the request immediately. This API is recommended if you need to geocode individual addresses quickly and the data is already parsed into its individual address parts.
-* **[Batch address geocoding]**: Create a request containing up to 10,000 addresses and have them processed over a period of time. All the addresses are geocoded in parallel on the server and when completed the full result set can be downloaded. This is recommended for geocoding large data sets.
-* **[Fuzzy search]**: This API combines address geocoding with point of interest search. This API takes in a free-form string. This string can be an address, place, landmark, point of interest, or point of interest category. This API process the request near real time. This API is recommended for applications where users search for addresses or points of interest in the same textbox.
-* **[Fuzzy batch search]**: Create a request containing up to 10,000 addresses, places, landmarks, or point of interests and have them processed over a period of time. All the data is processed in parallel on the server and when completed the full result set can be downloaded.
+* [Get Search Address]: Free-form address geocoding is used to specify a single address string (like `"1 Microsoft way, Redmond, WA"`) and process the request immediately. This service is recommended if you need to geocode individual addresses quickly (see the example request after this list).
+* [Get Search Address Structured]: Specify the parts of a single address, such as the street name, city, country/region, and postal code, and process the request immediately. This API is recommended if you need to geocode individual addresses quickly and the data is already parsed into its individual address parts.
+* [Post Search Address Batch]: Create a request containing up to 10,000 addresses and have them processed over a period of time. All the addresses are geocoded in parallel on the server, and when completed the full result set can be downloaded. This approach is recommended for geocoding large data sets.
+* [Get Search Fuzzy]: The fuzzy search API combines address geocoding with point of interest search. This API takes in a free-form string that can be an address, place, landmark, point of interest, or point of interest category and processes the request immediately. This API is recommended for applications where users can search for addresses or points of interest from the same textbox.
+* [Post Search Fuzzy Batch]: Use the fuzzy batch search to create a request containing up to 10,000 addresses, places, landmarks, or points of interest and have them processed over a period of time. All the data is processed in parallel on the server, and when completed the full result set can be downloaded.
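As a rough sketch, a free-form [Get Search Address] request looks like the following; the subscription key is a placeholder.

```text
GET https://atlas.microsoft.com/search/address/json?api-version=1.0&query=1%20Microsoft%20Way%2C%20Redmond%2C%20WA&subscription-key={Your-Azure-Maps-Subscription-key}
```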
+
The following table cross-references the Google Maps API parameters with the comparable API parameters in Azure Maps.

| Google Maps API parameter | Comparable Azure Maps API parameter |
|--|--|
-| `address` | `query` |
-| `bounds` | `topLeft` and `btmRight` |
-| `components` | `streetNumber`<br/>`streetName`<br/>`crossStreet`<br/>`postalCode`<br/>`municipality` - city / town<br/>`municipalitySubdivision` ΓÇô neighborhood, sub / super city<br/>`countrySubdivision` - state or province<br/>`countrySecondarySubdivision` - county<br/>`countryTertiarySubdivision` - district<br/>`countryCode` - two letter country/region code |
-| `key` | `subscription-key` ΓÇô For more information, see [Authentication with Azure Maps]. |
-| `language` | `language` ΓÇô For more information, see [Localization support in Azure Maps]. |
-| `region` | `countrySet` |
+| `address` | `query` |
+| `bounds` | `topLeft` and `btmRight` |
+| `components` | `streetNumber`<br/>`streetName`<br/>`crossStreet`<br/>`postalCode`<br/>`municipality` - city / town<br/>`municipalitySubdivision` - neighborhood, sub / super city<br/>`countrySubdivision` - state or province<br/>`countrySecondarySubdivision` - county<br/>`countryTertiarySubdivision` - district<br/>`countryCode` - two letter country/region code |
+| `key` | `subscription-key` - For more information, see [Authentication with Azure Maps]. |
+| `language` | `language` - For more information, see [Localization support in Azure Maps]. |
+| `region` | `countrySet` |
For more information on using the search service, see [Search for a location using Azure Maps Search services]. Be sure to review [best practices for search].
Reverse geocoding is the process of converting geographic coordinates into an ap
Azure Maps provides several reverse geocoding methods:
-* **[Address reverse geocoder]**: Specify a single geographic coordinate to get the approximate address corresponding to this coordinate. Processes the request near real time.
-* **[Cross street reverse geocoder]**: Specify a single geographic coordinate to get nearby cross street information and process the request immediately. For example, you may receive the following cross streets 1st Ave and Main St.
-* **[Batch address reverse geocoder]**: Create a request containing up to 10,000 coordinates and have them processed over a period of time. All data is processed in parallel on the server. When the request completes, you can download the full set of results.
+* [Get Search Address Reverse]: Specify a single geographic coordinate to get the approximate address corresponding to this coordinate. Processes the request near real time (see the example request after this list).
+* [Get Search Address Reverse Cross Street]: Specify a single geographic coordinate to get nearby cross street information and process the request immediately. For example, you may receive the following cross streets: 1st Ave and Main St.
+* [Post Search Address Reverse Batch]: Create a request containing up to 10,000 coordinates and have them processed over a period of time. All data is processed in parallel on the server. When the request completes, you can download the full set of results.
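As a rough sketch, a [Get Search Address Reverse] request passes the coordinate as `latitude,longitude` in the `query` parameter; the subscription key is a placeholder.

```text
GET https://atlas.microsoft.com/search/address/reverse/json?api-version=1.0&query=40.747,-73.985&subscription-key={Your-Azure-Maps-Subscription-key}
```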
This table cross-references the Google Maps API parameters with the comparable API parameters in Azure Maps.
Point of interest data can be searched in Google Maps using the Places Search AP
Azure Maps provides several search APIs for points of interest:
-* **[POI search]**: Search for points of interests by name. For example, "Starbucks".
-* **[POI category search]**: Search for points of interests by category. For example, "restaurant".
-* **[Nearby search]**: Searches for points of interests that are within a certain distance of a location.
-* **[Fuzzy search]**: This API combines address geocoding with point of interest search. This API takes in a free-form string that can be an address, place, landmark, point of interest, or point of interest category. It processes the request near real time. This API is recommended for applications where users search for addresses or points of interest in the same textbox.
-* **[Search within geometry]**: Search for points of interests that are within a specified geometry. For example, search a point of interest within a polygon.
-* **[Search along route]**: Search for points of interests that are along a specified route path.
-* **[Fuzzy batch search]**: Create a request containing up to 10,000 addresses, places, landmarks, or point of interests. Processed the request over a period of time. All data is processed in parallel on the server. When the request completes processing, you can download the full set of result.
+* [Get Search POI]: Search for points of interest by name. For example, "Starbucks" (see the example request after this list).
+* [Get Search POI Category]: Search for points of interest by category. For example, "restaurant".
+* [Get Search Nearby]: Searches for points of interest that are within a certain distance of a location.
+* [Get Search Fuzzy]: The fuzzy search API combines address geocoding with point of interest search. This API takes in a free-form string that can be an address, place, landmark, point of interest, or point of interest category and processes the request immediately. This API is recommended for applications where users can search for addresses or points of interest from the same textbox.
+* [Post Search Inside Geometry]: Search for points of interest that are within a specified geometry. For example, search for a point of interest within a polygon.
+* [Post Search Along Route]: Search for points of interest that are along a specified route path.
+* [Post Search Fuzzy Batch]: Use the fuzzy batch search to create a request containing up to 10,000 addresses, places, landmarks, or points of interest and have them processed over a period of time. All the data is processed in parallel on the server, and when completed the full result set can be downloaded.
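As a rough sketch, a [Get Search POI] request for "Starbucks" near a coordinate looks like the following; the key and coordinates are placeholders.

```text
GET https://atlas.microsoft.com/search/poi/json?api-version=1.0&query=starbucks&lat=47.6062&lon=-122.3321&subscription-key={Your-Azure-Maps-Subscription-key}
```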
Currently Azure Maps doesn't have a comparable API to the Text Search API in Google Maps.
For more information, see [best practices for search].
### Find place from text
-Use the Azure Maps [POI search] and [Fuzzy search] to search for points of interests by name or address.
+Use the Azure Maps [Get Search POI] and [Get Search Fuzzy] APIs to search for points of interest by name or address.
The table cross-references the Google Maps API parameters with the comparable Azure Maps API parameters.
The table cross-references the Google Maps API parameters with the comparable Az
### Nearby search
-Use the [Nearby search] API to retrieve nearby points of interests, in Azure Maps.
+Use the [Get Search Nearby] API to retrieve nearby points of interest in Azure Maps.
The table shows the Google Maps API parameters with the comparable Azure Maps API parameters.
Calculate routes and directions using Azure Maps. Azure Maps has many of the sam
* Arrival and departure times.
* Real-time and predictive based traffic routes.
-* Different modes of transportation. Such as, driving, walking, bicycling.
+* Different modes of transportation, such as driving, walking, and bicycling.
> [!NOTE]
> Azure Maps requires all waypoints to be coordinates. Addresses must be geocoded first.

The Azure Maps routing service provides the following APIs for calculating routes:
-* **[Calculate route]**: Calculate a route and have the request processed immediately. This API supports both `GET` and `POST` requests. `POST` requests are recommended when specifying a large number of waypoints or when using lots of the route options to ensure that the URL request doesn't become too long and cause issues. The `POST` Route Direction in Azure Maps has an option can that take in thousands of [supporting points] and use them to recreate a logical route path between them (snap to road).
-* **[Batch route]**: Create a request containing up to 1,000 route request and have them processed over a period of time. All the data is processed in parallel on the server and when completed the full result set can be downloaded.
+* [Get Route Directions]: Calculate a route and have the request processed immediately. This API supports both `GET` and `POST` requests. `POST` requests are recommended when specifying a large number of waypoints or many route options, to ensure that the URL request doesn't become too long and cause issues. The `POST` Route Directions API in Azure Maps has an option that can take in thousands of [supporting points] and use them to recreate a logical route path between them (snap to road). See the example request after this list.
+* [Post Route Directions Batch]: Create a request containing up to 1,000 route requests and have them processed over a period of time. All the data is processed in parallel on the server, and when completed the full result set can be downloaded.
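As a rough sketch, a `GET` [Get Route Directions] request chains waypoints as colon-separated `latitude,longitude` pairs; the key and coordinates are placeholders.

```text
GET https://atlas.microsoft.com/route/directions/json?api-version=1.0&query=47.6062,-122.3321:47.6205,-122.3493&subscription-key={Your-Azure-Maps-Subscription-key}
```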
The table cross-references the Google Maps API parameters with the comparable API parameters in Azure Maps.
The table cross-references the Google Maps API parameters with the comparable AP
> [!TIP]
> By default, the Azure Maps route API only returns a summary. It returns the distance and times and the coordinates for the route path. Use the `instructionsType` parameter to retrieve turn-by-turn instructions. And, use the `routeRepresentation` parameter to filter out the summary and route path.
-Azure Maps routing API has other features that aren't available in Google Maps. When migrating your app, consider using these features, you might find them useful.
+The Azure Maps routing API has other features that aren't available in Google Maps. When migrating your app, consider using these features:
* Support for route type: shortest, fastest, thrilling, and most fuel efficient.
-* Support for other travel modes: bus, motorcycle, taxi, truck, and van.
+* Support for other travel modes: bus, motorcycle, taxi, truck and van.
* Support for 150 waypoints.
* Compute multiple travel times in a single request; historic traffic, live traffic, no traffic.
* Avoid other road types: carpool roads, unpaved roads, already used roads.
* Specify custom areas to avoid.
-* Limit the elevation, which the route may ascend.
-* Route based on engine specifications. Calculate routes for combustion or electric vehicles based on engine specifications, and the remaining fuel or charge.
-* Support commercial vehicle route parameters. Such as, vehicle dimensions, weight, number of axels, and cargo type.
+* Limit the elevation that the route may ascend.
+* Route based on engine specifications. Calculate routes for combustion or electric vehicles based on engine specifications and the remaining fuel or charge.
+* Support commercial vehicle route parameters, such as vehicle dimensions, weight, number of axles, and cargo type.
* Specify maximum vehicle speed.
-In addition, the route service in Azure Maps supports [calculating routable ranges]. Calculating routable ranges is also known as isochrones. It entails generating a polygon covering an area that can be traveled to in any direction from an origin point. All under a specified amount of time or amount of fuel or charge.
+In addition, the route service in Azure Maps supports calculating routable ranges using [Get Route Range]. Calculating routable ranges is also known as isochrones. It entails generating a polygon covering an area that can be traveled to in any direction from an origin point, all within a specified amount of time or amount of fuel or charge.
For more information, see [best practices for routing].

## Retrieve a map image
-Azure Maps provides an API for rendering the static map images with data overlaid. The [Map image render] API in Azure Maps is comparable to the static map API in Google Maps.
+Azure Maps provides an API for rendering static map images with data overlaid. The [Get Map Static Image] API in Azure Maps is comparable to the static map API in Google Maps.
> [!NOTE]
> Azure Maps requires the center, marker, and path locations to be coordinates in "longitude,latitude" format, whereas Google Maps uses the "latitude,longitude" format. Addresses will need to be geocoded first.
The table cross-references the Google Maps API parameters with the comparable AP
| Google Maps API parameter | Comparable Azure Maps API parameter |
|--|--|
-| `center` | `center` |
-| `format` | `format` ΓÇô specified as part of URL path. Currently only PNG supported. |
-| `key` | `subscription-key` ΓÇô For more information, see [Authentication with Azure Maps]. |
-| `language` | `language` ΓÇô For more information, see [Localization support in Azure Maps]. |
-| `maptype` | `layer` and `style` ΓÇô See [Supported map styles](supported-map-styles.md) documentation. |
-| `markers` | `pins` |
-| `path` | `path` |
-| `region` | *N/A* ΓÇô This is a geocoding related feature. Use the `countrySet` parameter when using the Azure Maps geocoding API. |
-| `scale` | *N/A* |
-| `size` | `width` and `height` ΓÇô can be up to 8192x8192 in size. |
-| `style` | *N/A* |
-| `visible` | *N/A* |
-| `zoom` | `zoom` |
+| `center` | `center` |
+| `format` | `format` - specified as part of URL path. Currently only PNG supported. |
+| `key` | `subscription-key` - For more information, see [Authentication with Azure Maps]. |
+| `language` | `language` - For more information, see [Localization support in Azure Maps]. |
+| `maptype` | `layer` and `style` - For more information, see [Supported map styles]. |
+| `markers` | `pins` |
+| `path` | `path` |
+| `region` | *N/A* - A geocoding related feature. Use the `countrySet` parameter when using the Azure Maps geocoding API. |
+| `scale` | *N/A* |
+| `size` | `width` and `height` - Max size is 8192 x 8192. |
+| `style` | *N/A* |
+| `visible` | *N/A* |
+| `zoom` | `zoom` |
> [!NOTE]
> In the Azure Maps tile system, tiles are twice the size of map tiles used in Google Maps. As such, the zoom level value in Azure Maps will appear one zoom level closer in Azure Maps compared to Google Maps. To compensate for this difference, decrement the zoom level in the requests you are migrating.

For more information, see [Render custom data on a raster map].
-In addition to being able to generate a static map image, the Azure Maps render service provides the ability to directly access map tiles in raster (PNG) and vector format:
+In addition to being able to generate a static map image, the Azure Maps render service enables direct access of map tiles in raster (PNG) and vector format:
-* **[Map tile]**: Retrieve raster (PNG) and vector tiles for the base maps (roads, boundaries, background).
-* **[Map imagery tile]**: Retrieve aerial and satellite imagery tiles.
+* [Get Map Tile]: Retrieve raster (PNG) and vector tiles for the base maps (roads, boundaries, background), and aerial and satellite imagery tiles.
> [!TIP]
> Many Google Maps applications were switched from interactive map experiences to static map images a few years ago. This was done as a cost saving method. In Azure Maps, it is usually more cost effective to use the interactive map control in the Web SDK. The interactive map control charges based on the number of tile loads. Map tiles in Azure Maps are large. Often, it takes only a few tiles to recreate the same map view as a static map. Map tiles are cached automatically by the browser. As such, the interactive map control often generates a fraction of a transaction when reproducing a static map view. Panning and zooming will load more tiles; however, there are options in the map control to disable this behavior. The interactive map control also provides a lot more visualization options than the static map services.
Add three pins with the label values '1', '2', and '3':
**Before: Google Maps**
-Add lines and polygon to a static map image using the `path` parameter in the URL. The `path` parameter takes in a style and a list of locations to be rendered on the map, as shown below:
+Add lines and polygons to a static map image using the `path` parameter in the URL. The `path` parameter takes in a style and a list of locations to be rendered on the map:
```text
&path=pathStyles|pathLocation1|pathLocation2|...
```
Use other styles by adding extra `path` parameters to the URL with a different s
Path locations are specified with the `latitude1,longitude1|latitude2,longitude2|…` format. Paths can be encoded or contain addresses for points.
-Add path styles with the `optionName:value` format, separate multiple styles by the pipe (\|) characters. And, separate option names and values with a colon (:). Like this: `optionName1:value1|optionName2:value2`. The following style option names can be used to style paths in Google Maps:
+Add path styles with the `optionName:value` format, and separate multiple styles with pipe (\|) characters. Also separate option names and values with a colon (:). For example: `optionName1:value1|optionName2:value2`. The following style option names can be used to style paths in Google Maps:
* `color` - The color of the path or polygon outline. Can be a 24-bit hex color (`0xrrggbb`), a 32-bit hex color (`0xrrggbbaa`) or one of the following values: black, brown, green, purple, yellow, blue, gray, orange, red, white.
* `fillColor` - The color to fill the path area with (polygon). Can be a 24-bit hex color (`0xrrggbb`), a 32-bit hex color (`0xrrggbbaa`) or one of the following values: black, brown, green, purple, yellow, blue, gray, orange, red, white.
Add a red line opacity and pixel thickness between the coordinates, in the URL p
Azure Maps provides the distance matrix API. Use this API to calculate travel times and distances between a set of origins and destinations, returned as a distance matrix. It's comparable to the distance matrix API in Google Maps.
-* **[Route matrix]**(/rest/api/maps/route/postroutematrixpreview): Asynchronously calculates travel times and distances for a set of origins and destinations. Supports up to 700 cells per request. That's the number of origins multiplied by the number of destinations. With that constraint in mind, examples of possible matrix dimensions are: 700x1, 50x10, 10x10, 28x25, 10x70.
+* [Post Route Matrix]: Asynchronously calculates travel times and distances for a set of origins and destinations. Supports up to 700 cells per request. That's the number of origins multiplied by the number of destinations. With that constraint in mind, examples of possible matrix dimensions are: 700x1, 50x10, 10x10, 28x25, 10x70.
> [!NOTE]
> A request to the distance matrix API can only be made using a `POST` request with the origin and destination information in the body of the request. Additionally, Azure Maps requires all origins and destinations to be coordinates. Addresses will need to be geocoded first.
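As a rough sketch, a small synchronous matrix request posts the origins and destinations as GeoJSON `MultiPoint` geometries; the key and coordinates are placeholders, and coordinates are in `[longitude, latitude]` order. Larger matrices use the asynchronous endpoint with a polling pattern.

```text
POST https://atlas.microsoft.com/route/matrix/sync/json?api-version=1.0&subscription-key={Your-Azure-Maps-Subscription-key}

{
  "origins": {
    "type": "MultiPoint",
    "coordinates": [[-122.3321, 47.6062], [-122.1185, 47.6702]]
  },
  "destinations": {
    "type": "MultiPoint",
    "coordinates": [[-122.3493, 47.6205]]
  }
}
```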
For more information, see [best practices for routing].
Azure Maps provides an API for retrieving the time zone of a coordinate. The Azure Maps time zone API is comparable to the time zone API in Google Maps:
-* **[Time zone by coordinate]**(/rest/api/maps/timezone/gettimezonebycoordinates): Specify a coordinate and receive the time zone details of the coordinate.
+* [Get Timezone By Coordinates]: Specify a coordinate and receive the time zone details of the coordinate.
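As a rough sketch, the request passes the coordinate as `latitude,longitude`; the key is a placeholder.

```text
GET https://atlas.microsoft.com/timezone/byCoordinates/json?api-version=1.0&query=47.6062,-122.3321&subscription-key={Your-Azure-Maps-Subscription-key}
```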
This table cross-references the Google Maps API parameters with the comparable API parameters in Azure Maps.
This table cross-references the Google Maps API parameters with the comparable A
In addition to this API, Azure Maps provides many time zone APIs. These APIs convert the time based on the names or the IDs of the time zone:
-* **[Time zone by ID]**: Returns current, historical, and future time zone information for the specified IANA time zone ID.
-* **[Time zone Enum IANA]**: Returns a full list of IANA time zone IDs. Updates to the IANA service are reflected in the system within one day.
-* **[Time zone Enum Windows]**: Returns a full list of Windows Time Zone IDs.
-* **[Time zone IANA version]**: Returns the current IANA version number used by Azure Maps.
-* **[Time zone Windows to IANA]**: Returns a corresponding IANA ID, given a valid Windows Time Zone ID. Multiple IANA IDs may be returned for a single Windows ID.
+* [Get Timezone By ID]: Returns current, historical, and future time zone information for the specified IANA time zone ID.
+* [Get Timezone Enum IANA]: Returns a full list of IANA time zone IDs. Updates to the IANA service are reflected in the system within one day.
+* [Get Timezone Enum Windows]: Returns a full list of Windows Time Zone IDs.
+* [Get Timezone IANA Version]: Returns the current IANA version number used by Azure Maps.
+* [Get Timezone Windows To IANA]: Returns a corresponding IANA ID, given a valid Windows Time Zone ID. Multiple IANA IDs may be returned for a single Windows ID.
## Client libraries

Azure Maps provides client libraries for the following programming languages:
-* JavaScript, TypeScript, Node.js ΓÇô [documentation] \| [npm package]
+* JavaScript, TypeScript, Node.js - [Azure Maps services module] \| [npm package]
These open-source client libraries are for other programming languages:
No resources to be cleaned up.
Learn more about Azure Maps REST

> [!div class="nextstepaction"]
-> [Best practices for search](how-to-use-best-practices-for-search.md)
+> [Best practices for search]
-[Address reverse geocoder]: /rest/api/maps/search/getsearchaddressreverse
[Authentication with Azure Maps]: azure-maps-authentication.md
[Azure Maps account]: quick-demo-map-app.md#create-an-azure-maps-account
+[Azure Maps services module]: how-to-use-services-module.md
[Basic snap to road logic]: https://samples.azuremaps.com/?sample=basic-snap-to-road-logic
-[Batch address geocoding]: /rest/api/maps/search/postsearchaddressbatchpreview
-[Batch address reverse geocoder]: /rest/api/maps/search/postsearchaddressreversebatchpreview
-[Batch route]: /rest/api/maps/route/postroutedirectionsbatchpreview
[best practices for routing]: how-to-use-best-practices-for-routing.md
[best practices for search]: how-to-use-best-practices-for-search.md
-[Calculate route]: /rest/api/maps/route/getroutedirections
[Calculate routes and directions]: #calculate-routes-and-directions
-[calculating routable ranges]: /rest/api/maps/route/getrouterange
-[Cross street reverse geocoder]: /rest/api/maps/search/getsearchaddressreversecrossstreet
-[documentation]: how-to-use-services-module.md
[free account]: https://azure.microsoft.com/free/
-[Free-form address geocoding]: /rest/api/maps/search/getsearchaddress
-[Fuzzy batch search]: /rest/api/maps/search/postsearchfuzzybatchpreview
-[Fuzzy search]: /rest/api/maps/search/getsearchfuzzy
+[Get Map Static Image]: /rest/api/maps/render-v2/get-map-static-image
+[Get Map Tile]: /rest/api/maps/render-v2/get-map-tile
+[Get Route Directions]: /rest/api/maps/route/get-route-directions
+[Get Route Range]: /rest/api/maps/route/get-route-range
+[Get Search Address Reverse Cross Street]: /rest/api/maps/search/get-search-address-reverse-cross-street
+[Get Search Address Reverse]: /rest/api/maps/search/get-search-address-reverse
+[Get Search Address Structured]: /rest/api/maps/search/get-search-address-structured
+[Get Search Address]: /rest/api/maps/search/get-search-address
+[Get Search Fuzzy]: /rest/api/maps/search/get-search-fuzzy
+[Get Search Nearby]: /rest/api/maps/search/get-search-nearby
+[Get Search POI Category]: /rest/api/maps/search/get-search-poi-category
+[Get Search POI]: /rest/api/maps/search/get-search-poi
+[Get Timezone By Coordinates]: /rest/api/maps/timezone/get-timezone-by-coordinates
+[Get Timezone By ID]: /rest/api/maps/timezone/get-timezone-by-id
+[Get Timezone Enum IANA]: /rest/api/maps/timezone/get-timezone-enum-iana
+[Get Timezone Enum Windows]: /rest/api/maps/timezone/get-timezone-enum-windows
+[Get Timezone IANA Version]: /rest/api/maps/timezone/get-timezone-iana-version
+[Get Timezone Windows To IANA]: /rest/api/maps/timezone/get-timezone-windows-to-iana
[GitHub project]: https://github.com/perfahlen/AzureMapsRestServices
[Localization support in Azure Maps]: supported-languages.md
-[manage authentication in Azure Maps]: how-to-manage-authentication.md
-[Map image render]: /rest/api/maps/render/getmapimagerytile
-[Map imagery tile]: /rest/api/maps/render/getmapimagerytile
-[Map tile]: /rest/api/maps/render-v2/get-map-tile
-[Nearby search]: /rest/api/maps/search/getsearchnearby
+[Manage authentication in Azure Maps]: how-to-manage-authentication.md
[npm package]: https://www.npmjs.com/package/azure-maps-rest
[NuGet package]: https://www.nuget.org/packages/AzureMapsRestToolkit
-[POI category search]: /rest/api/maps/search/getsearchpoicategory
-[POI search]: /rest/api/maps/search/getsearchpoi
+[Post Route Directions Batch]: /rest/api/maps/route/post-route-directions-batch
+[Post Route Matrix]: /rest/api/maps/route/post-route-matrix
+[Post Search Address Batch]: /rest/api/maps/search/post-search-address-batch
+[Post Search Address Reverse Batch]: /rest/api/maps/search/post-search-address-reverse-batch
+[Post Search Along Route]: /rest/api/maps/search/post-search-along-route
+[Post Search Fuzzy Batch]: /rest/api/maps/search/post-search-fuzzy-batch
+[Post Search Inside Geometry]: /rest/api/maps/search/post-search-inside-geometry
[Render custom data on a raster map]: how-to-render-custom-data.md
[Render]: /rest/api/maps/render-v2/get-map-static-image
[Reverse geocode a coordinate]: #reverse-geocode-a-coordinate
-[Route Matrix]: /rest/api/maps/route/postroutematrixpreview
[Route]: /rest/api/maps/route
-[Search along route]: /rest/api/maps/search/postsearchalongroute
[Search for a location using Azure Maps Search services]: how-to-search-for-address.md
-[Search within geometry]: /rest/api/maps/search/postsearchinsidegeometry
[Search]: /rest/api/maps/search
[Spatial operations]: /rest/api/maps/spatial
-[Structured address geocoding]: /rest/api/maps/search/getsearchaddressstructured
[subscription key]: quick-demo-map-app.md#get-the-subscription-key-for-your-account
+[Supported map styles]: supported-map-styles.md
[supported search categories]: supported-search-categories.md
-[supporting points]: /rest/api/maps/route/postroutedirections#supportingpoints
-[Time zone by ID]: /rest/api/maps/timezone/gettimezonebyid
-[Time zone Enum IANA]: /rest/api/maps/timezone/gettimezoneenumiana
-[Time zone Enum Windows]: /rest/api/maps/timezone/gettimezoneenumwindows
-[Time zone IANA version]: /rest/api/maps/timezone/gettimezoneianaversion
-[Time zone Windows to IANA]: /rest/api/maps/timezone/gettimezonewindowstoiana
-[Time Zone]: /rest/api/maps/timezone
+[supporting points]: /rest/api/maps/route/post-route-directions#request-body
+[Timezone]: /rest/api/maps/timezone
[Traffic]: /rest/api/maps/traffic
azure-maps Supported Map Styles https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/supported-map-styles.md
A **road** map is a standard map that displays roads. It also displays natural a
**Applicable APIs:**
-* [Map image]
-* [Map tile]
-* Web SDK map control
-* Android map control
-* Power BI visual
+* [Get Map Static Image]
+* [Get Map Tile]
+* [Web SDK map control]
+* [Android map control]
+* [Power BI visual]
## blank and blank_accessible
The **blank** and **blank_accessible** map styles provide a blank canvas for vis
**Applicable APIs:**
-* Web SDK map control
+* [Web SDK map control]
## satellite
The **satellite** style is a combination of satellite and aerial imagery.
**Applicable APIs:**
-* [Satellite tile]
-* Web SDK map control
-* Android map control
-* Power BI visual
+* [Get Map Tile]
+* [Web SDK map control]
+* [Android map control]
+* [Power BI visual]
## satellite_road_labels
This map style is a hybrid of roads and labels overlaid on top of satellite and
**Applicable APIs:**
-* Web SDK map control
-* Android map control
-* Power BI visual
+* [Web SDK map control]
+* [Android map control]
+* [Power BI visual]
## grayscale_dark
This map style is a hybrid of roads and labels overlaid on top of satellite and
**Applicable APIs:**
-* [Map image]
-* [Map tile]
-* Web SDK map control
-* Android map control
-* Power BI visual
+* [Get Map Static Image]
+* [Get Map Tile]
+* [Web SDK map control]
+* [Android map control]
+* [Power BI visual]
## grayscale_light
This map style is a hybrid of roads and labels overlaid on top of satellite and
![grayscale light map style](./media/supported-map-styles/grayscale-light.jpg)

**Applicable APIs:**
-* Web SDK map control
-* Android map control
-* Power BI visual
+
+* [Web SDK map control]
+* [Android map control]
+* [Power BI visual]
## night
This map style is a hybrid of roads and labels overlaid on top of satellite and
**Applicable APIs:**
-* Web SDK map control
-* Android map control
-* Power BI visual
+* [Web SDK map control]
+* [Android map control]
+* [Power BI visual]
## road_shaded_relief
This map style is a hybrid of roads and labels overlaid on top of satellite and
**Applicable APIs:**
-* [Map tile]
-* Web SDK map control
-* Android map control
-* Power BI visual
+* [Get Map Tile]
+* [Web SDK map control]
+* [Android map control]
+* [Power BI visual]
## high_contrast_dark
This map style is a hybrid of roads and labels overlaid on top of satellite and
**Applicable APIs:**
-* Web SDK map control
-* Android map control
-* Power BI visual
+* [Web SDK map control]
+* [Android map control]
+* [Power BI visual]
## high_contrast_light
This map style is a hybrid of roads and labels overlaid on top of satellite and
**Applicable APIs:**
-* Web SDK map control
-* Android map control
-* Power BI visual
+* [Web SDK map control]
+* [Android map control]
+* [Power BI visual]
## Map style accessibility
Learn about how to set a map style in Azure Maps:
> [!div class="nextstepaction"]
> [Choose a map style]
-[Choose the right pricing tier in Azure Maps]: choose-pricing-tier.md
-[Map image]: /rest/api/maps/render-v2/get-map-static-image
-[Map tile]: /rest/api/maps/render-v2/get-map-tile
-[Satellite tile]: /rest/api/maps/render/getmapimagerytilepreview
+[Android map control]: how-to-use-android-map-control-library.md
[Choose a map style]: choose-map-style.md
+[Get Map Static Image]: /rest/api/maps/render-v2/get-map-static-image
+[Get Map Tile]: /rest/api/maps/render-v2/get-map-tile
+[Power BI visual]: power-bi-visual-get-started.md
+[Web SDK map control]: how-to-use-map-control.md
azure-monitor App Insights Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/app-insights-overview.md
This section outlines supported scenarios.
* [ASP.NET](./asp-net.md)
* [Java](./opentelemetry-enable.md?tabs=java)
* [Node.js](./nodejs.md)
-* [Python](./opencensus-python.md)
+* [Python](/previous-versions/azure/azure-monitor/app/opencensus-python)
* [ASP.NET Core](./asp-net-core.md)

#### Client-side JavaScript SDK
azure-monitor Asp Net Dependencies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/asp-net-dependencies.md
A list of the latest [currently supported modules](https://github.com/microsoft/
* [User and page data](./javascript.md)
* [Availability](./availability-overview.md)
* Set up custom dependency tracking for [Java](opentelemetry-add-modify.md?tabs=java#add-custom-spans).
-* Set up custom dependency tracking for [OpenCensus Python](./opencensus-python-dependency.md).
+* Set up custom dependency tracking for [OpenCensus Python](/previous-versions/azure/azure-monitor/app/opencensus-python-dependency).
* [Write custom dependency telemetry](./api-custom-events-metrics.md#trackdependency)
* See [data model](./data-model-complete.md) for Application Insights types and data model.
* Check out [platforms](./app-insights-overview.md#supported-languages) supported by Application Insights.
azure-monitor Asp Net Trace Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/asp-net-trace-logs.md
Now you can easily filter out in **Transaction Search** all the messages of a pa
The Azure Monitor Log Handler allows you to export Python logs to Azure Monitor.
-Instrument your application with the [OpenCensus Python SDK](./opencensus-python.md) for Azure Monitor.
+Instrument your application with the [OpenCensus Python SDK](/previous-versions/azure/azure-monitor/app/opencensus-python) for Azure Monitor.
This example shows how to send a warning level log to Azure Monitor.
azure-monitor Azure Ad Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/azure-ad-authentication.md
The following SDKs and features are unsupported for use with Azure AD authentica
- [Application Insights Java 2.x SDK](deprecated-java-2x.md#monitor-dependencies-caught-exceptions-and-method-execution-times-in-java-web-apps).<br> Azure AD authentication is only available for Application Insights Java Agent greater than or equal to 3.2.0.
- [ApplicationInsights JavaScript web SDK](javascript.md).
-- [Application Insights OpenCensus Python SDK](opencensus-python.md) with Python version 3.4 and 3.5.
+- [Application Insights OpenCensus Python SDK](/previous-versions/azure/azure-monitor/app/opencensus-python) with Python version 3.4 and 3.5.
- [Certificate/secret-based Azure AD](../../active-directory/authentication/active-directory-certificate-based-authentication-get-started.md) isn't recommended for production. Use managed identities instead.
- On-by-default codeless monitoring (for languages) for Azure App Service, Azure Virtual Machines/Azure Virtual Machine Scale Sets, and Azure Functions.
- [Availability tests](availability-overview.md).
azure-monitor Azure Vm Vmss Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/azure-vm-vmss-apps.md
To instrument your Node.js application, use the [SDK](./nodejs.md).
### [Python](#tab/python)
-To monitor Python apps, use the [SDK](./opencensus-python.md).
+To monitor Python apps, use the [SDK](/previous-versions/azure/azure-monitor/app/opencensus-python).
azure-monitor Azure Web Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/azure-web-apps.md
There are two ways to enable monitoring for applications hosted on App Service:
* **Manually instrumenting the application through code** by installing the Application Insights SDK.
- This approach is much more customizable, but it requires the following approaches: SDK for [.NET Core](./asp-net-core.md), [.NET](./asp-net.md), [Node.js](./nodejs.md), [Python](./opencensus-python.md), and a standalone agent for [Java](./opentelemetry-enable.md?tabs=java). This method also means you must manage the updates to the latest version of the packages yourself.
+ This approach is much more customizable, but it requires the SDK for [.NET Core](./asp-net-core.md), [.NET](./asp-net.md), [Node.js](./nodejs.md), or [Python](/previous-versions/azure/azure-monitor/app/opencensus-python), or a standalone agent for [Java](./opentelemetry-enable.md?tabs=java). This method also means you must manage the updates to the latest version of the packages yourself.
If you need to make custom API calls to track events/dependencies not captured by default with autoinstrumentation monitoring, you need to use this method. To learn more, see [Application Insights API for custom events and metrics](./api-custom-events-metrics.md).
azure-monitor Create Workspace Resource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/create-workspace-resource.md
For information on how to set up an Application Insights SDK for code-based moni
- [Java](./opentelemetry-enable.md?tabs=java)
- [JavaScript](./javascript.md)
- [Node.js](./nodejs.md)
-- [Python](./opencensus-python.md)
+- [Python](/previous-versions/azure/azure-monitor/app/opencensus-python)
### Codeless monitoring and Visual Studio resource creation
azure-monitor Distributed Tracing Telemetry Correlation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/distributed-tracing-telemetry-correlation.md
The Application Insights agents and SDKs for .NET, .NET Core, Java, Node.js, and
* [Java](./opentelemetry-enable.md?tabs=java)
* [Node.js](../app/nodejs.md)
* [JavaScript](./javascript.md#enable-distributed-tracing)
-* [Python](opencensus-python.md)
+* [Python](/previous-versions/azure/azure-monitor/app/opencensus-python)
With the proper Application Insights SDK installed and configured, tracing information is automatically collected for popular frameworks, libraries, and technologies by SDK dependency auto-collectors. The full list of supported technologies is available in the [Dependency auto-collection documentation](asp-net-dependencies.md#dependency-auto-collection).
The following pages consist of language-by-language guidance to enable and confi
In addition to the Application Insights SDKs, Application Insights also supports distributed tracing through [OpenCensus](https://opencensus.io/). OpenCensus is an open-source, vendor-agnostic, single distribution of libraries to provide metrics collection and distributed tracing for services. It also enables the open-source community to enable distributed tracing with popular technologies like Redis, Memcached, or MongoDB. [Microsoft collaborates on OpenCensus with several other monitoring and cloud partners](https://open.microsoft.com/2018/06/13/microsoft-joins-the-opencensus-project/).
-For more information on OpenCensus for Python, see [Set up Azure Monitor for your Python application](opencensus-python.md).
+For more information on OpenCensus for Python, see [Set up Azure Monitor for your Python application](/previous-versions/azure/azure-monitor/app/opencensus-python).
The OpenCensus website maintains API reference documentation for [Python](https://opencensus.io/api/python/trace/usage.html), [Go](https://godoc.org/go.opencensus.io), and various guides for using OpenCensus.
By looking at the [Trace-Context header format](https://www.w3.org/TR/trace-cont
If you look at the request entry that was sent to Azure Monitor, you can see fields populated with the trace header information. You can find the data under **Logs (Analytics)** in the Azure Monitor Application Insights resource. The `id` field is in the format `<trace-id>.<span-id>`, where `trace-id` is taken from the trace header that was passed in the request and `span-id` is a generated 8-byte array for this span.
When this code runs, the following prints in the console:
Notice that there's a `spanId` present for the log message that's within the span. The `spanId` is the same as that which belongs to the span named `hello`.
-You can export the log data by using `AzureLogHandler`. For more information, see [Set up Azure Monitor for your Python application](./opencensus-python.md#logs).
+You can export the log data by using `AzureLogHandler`. For more information, see [Set up Azure Monitor for your Python application](/previous-versions/azure/azure-monitor/app/opencensus-python#logs).
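A minimal sketch of how that correlated output is produced, assuming the OpenCensus `logging` integration (`opencensus-ext-logging`) is installed; the connection string is a placeholder:

```python
import logging

from opencensus.ext.azure.log_exporter import AzureLogHandler
from opencensus.ext.azure.trace_exporter import AzureExporter
from opencensus.trace import config_integration
from opencensus.trace.samplers import ProbabilitySampler
from opencensus.trace.tracer import Tracer

# Stamp traceId/spanId onto every log record.
config_integration.trace_integrations(['logging'])
logging.basicConfig(format='%(asctime)s traceId=%(traceId)s spanId=%(spanId)s %(message)s')

logger = logging.getLogger(__name__)
logger.addHandler(AzureLogHandler(connection_string="InstrumentationKey=<your-ikey-here>"))

tracer = Tracer(exporter=AzureExporter(connection_string="InstrumentationKey=<your-ikey-here>"),
                sampler=ProbabilitySampler(1.0))

with tracer.span(name='hello'):
    logger.warning('inside the hello span')  # this record carries the span's spanId
```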
We can also pass trace information from one component to another for proper correlation. For example, consider a scenario where there are two components, `module1` and `module2`. `module1` calls functions in `module2`. To get logs from both `module1` and `module2` in a single trace, we can use the following approach:
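A minimal sketch of that approach, assuming OpenCensus's `Tracer(span_context=...)` constructor is used to hand the parent span context from `module1` to `module2`; the function name and connection string are placeholders:

```python
from opencensus.ext.azure.trace_exporter import AzureExporter
from opencensus.trace.samplers import ProbabilitySampler
from opencensus.trace.tracer import Tracer

CONNECTION_STRING = "InstrumentationKey=<your-ikey-here>"  # placeholder

def module2_work(span_context):
    # module2: adopt the caller's span context instead of starting a new trace.
    tracer = Tracer(span_context=span_context,
                    exporter=AzureExporter(connection_string=CONNECTION_STRING),
                    sampler=ProbabilitySampler(1.0))
    with tracer.span(name='module2'):
        pass  # module2's work happens here

# module1: open a span, then hand its context across the module boundary.
tracer = Tracer(exporter=AzureExporter(connection_string=CONNECTION_STRING),
                sampler=ProbabilitySampler(1.0))
with tracer.span(name='module1'):
    module2_work(tracer.span_context)
```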
azure-monitor Get Metric https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/get-metric.md
Throttling is a concern because it can lead to missed alerts. The condition to t
In summary, we recommend `GetMetric()` because it does pre-aggregation, it accumulates values from all the `Track()` calls, and sends a summary/aggregate once every minute. The `GetMetric()` method can significantly reduce the cost and performance overhead by sending fewer data points while still collecting all relevant information. > [!NOTE]
-> Only the .NET and .NET Core SDKs have a `GetMetric()` method. If you're using Java, see [Sending custom metrics using micrometer](./java-standalone-config.md#autocollected-micrometer-metrics-including-spring-boot-actuator-metrics). For JavaScript and Node.js, you would still use `TrackMetric()`, but keep in mind the caveats that were outlined in the previous section. For Python, you can use [OpenCensus.stats](./opencensus-python.md#metrics) to send custom metrics, but the metrics implementation is different.
+> Only the .NET and .NET Core SDKs have a `GetMetric()` method. If you're using Java, see [Sending custom metrics using micrometer](./java-standalone-config.md#autocollected-micrometer-metrics-including-spring-boot-actuator-metrics). For JavaScript and Node.js, you would still use `TrackMetric()`, but keep in mind the caveats that were outlined in the previous section. For Python, you can use [OpenCensus.stats](/previous-versions/azure/azure-monitor/app/opencensus-python#metrics) to send custom metrics, but the metrics implementation is different.
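For the Python route that the note points to, a minimal OpenCensus.stats sketch; the metric name is illustrative, and the connection string is read from the `APPLICATIONINSIGHTS_CONNECTION_STRING` environment variable:

```python
from opencensus.ext.azure import metrics_exporter
from opencensus.stats import aggregation as aggregation_module
from opencensus.stats import measure as measure_module
from opencensus.stats import stats as stats_module
from opencensus.stats import view as view_module
from opencensus.tags import tag_map as tag_map_module

stats = stats_module.stats

# A pre-aggregated counter: values accumulate locally and are exported periodically.
clicks_measure = measure_module.MeasureInt("clicks", "number of clicks", "clicks")
clicks_view = view_module.View("clicks view", "number of clicks", [],
                               clicks_measure, aggregation_module.CountAggregation())
stats.view_manager.register_view(clicks_view)
stats.view_manager.register_exporter(metrics_exporter.new_metrics_exporter())

mmap = stats.stats_recorder.new_measurement_map()
mmap.measure_int_put(clicks_measure, 1)
mmap.record(tag_map_module.TagMap())
```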
## Get started with GetMetric
azure-monitor Java Standalone Telemetry Processors Examples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-standalone-telemetry-processors-examples.md
Let's assume the input log message body is `User account with userId 123456xx fa
} } ```+
+## Frequently asked questions
+
+### Why doesn't the log processor process logs using TelemetryClient.trackTrace()?
+
+`TelemetryClient.trackTrace()` is part of the Application Insights Classic SDK bridge, and the log processors only work with the new [OpenTelemetry-based instrumentation](opentelemetry-enable.md).
azure-monitor Java Standalone Telemetry Processors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-standalone-telemetry-processors.md
Some use cases:
Before you learn about telemetry processors, you should understand the terms *span* and *log*.
-A span is a type of telemetry that represent one of:
+A span is a type of telemetry that represents one of:
* An incoming request. * An outgoing dependency (for example, a remote call to another service).
The log processor modifies either the log message body or attributes of a log ba
### Update Log message body
-The `body` section requires the `fromAttributes` setting. The values from these attributes are used to create a new body, concatenated in the order that the configuration specifies. The processor will change the log body only if all of these attributes are present on the log.
+The `body` section requires the `fromAttributes` setting. The values from these attributes are used to create a new body, concatenated in the order that the configuration specifies. The processor changes the log body only if all of these attributes are present on the log.
The `separator` setting is optional. It's a string that's inserted between the concatenated attribute values. > [!NOTE]
For more information, see [Telemetry processor examples](./java-standalone-telem
Metric filters are used to exclude some metrics to help control ingestion cost.
-Metric filters only support `exclude` criteria. Metrics that match its `exclude` criteria will not be exported.
+Metric filters only support `exclude` criteria. Metrics that match the `exclude` criteria won't be exported.
To configure this option, under `exclude`, specify the `matchType` and one or more `metricNames`.
To configure this option, under `exclude`, specify the `matchType` one or more `
| `\Process(??APP_WIN32_PROC??)\Private Bytes` | default metrics | Sum of [MemoryMXBean.getHeapMemoryUsage()](https://docs.oracle.com/javase/8/docs/api/java/lang/management/MemoryMXBean.html#getHeapMemoryUsage--) and [MemoryMXBean.getNonHeapMemoryUsage()](https://docs.oracle.com/javase/8/docs/api/java/lang/management/MemoryMXBean.html#getNonHeapMemoryUsage--). | no |
| `\Process(??APP_WIN32_PROC??)\IO Data Bytes/sec` | default metrics | `/proc/[pid]/io` Sum of bytes read and written by the process (diff since last reported). See [proc(5)](https://man7.org/linux/man-pages/man5/proc.5.html). | no |
| `\Memory\Available Bytes` | default metrics | See [OperatingSystemMXBean.getFreePhysicalMemorySize()](https://docs.oracle.com/javase/7/docs/jre/api/management/extension/com/sun/management/OperatingSystemMXBean.html#getFreePhysicalMemorySize()). | no |
+
+## Frequently asked questions
+
+### Why doesn't the log processor process logs using TelemetryClient.trackTrace()?
+
+`TelemetryClient.trackTrace()` is part of the Application Insights Classic SDK bridge, and the log processors only work with the new [OpenTelemetry-based instrumentation](opentelemetry-enable.md).
azure-monitor Kubernetes Codeless https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/kubernetes-codeless.md
> [!IMPORTANT] > Currently, you can enable monitoring for your Java apps running on Azure Kubernetes Service (AKS) without instrumenting your code by using the [Java standalone agent](./opentelemetry-enable.md?tabs=java).
-> While the solution to seamlessly enable application monitoring is in process for other languages, use the SDKs to monitor your apps running on AKS. Use [ASP.NET Core](./asp-net-core.md), [ASP.NET](./asp-net.md), [Node.js](./nodejs.md), [JavaScript](./javascript.md), and [Python](./opencensus-python.md).
+> While the solution to seamlessly enable application monitoring is in process for other languages, use the SDKs to monitor your apps running on AKS. Use [ASP.NET Core](./asp-net-core.md), [ASP.NET](./asp-net.md), [Node.js](./nodejs.md), [JavaScript](./javascript.md), and [Python](/previous-versions/azure/azure-monitor/app/opencensus-python).
## Application monitoring without instrumenting the code Currently, only Java lets you enable application monitoring without instrumenting the code. To monitor applications in other languages, use the SDKs.
For the applications in other languages, we currently recommend using the SDKs:
* [ASP.NET](./asp-net.md) * [Node.js](./nodejs.md) * [JavaScript](./javascript.md)
-* [Python](./opencensus-python.md)
+* [Python](/previous-versions/azure/azure-monitor/app/opencensus-python)
## Troubleshooting
azure-monitor Opencensus Python Dependency https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/opencensus-python-dependency.md
- Title: Dependency Tracking in Azure Application Insights with OpenCensus Python | Microsoft Docs
-description: Monitor dependency calls for your Python apps via OpenCensus Python.
- Previously updated : 03/22/2023----
-# Track dependencies with OpenCensus Python
-
-> [!NOTE]
-> [OpenCensus Python SDK is deprecated](https://opentelemetry.io/blog/2023/sunsetting-opencensus/), but Microsoft supports it until retirement on September 30, 2024. We now recommend the [OpenTelemetry-based Python offering](https://learn.microsoft.com/azure/azure-monitor/app/opentelemetry-enable?tabs=python) and provide [migration guidance](https://learn.microsoft.com/azure/azure-monitor/app/opentelemetry-python-opencensus-migrate?tabs=aspnetcore).
-
-A dependency is an external component that is called by your application. Dependency data is collected using OpenCensus Python and its various integrations. The data is then sent to Application Insights under Azure Monitor as `dependencies` telemetry.
-
-First, instrument your Python application with the latest [OpenCensus Python SDK](./opencensus-python.md).
-
-## In-process dependencies
-
-OpenCensus Python SDK for Azure Monitor allows you to send "in-process" dependency telemetry (information and logic that occurs within your application). In-process dependencies have the `type` field set to `INPROC` in analytics.
-
-```python
-from opencensus.ext.azure.trace_exporter import AzureExporter
-from opencensus.trace.samplers import ProbabilitySampler
-from opencensus.trace.tracer import Tracer
-
-tracer = Tracer(exporter=AzureExporter(connection_string="InstrumentationKey=<your-ikey-here>"), sampler=ProbabilitySampler(1.0))
-
-with tracer.span(name='foo'): # <-- A dependency telemetry item will be sent for this span "foo"
- print('Hello, World!')
-```
-
-## Dependencies with "requests" integration
-
-Track your outgoing requests with the OpenCensus `requests` integration.
-
-Download and install `opencensus-ext-requests` from [PyPI](https://pypi.org/project/opencensus-ext-requests/) and add it to the trace integrations. Requests sent using the Python [requests](https://pypi.org/project/requests/) library will be tracked.
-
-```python
-import requests
-from opencensus.ext.azure.trace_exporter import AzureExporter
-from opencensus.trace import config_integration
-from opencensus.trace.samplers import ProbabilitySampler
-from opencensus.trace.tracer import Tracer
-
-config_integration.trace_integrations(['requests']) # <-- this line enables the requests integration
-
-tracer = Tracer(exporter=AzureExporter(connection_string="InstrumentationKey=<your-ikey-here>"), sampler=ProbabilitySampler(1.0))
-
-with tracer.span(name='parent'):
- response = requests.get(url='https://www.wikipedia.org/wiki/Rabbit') # <-- this request will be tracked
-```
-
-## Dependencies with "httplib" integration
-
-Track your outgoing requests with OpenCensus `httplib` integration.
-
-Download and install `opencensus-ext-httplib` from [PyPI](https://pypi.org/project/opencensus-ext-httplib/) and add it to the trace integrations. Requests sent using [http.client](https://docs.python.org/3.7/library/http.client.html) for Python3 or [httplib](https://docs.python.org/2/library/httplib.html) for Python2 will be tracked.
-
-```python
-import http.client as httplib
-from opencensus.ext.azure.trace_exporter import AzureExporter
-from opencensus.trace import config_integration
-from opencensus.trace.samplers import ProbabilitySampler
-from opencensus.trace.tracer import Tracer
-
-config_integration.trace_integrations(['httplib'])
-conn = httplib.HTTPConnection("www.python.org")
-
-tracer = Tracer(
- exporter=AzureExporter(),
- sampler=ProbabilitySampler(1.0)
-)
-
-conn.request("GET", "http://www.python.org", "", {})
-response = conn.getresponse()
-conn.close()
-```
-
-## Dependencies with "django" integration
-
-Track your outgoing Django requests with the OpenCensus `django` integration.
-
-> [!NOTE]
-> The only outgoing Django requests that are tracked are calls made to a database. For requests made to the Django application, see [incoming requests](./opencensus-python-request.md#track-django-applications).
-
-Download and install `opencensus-ext-django` from [PyPI](https://pypi.org/project/opencensus-ext-django/) and add the following line to the `MIDDLEWARE` section in the Django `settings.py` file.
-
-```python
-MIDDLEWARE = [
- ...
- 'opencensus.ext.django.middleware.OpencensusMiddleware',
-]
-```
-
-Additional configuration can be provided; read [customizations](https://github.com/census-instrumentation/opencensus-python#customization) for a complete reference.
-
-```python
-OPENCENSUS = {
- 'TRACE': {
- 'SAMPLER': 'opencensus.trace.samplers.ProbabilitySampler(rate=1)',
- 'EXPORTER': '''opencensus.ext.azure.trace_exporter.AzureExporter(
- connection_string="InstrumentationKey=<your-ikey-here>"
- )''',
- }
-}
-```
-
-You can find a Django sample application that uses dependencies in the Azure Monitor OpenCensus Python samples repository located [here](https://github.com/Azure-Samples/azure-monitor-opencensus-python/tree/master/azure_monitor/django_sample).
-
-## Dependencies with "mysql" integration
-
-Track your MySQL dependencies with the OpenCensus `mysql` integration. This integration supports the [mysql-connector](https://pypi.org/project/mysql-connector-python/) library.
-
-Download and install `opencensus-ext-mysql` from [PyPI](https://pypi.org/project/opencensus-ext-mysql/) and add the following lines to your code.
-
-```python
-from opencensus.trace import config_integration
-
-config_integration.trace_integrations(['mysql'])
-```
-
-## Dependencies with "pymysql" integration
-
-Track your PyMySQL dependencies with the OpenCensus `pymysql` integration.
-
-Download and install `opencensus-ext-pymysql` from [PyPI](https://pypi.org/project/opencensus-ext-pymysql/) and add the following lines to your code.
-
-```python
-from opencensus.trace import config_integration
-
-config_integration.trace_integrations(['pymysql'])
-```
-
-## Dependencies with "postgresql" integration
-
-Track your PostgreSQL dependencies with the OpenCensus `postgresql` integration. This integration supports the [psycopg2](https://pypi.org/project/psycopg2/) library.
-
-Download and install `opencensus-ext-postgresql` from [PyPI](https://pypi.org/project/opencensus-ext-postgresql/) and add the following lines to your code.
-
-```python
-from opencensus.trace import config_integration
-
-config_integration.trace_integrations(['postgresql'])
-```
-
-## Dependencies with "pymongo" integration
-
-Track your MongoDB dependencies with the OpenCensus `pymongo` integration. This integration supports the [pymongo](https://pypi.org/project/pymongo/) library.
-
-Download and install `opencensus-ext-pymongo` from [PyPI](https://pypi.org/project/opencensus-ext-pymongo/) and add the following lines to your code.
-
-```python
-from opencensus.trace import config_integration
-
-config_integration.trace_integrations(['pymongo'])
-```
-
-### Dependencies with "sqlalchemy" integration
-
-Track your SQLAlchemy dependencies by using the OpenCensus `sqlalchemy` integration. This integration tracks the usage of the [sqlalchemy](https://pypi.org/project/SQLAlchemy/) package, regardless of the underlying database.
-
-```python
-from opencensus.trace import config_integration
-
-config_integration.trace_integrations(['sqlalchemy'])
-```
-
-## Next steps
-
-* [Application Map](./app-map.md)
-* [Availability](./availability-overview.md)
-* [Search](./diagnostic-search.md)
-* [Log (Analytics) query](../logs/log-query-overview.md)
-* [Transaction diagnostics](./transaction-diagnostics.md)
-
azure-monitor Opencensus Python Request https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/opencensus-python-request.md
- Title: Incoming request tracking in Application Insights with OpenCensus Python | Microsoft Docs
-description: Monitor request calls for your Python apps via OpenCensus Python.
- Previously updated : 06/23/2023----
-# Track incoming requests with OpenCensus Python
-
-> [!NOTE]
-> [OpenCensus Python SDK is deprecated](https://opentelemetry.io/blog/2023/sunsetting-opencensus/), but Microsoft supports it until retirement on September 30, 2024. We now recommend the [OpenTelemetry-based Python offering](https://learn.microsoft.com/azure/azure-monitor/app/opentelemetry-enable?tabs=python) and provide [migration guidance](https://learn.microsoft.com/azure/azure-monitor/app/opentelemetry-python-opencensus-migrate?tabs=aspnetcore).
-
-OpenCensus Python and its integrations collect incoming request data. You can track incoming request data sent to your web applications built on top of the popular web frameworks Django, Flask, and Pyramid. Application Insights receives the data as `requests` telemetry.
-
-First, instrument your Python application with the latest [OpenCensus Python SDK](./opencensus-python.md).
-
-## Track Django applications
-
-1. Download and install `opencensus-ext-django` from [PyPI](https://pypi.org/project/opencensus-ext-django/). Instrument your application with the `django` middleware. Incoming requests sent to your Django application are tracked.
-
-1. Include `opencensus.ext.django.middleware.OpencensusMiddleware` in your `settings.py` file under `MIDDLEWARE`.
-
- ```python
- MIDDLEWARE = (
- ...
- 'opencensus.ext.django.middleware.OpencensusMiddleware',
- ...
- )
- ```
-
-1. Make sure AzureExporter is configured properly in your `settings.py` under `OPENCENSUS`. For requests from URLs that you don't want to track, add them to `EXCLUDELIST_PATHS`.
-
- ```python
- OPENCENSUS = {
- 'TRACE': {
- 'SAMPLER': 'opencensus.trace.samplers.ProbabilitySampler(rate=1)',
- 'EXPORTER': '''opencensus.ext.azure.trace_exporter.AzureExporter(
- connection_string="InstrumentationKey=<your-ikey-here>"
- )''',
- 'EXCLUDELIST_PATHS': ['https://example.com'],  # <-- Requests to these URLs won't be traced.
- }
- }
- ```
-
-You can find a Django sample application in the [Azure Monitor OpenCensus Python samples repository](https://github.com/Azure-Samples/azure-monitor-opencensus-python/tree/master/azure_monitor/django_sample).
-
-## Track Flask applications
-
-1. Download and install `opencensus-ext-flask` from [PyPI](https://pypi.org/project/opencensus-ext-flask/). Instrument your application with the `flask` middleware. Incoming requests sent to your Flask application are tracked.
-
- ```python
-
- from flask import Flask
- from opencensus.ext.azure.trace_exporter import AzureExporter
- from opencensus.ext.flask.flask_middleware import FlaskMiddleware
- from opencensus.trace.samplers import ProbabilitySampler
-
- app = Flask(__name__)
- middleware = FlaskMiddleware(
- app,
- exporter=AzureExporter(connection_string="InstrumentationKey=<your-ikey-here>"),
- sampler=ProbabilitySampler(rate=1.0),
- )
-
- @app.route('/')
- def hello():
- return 'Hello World!'
-
- if __name__ == '__main__':
- app.run(host='localhost', port=8080, threaded=True)
-
- ```
-
-1. You can also configure your `flask` application through `app.config`. For requests from URLs that you don't want to track, add them to `EXCLUDELIST_PATHS`.
-
- ```python
- app.config['OPENCENSUS'] = {
- 'TRACE': {
- 'SAMPLER': 'opencensus.trace.samplers.ProbabilitySampler(rate=1.0)',
- 'EXPORTER': '''opencensus.ext.azure.trace_exporter.AzureExporter(
- connection_string="InstrumentationKey=<your-ikey-here>",
- )''',
- 'EXCLUDELIST_PATHS': ['https://example.com'],  # <-- Requests to these URLs won't be traced.
- }
- }
- ```
-
- > [!NOTE]
- > To run Flask under uWSGI in a Docker environment, you must first add `lazy-apps = true` to the uWSGI configuration file (uwsgi.ini). For more information, see the [issue description](https://github.com/census-instrumentation/opencensus-python/issues/660).
-
-You can find a Flask sample application that tracks requests in the [Azure Monitor OpenCensus Python samples repository](https://github.com/Azure-Samples/azure-monitor-opencensus-python/tree/master/azure_monitor/flask_sample).
-
-## Track Pyramid applications
-
-1. Download and install `opencensus-ext-pyramid` from [PyPI](https://pypi.org/project/opencensus-ext-pyramid/). Instrument your application with the `pyramid` tween. Incoming requests sent to your Pyramid application are tracked.
-
- ```python
- def main(global_config, **settings):
- config = Configurator(settings=settings)
-
- config.add_tween('opencensus.ext.pyramid'
- '.pyramid_middleware.OpenCensusTweenFactory')
- ```
-
-1. You can configure your `pyramid` tween directly in the code. For requests from URLs that you don't want to track, add them to `EXCLUDELIST_PATHS`.
-
- ```python
- settings = {
- 'OPENCENSUS': {
- 'TRACE': {
- 'SAMPLER': 'opencensus.trace.samplers.ProbabilitySampler(rate=1.0)',
- 'EXPORTER': '''opencensus.ext.azure.trace_exporter.AzureExporter(
- connection_string="InstrumentationKey=<your-ikey-here>",
- )''',
- 'EXCLUDELIST_PATHS': ['https://example.com'],  # <-- Requests to these URLs won't be traced.
- }
- }
- }
- config = Configurator(settings=settings)
- ```
-
-## Track FastAPI applications
-
-1. The following dependencies are required:
- - [fastapi](https://pypi.org/project/fastapi/)
- - [uvicorn](https://pypi.org/project/uvicorn/)
-
- In a production setting, we recommend that you deploy [uvicorn with gunicorn](https://www.uvicorn.org/deployment/#gunicorn).
-
-2. Download and install `opencensus-ext-fastapi` from [PyPI](https://pypi.org/project/opencensus-ext-fastapi/).
-
- `pip install opencensus-ext-fastapi`
-
-3. Instrument your application with the `fastapi` middleware.
-
- ```python
- from fastapi import FastAPI
- from opencensus.ext.fastapi.fastapi_middleware import FastAPIMiddleware
-
- app = FastAPI()
- app.add_middleware(FastAPIMiddleware)
-
- @app.get('/')
- def hello():
- return 'Hello World!'
- ```
-
-4. Run your application. Calls made to your FastAPI application should be automatically tracked. Telemetry should be logged directly to Azure Monitor.
-
-## Next steps
-
-* [Application Map](./app-map.md)
-* [Availability](./availability-overview.md)
-* [Search](./diagnostic-search.md)
-* [Log Analytics query](../logs/log-query-overview.md)
-* [Transaction diagnostics](./transaction-diagnostics.md)
azure-monitor Opencensus Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/opencensus-python.md
- Title: Monitor Python applications with Azure Monitor | Microsoft Docs
-description: This article provides instructions on how to wire up OpenCensus Python with Azure Monitor.
- Previously updated : 08/11/2023----
-# Set up Azure Monitor for your Python application
-
-> [!NOTE]
-> [OpenCensus Python SDK is deprecated](https://opentelemetry.io/blog/2023/sunsetting-opencensus/), but Microsoft supports it until retirement on September 30, 2024. We now recommend the [OpenTelemetry-based Python offering](https://learn.microsoft.com/azure/azure-monitor/app/opentelemetry-enable?tabs=python) and provide [migration guidance](https://learn.microsoft.com/azure/azure-monitor/app/opentelemetry-python-opencensus-migrate?tabs=aspnetcore).
-
-Azure Monitor supports distributed tracing, metric collection, and logging of Python applications.
-
-Microsoft's supported solution for tracking and exporting data for your Python applications is through the [OpenCensus Python SDK](#introducing-opencensus-python-sdk) via the [Azure Monitor exporters](#instrument-with-opencensus-python-sdk-with-azure-monitor-exporters).
-
-Microsoft doesn't recommend using any other telemetry SDKs for Python as a telemetry solution because they're unsupported.
-
-OpenCensus is converging into [OpenTelemetry](https://opentelemetry.io/). We continue to recommend OpenCensus while OpenTelemetry gradually matures.
-
-## Prerequisites
-
-You need an Azure subscription. If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/) before you begin.
--
-## Introducing OpenCensus Python SDK
-
-[OpenCensus](https://opencensus.io) is a set of open-source libraries to allow collection of distributed tracing, metrics, and logging telemetry. By using [Azure Monitor exporters](https://github.com/census-instrumentation/opencensus-python/tree/master/contrib/opencensus-ext-azure), you can send this collected telemetry to Application Insights. This article walks you through the process of setting up OpenCensus and Azure Monitor exporters for Python to send your monitoring data to Azure Monitor.
-
-## Instrument with OpenCensus Python SDK with Azure Monitor exporters
-
-Install the OpenCensus Azure Monitor exporters:
-
-```console
-python -m pip install opencensus-ext-azure
-```
-
-The SDK uses three Azure Monitor exporters to send different types of telemetry to Azure Monitor. They're `trace`, `metrics`, and `logs`. For more information on these telemetry types, see the [Data platform overview](../data-platform.md). Use the following instructions to send these telemetry types via the three exporters.
-
-## Telemetry type mappings
-
-OpenCensus maps the following exporters to the types of telemetry that you see in Azure Monitor.
-
-| Pillar of observability | Telemetry type in Azure Monitor | Explanation |
-|---|---|---|
-| Logs | Traces, exceptions, customEvents | Log telemetry, exception telemetry, event telemetry |
-| Metrics | customMetrics, performanceCounters | Custom metrics, performance counters |
-| Tracing | Requests, dependencies | Incoming requests, outgoing requests |
-
-### Logs
-
-1. First, let's generate some local log data.
-
- ```python
-
- import logging
-
- logger = logging.getLogger(__name__)
-
- def main():
- """Generate random log data."""
- for num in range(5):
- logger.warning(f"Log Entry - {num}")
-
- if __name__ == "__main__":
- main()
- ```
-
-1. A log entry is emitted for each number in the range.
-
- ```output
- Log Entry - 0
- Log Entry - 1
- Log Entry - 2
- Log Entry - 3
- Log Entry - 4
- ```
-
-1. We want to send this log data to Azure Monitor. You can specify the connection string in an environment variable, `APPLICATIONINSIGHTS_CONNECTION_STRING`. You may also pass the `connection_string` directly into the `AzureLogHandler`, but connection strings shouldn't be added to version control.
-
- ```shell
- APPLICATIONINSIGHTS_CONNECTION_STRING=<appinsights-connection-string>
- ```
-
- We recommend using the connection string to instantiate the exporters that are used to send telemetry to Application Insights. Modify your code from the previous step based on the following code sample:
-
- ```python
- import logging
- from opencensus.ext.azure.log_exporter import AzureLogHandler
-
- logger = logging.getLogger(__name__)
- logger.addHandler(AzureLogHandler())
-
- # Alternatively manually pass in the connection_string
- # logger.addHandler(AzureLogHandler(connection_string=<appinsights-connection-string>))
-
- """Generate random log data."""
- for num in range(5):
- logger.warning(f"Log Entry - {num}")
- ```
-
-1. The exporter sends log data to Azure Monitor. You can find the data under `traces`.
-
- In this context, `traces` isn't the same as `tracing`. Here, `traces` refers to the type of telemetry that you see in Azure Monitor when you utilize `AzureLogHandler`. But `tracing` refers to a concept in OpenCensus and relates to [distributed tracing](./distributed-tracing.md).
-
- > [!NOTE]
- > The root logger is configured with the level of `warning`. That means any logs that you send that have less severity are ignored, and in turn, won't be sent to Azure Monitor. For more information, see [Logging documentation](https://docs.python.org/3/library/logging.html#logging.Logger.setLevel).
-
-1. You can also add custom properties to your log messages in the `extra` keyword argument by using the `custom_dimensions` field. These properties appear as key-value pairs in `customDimensions` in Azure Monitor.
- > [!NOTE]
- > For this feature to work, you need to pass a dictionary to the `custom_dimensions` field. If you pass arguments of any other type, the logger ignores them.
-
- ```python
- import logging
-
- from opencensus.ext.azure.log_exporter import AzureLogHandler
-
- logger = logging.getLogger(__name__)
- logger.addHandler(AzureLogHandler())
- # Alternatively manually pass in the connection_string
- # logger.addHandler(AzureLogHandler(connection_string=<appinsights-connection-string>))
-
- properties = {'custom_dimensions': {'key_1': 'value_1', 'key_2': 'value_2'}}
-
- # Use properties in logging statements
- logger.warning('action', extra=properties)
- ```
-
-> [!NOTE]
-> As part of using Application Insights instrumentation, we collect and send diagnostic data to Microsoft. This data helps us run and improve Application Insights. You have the option to disable non-essential data collection. To learn more, see [Statsbeat in Application Insights](./statsbeat.md).
-
-#### Configure logging for Django applications
-
-You can configure logging explicitly in your application code, as shown earlier, or you can specify it in Django's logging configuration. This code can go into whatever file you use for your Django site's settings configuration, typically `settings.py`.
-
-For information on how to configure Django settings, see [Django settings](https://docs.djangoproject.com/en/4.0/topics/settings/). For more information on how to configure logging, see [Django logging](https://docs.djangoproject.com/en/4.0/topics/logging/).
-
-```python
-LOGGING = {
- "handlers": {
- "azure": {
- "level": "DEBUG",
- "class": "opencensus.ext.azure.log_exporter.AzureLogHandler",
- "connection_string": "<appinsights-connection-string>",
- },
- "console": {
- "level": "DEBUG",
- "class": "logging.StreamHandler",
- "stream": sys.stdout,
- },
- },
- "loggers": {
- "logger_name": {"handlers": ["azure", "console"]},
- },
-}
-```
-
-Be sure you use the logger with the same name as the one specified in your configuration.
-
-```python
-# views.py
-
-import logging
-
-logger = logging.getLogger("logger_name")
-logger.warning("this will be tracked")
-
-```
-
-#### Send exceptions
-
-OpenCensus Python doesn't automatically track and send `exception` telemetry. You send it through `AzureLogHandler` by logging exceptions with the Python logging library. You can add custom properties just like you do with normal logging.
-
-```python
-import logging
-
-from opencensus.ext.azure.log_exporter import AzureLogHandler
-
-logger = logging.getLogger(__name__)
-logger.addHandler(AzureLogHandler())
-# Alternatively, manually pass in the connection_string
-# logger.addHandler(AzureLogHandler(connection_string=<appinsights-connection-string>))
-
-properties = {'custom_dimensions': {'key_1': 'value_1', 'key_2': 'value_2'}}
-
-# Use properties in exception logs
-try:
- result = 1 / 0 # generate a ZeroDivisionError
-except Exception:
- logger.exception('Captured an exception.', extra=properties)
-```
-
-Because OpenCensus doesn't capture exceptions automatically, it's up to you how to log unhandled exceptions; it places no restrictions on how you do so, but the exception telemetry must be logged explicitly. One option is sketched below.
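A possible pattern, sketched here with Python's `sys.excepthook` (one option among several; the connection string is a placeholder):

```python
import logging
import sys

from opencensus.ext.azure.log_exporter import AzureLogHandler

logger = logging.getLogger(__name__)
logger.addHandler(AzureLogHandler(connection_string="InstrumentationKey=<your-ikey-here>"))

def log_unhandled(exc_type, exc_value, exc_traceback):
    # Send the exception to Application Insights, then defer to the default hook.
    logger.error("Unhandled exception", exc_info=(exc_type, exc_value, exc_traceback))
    sys.__excepthook__(exc_type, exc_value, exc_traceback)

sys.excepthook = log_unhandled
```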
-
-#### Send events
-
-You can send `customEvent` telemetry in exactly the same way that you send `trace` telemetry, except by using `AzureEventHandler` instead.
-
-```python
-import logging
-from opencensus.ext.azure.log_exporter import AzureEventHandler
-
-logger = logging.getLogger(__name__)
-logger.addHandler(AzureEventHandler())
-# Alternatively manually pass in the connection_string
-# logger.addHandler(AzureEventHandler(connection_string=<appinsights-connection-string>))
-
-logger.setLevel(logging.INFO)
-logger.info('Hello, World!')
-```
-
-#### Sampling
-
-For information on sampling in OpenCensus, see [Sampling in OpenCensus](sampling.md#configuring-fixed-rate-sampling-for-opencensus-python-applications).
-
-#### Log correlation
-
-For information on how to enrich your logs with trace context data, see OpenCensus Python [logs integration](distributed-tracing-telemetry-correlation.md#log-correlation).
-
-#### Modify telemetry
-
-For information on how to modify tracked telemetry before it's sent to Azure Monitor, see OpenCensus Python [telemetry processors](./api-filtering-sampling.md#opencensus-python-telemetry-processors).
-
-### Metrics
-
-OpenCensus.stats supports four aggregation methods, but Azure Monitor supports them only partially:
-
-- **Count**: The count of the number of measurement points. The value is cumulative, can only increase, and resets to 0 on restart.
-- **Sum**: A sum of the measurement points. The value is cumulative, can only increase, and resets to 0 on restart.
-- **LastValue**: Keeps the last recorded value and drops everything else.
-- **Distribution**: The Azure exporter doesn't support the histogram distribution of the measurement points.
-
-### Count aggregation example
-
-1. First, let's generate some local metric data. We create a metric to track the number of times the user selects the **Enter** key.
-
- ```python
-
- from datetime import datetime
- from opencensus.stats import aggregation as aggregation_module
- from opencensus.stats import measure as measure_module
- from opencensus.stats import stats as stats_module
- from opencensus.stats import view as view_module
- from opencensus.tags import tag_map as tag_map_module
-
- stats = stats_module.stats
- view_manager = stats.view_manager
- stats_recorder = stats.stats_recorder
-
- prompt_measure = measure_module.MeasureInt("prompts",
- "number of prompts",
- "prompts")
- prompt_view = view_module.View("prompt view",
- "number of prompts",
- [],
- prompt_measure,
- aggregation_module.CountAggregation())
- view_manager.register_view(prompt_view)
- mmap = stats_recorder.new_measurement_map()
- tmap = tag_map_module.TagMap()
-
- def main():
- for _ in range(4):
- mmap.measure_int_put(prompt_measure, 1)
- mmap.record(tmap)
- metrics = list(mmap.measure_to_view_map.get_metrics(datetime.utcnow()))
- print(metrics[0].time_series[0].points[0])
-
- if __name__ == "__main__":
- main()
- ```
-
-1. The metric counts the number of times it's recorded. With each entry, the value is incremented and the metric information appears in the console. The information includes the current value and the time stamp of when the metric was last updated.
-
- ```output
- Point(value=ValueLong(5), timestamp=2019-10-09 20:58:04.930426)
- Point(value=ValueLong(6), timestamp=2019-10-09 20:58:05.170167)
- Point(value=ValueLong(7), timestamp=2019-10-09 20:58:05.438614)
- Point(value=ValueLong(7), timestamp=2019-10-09 20:58:05.834216)
- ```
-
-1. Entering values is helpful for demonstration purposes, but we want to emit the metric data to Azure Monitor. Pass your connection string directly into the exporter. Or you can specify it in an environment variable, `APPLICATIONINSIGHTS_CONNECTION_STRING`. We recommend using the connection string to instantiate the exporters that are used to send telemetry to Application Insights. Modify your code from the previous step based on the following code sample:
-
- ```python
- from datetime import datetime
- from opencensus.ext.azure import metrics_exporter
- from opencensus.stats import aggregation as aggregation_module
- from opencensus.stats import measure as measure_module
- from opencensus.stats import stats as stats_module
- from opencensus.stats import view as view_module
- from opencensus.tags import tag_map as tag_map_module
-
- stats = stats_module.stats
- view_manager = stats.view_manager
- stats_recorder = stats.stats_recorder
-
- prompt_measure = measure_module.MeasureInt("prompts",
- "number of prompts",
- "prompts")
- prompt_view = view_module.View("prompt view",
- "number of prompts",
- [],
- prompt_measure,
- aggregation_module.CountAggregation())
- view_manager.register_view(prompt_view)
- mmap = stats_recorder.new_measurement_map()
- tmap = tag_map_module.TagMap()
-
- exporter = metrics_exporter.new_metrics_exporter()
- # Alternatively manually pass in the connection_string
- # exporter = metrics_exporter.new_metrics_exporter(connection_string='<appinsights-connection-string>')
-
- view_manager.register_exporter(exporter)
-
- def main():
- for _ in range(10):
- input("Press enter.")
- mmap.measure_int_put(prompt_measure, 1)
- mmap.record(tmap)
- metrics = list(mmap.measure_to_view_map.get_metrics(datetime.utcnow()))
- print(metrics[0].time_series[0].points[0])
-
- if __name__ == "__main__":
- main()
- ```
-
-1. The exporter sends metric data to Azure Monitor at a fixed interval. Set this value to 60 seconds, because the Application Insights back end assumes that metric points are aggregated over a 60-second interval. We're tracking a single metric, so this metric data, with whatever value and time stamp it contains, is sent every interval. The data is cumulative, can only increase, and resets to 0 on restart.
-
- You can find the data under `customMetrics`, but the `customMetrics` properties `valueCount`, `valueSum`, `valueMin`, `valueMax`, and `valueStdDev` aren't effectively used.
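A minimal sketch of setting the 60-second interval explicitly, using the `export_interval` argument listed in the exporter configuration table later in this article (the connection string is a placeholder):

```python
from opencensus.ext.azure import metrics_exporter

# Align the export interval with the back end's 60-second aggregation window.
exporter = metrics_exporter.new_metrics_exporter(
    connection_string="InstrumentationKey=<your-ikey-here>",
    export_interval=60.0,
)
```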
-
-### Set custom dimensions in metrics
-
-The OpenCensus Python SDK allows you to add custom dimensions to your metrics telemetry by using `tags`, which are like a dictionary of key-value pairs.
-
-1. Insert the tags that you want to use into the tag map. The tag map acts like a sort of "pool" of all available tags you can use.
-
- ```python
- ...
- tmap = tag_map_module.TagMap()
- tmap.insert("url", "http://example.com")
- ...
- ```
-
-1. For a specific `View`, specify the tags you want to use when you're recording metrics with that view via the tag key.
-
- ```python
- ...
- prompt_view = view_module.View("prompt view",
- "number of prompts",
- ["url"], # <-- A sequence of tag keys used to specify which tag key/value to use from the tag map
- prompt_measure,
- aggregation_module.CountAggregation())
- ...
- ```
-
-1. Be sure to use the tag map when you're recording in the measurement map. The tag keys that are specified in the `View` must be found in the tag map used to record.
-
- ```python
- ...
- mmap = stats_recorder.new_measurement_map()
- mmap.measure_int_put(prompt_measure, 1)
- mmap.record(tmap) # <-- pass the tag map in here
- ...
- ```
-
-1. Under the `customMetrics` table, all metric records emitted by using `prompt_view` have custom dimensions `{"url":"http://example.com"}`.
-
-1. To produce tags with different values by using the same keys, create new tag maps for them.
-
- ```python
- ...
- tmap = tag_map_module.TagMap()
- tmap2 = tag_map_module.TagMap()
- tmap.insert("url", "http://example.com")
- tmap2.insert("url", "https://www.wikipedia.org/wiki/")
- ...
- ```
-
-#### Performance counters
-
-By default, the metrics exporter sends a set of performance counters to Azure Monitor. You can disable this capability by setting the `enable_standard_metrics` flag to `False` in the constructor of the metrics exporter.
-
-```python
-...
-exporter = metrics_exporter.new_metrics_exporter(
- enable_standard_metrics=False,
- )
-...
-```
-
-The following performance counters are currently sent:
-- Available Memory (bytes)
-- CPU Processor Time (percentage)
-- Incoming Request Rate (per second)
-- Incoming Request Average Execution Time (milliseconds)
-- Process CPU Usage (percentage)
-- Process Private Bytes (bytes)
-
-You should be able to see these metrics in `performanceCounters`. For more information, see [Performance counters](./performance-counters.md).
-
-#### Modify telemetry
-
-For information on how to modify tracked telemetry before it's sent to Azure Monitor, see OpenCensus Python [telemetry processors](./api-filtering-sampling.md#opencensus-python-telemetry-processors).
-
-### Tracing
-
-> [!NOTE]
-> In OpenCensus, `tracing` refers to [distributed tracing](./distributed-tracing.md). The `AzureExporter` parameter sends `requests` and `dependency` telemetry to Azure Monitor.
-
-1. First, let's generate some trace data locally. In Python IDLE, or your editor of choice, enter the following code:
-
- ```python
- from opencensus.trace.samplers import ProbabilitySampler
- from opencensus.trace.tracer import Tracer
-
- tracer = Tracer(sampler=ProbabilitySampler(1.0))
-
- def main():
- with tracer.span(name="test") as span:
- for value in range(5):
- print(value)
--
- if __name__ == "__main__":
- main()
- ```
-
-1. With each entry, the value is printed to the shell. The OpenCensus Python module generates a corresponding piece of `SpanData`. The OpenCensus project defines a [trace as a tree of spans](https://opencensus.io/core-concepts/tracing/).
-
- ```output
- 0
- [SpanData(name='test', context=SpanContext(trace_id=8aa41bc469f1a705aed1bdb20c342603, span_id=None, trace_options=TraceOptions(enabled=True), tracestate=None), span_id='15ac5123ac1f6847', parent_span_id=None, attributes=BoundedDict({}, maxlen=32), start_time='2019-06-27T18:21:22.805429Z', end_time='2019-06-27T18:21:44.933405Z', child_span_count=0, stack_trace=None, annotations=BoundedList([], maxlen=32), message_events=BoundedList([], maxlen=128), links=BoundedList([], maxlen=32), status=None, same_process_as_parent_span=None, span_kind=0)]
- 1
- [SpanData(name='test', context=SpanContext(trace_id=8aa41bc469f1a705aed1bdb20c342603, span_id=None, trace_options=TraceOptions(enabled=True), tracestate=None), span_id='2e512f846ba342de', parent_span_id=None, attributes=BoundedDict({}, maxlen=32), start_time='2019-06-27T18:21:44.933405Z', end_time='2019-06-27T18:21:46.156787Z', child_span_count=0, stack_trace=None, annotations=BoundedList([], maxlen=32), message_events=BoundedList([], maxlen=128), links=BoundedList([], maxlen=32), status=None, same_process_as_parent_span=None, span_kind=0)]
- 2
- [SpanData(name='test', context=SpanContext(trace_id=8aa41bc469f1a705aed1bdb20c342603, span_id=None, trace_options=TraceOptions(enabled=True), tracestate=None), span_id='f3f9f9ee6db4740a', parent_span_id=None, attributes=BoundedDict({}, maxlen=32), start_time='2019-06-27T18:21:46.157732Z', end_time='2019-06-27T18:21:47.269583Z', child_span_count=0, stack_trace=None, annotations=BoundedList([], maxlen=32), message_events=BoundedList([], maxlen=128), links=BoundedList([], maxlen=32), status=None, same_process_as_parent_span=None, span_kind=0)]
- ```
-
-1. Viewing the output is helpful for demonstration purposes, but we want to emit `SpanData` to Azure Monitor. Pass your connection string directly into the exporter. Or you can specify it in an environment variable, `APPLICATIONINSIGHTS_CONNECTION_STRING`. We recommend using the connection string to instantiate the exporters that are used to send telemetry to Application Insights. Modify your code from the previous step based on the following code sample:
-
- ```python
- from opencensus.ext.azure.trace_exporter import AzureExporter
- from opencensus.trace.samplers import ProbabilitySampler
- from opencensus.trace.tracer import Tracer
-
- tracer = Tracer(
- exporter=AzureExporter(),
- sampler=ProbabilitySampler(1.0),
- )
- # Alternatively manually pass in the connection_string
- # exporter = AzureExporter(
- # connection_string='<appinsights-connection-string>',
- # ...
- # )
-
- def main():
- with tracer.span(name="test") as span:
- for value in range(5):
- print(value)
-
- if __name__ == "__main__":
- main()
- ```
-
-1. Now when you run the Python script, only the values are printed to the shell. The created `SpanData` is sent to Azure Monitor. You can find the emitted span data under `dependencies`.
-
- For more information about outgoing requests, see OpenCensus Python [dependencies](./opencensus-python-dependency.md). For more information on incoming requests, see OpenCensus Python [requests](./opencensus-python-request.md).
-
-#### Sampling
-
-For information on sampling in OpenCensus, see [Sampling in OpenCensus](sampling.md#configuring-fixed-rate-sampling-for-opencensus-python-applications).
-
-#### Trace correlation
-
-For more information on telemetry correlation in your trace data, see OpenCensus Python [telemetry correlation](distributed-tracing-telemetry-correlation.md#telemetry-correlation-in-opencensus-python).
-
-#### Modify telemetry
-
-For more information on how to modify tracked telemetry before it's sent to Azure Monitor, see OpenCensus Python [telemetry processors](./api-filtering-sampling.md#opencensus-python-telemetry-processors).
-
-## Configure Azure Monitor exporters
-
-As shown, there are three different Azure Monitor exporters that support OpenCensus. Each one sends different types of telemetry to Azure Monitor. To see what types of telemetry each exporter sends, see the preceding telemetry type mappings table.
-
-Each exporter accepts the same arguments for configuration, passed through the constructors. You can see information about each one here:
-
-|Exporter argument|Description|
-|:|:|
-`connection_string`| The connection string used to connect to your Azure Monitor resource. Takes priority over `instrumentation_key`.|
-`credential`| Credential class used by Azure Active Directory authentication. See the "Authentication" section that follows.|
-`enable_standard_metrics`| Used for `AzureMetricsExporter`. Signals the exporter to send [performance counter](../essentials/app-insights-metrics.md#performance-counters) metrics automatically to Azure Monitor. Defaults to `True`.|
-`export_interval`| Used to specify the frequency in seconds of exporting. Defaults to `15s`. For metrics, you MUST set it to 60 seconds or else your metric aggregations don't make sense in the metrics explorer.|
-`grace_period`| Used to specify the timeout for shutdown of exporters in seconds. Defaults to `5s`.|
-`instrumentation_key`| The instrumentation key used to connect to your Azure Monitor resource.|
-`logging_sampling_rate`| Used for `AzureLogHandler` and `AzureEventHandler`. Provides a sampling rate [0,1.0] for exporting logs/events. Defaults to `1.0`.|
-`max_batch_size`| Specifies the maximum size of telemetry that's exported at once.|
-`proxies`| Specifies a sequence of proxies to use for sending data to Azure Monitor. For more information, see [proxies](https://requests.readthedocs.io/en/latest/user/advanced/#proxies).|
-`storage_path`| A path to where the local storage folder exists (unsent telemetry). As of `opencensus-ext-azure` v1.0.3, the default path is the OS temp directory + `opencensus-python` + `your-ikey`. Prior to v1.0.3, the default path is `$USER` + `.opencensus` + `.azure` + `python-file-name`.|
-`timeout`| Specifies the networking timeout to send telemetry to the ingestion service in seconds. Defaults to `10s`.|
-
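As an illustration of the table above, a hedged sketch that passes several of these arguments to the trace exporter's constructor; all values are placeholders:

```python
from opencensus.ext.azure.trace_exporter import AzureExporter

exporter = AzureExporter(
    connection_string="InstrumentationKey=<your-ikey-here>",  # takes priority over instrumentation_key
    export_interval=15.0,  # seconds between exports (use 60.0 for the metrics exporter)
    max_batch_size=100,    # maximum telemetry items exported at once
    timeout=10.0,          # networking timeout in seconds
)
```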
-## Integrate with Azure Functions
-
-To capture custom telemetry in Azure Functions environments, use the OpenCensus Python Azure Functions [extension](https://github.com/census-ecosystem/opencensus-python-extensions-azure/tree/main/extensions/functions#opencensus-python-azure-functions-extension). For more information, see the [Azure Functions Python developer guide](../../azure-functions/functions-reference-python.md#log-custom-telemetry).
-
-## Authentication (preview)
-
-> [!NOTE]
-> The authentication feature is available starting from `opencensus-ext-azure` v1.1b0.
-
-Each of the Azure Monitor exporters can be configured to send telemetry payloads securely via OAuth authentication with Azure Active Directory. For more information, see the [Authentication documentation](./azure-ad-authentication.md).
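A minimal sketch, assuming `opencensus-ext-azure` v1.1b0 or later and the `azure-identity` package:

```python
from azure.identity import ManagedIdentityCredential
from opencensus.ext.azure.trace_exporter import AzureExporter

# Telemetry payloads are sent with an Azure Active Directory token.
exporter = AzureExporter(
    connection_string="InstrumentationKey=<your-ikey-here>",
    credential=ManagedIdentityCredential(),
)
```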
-
-## View your data with queries
-
-You can view the telemetry data that was sent from your application through the **Logs (Analytics)** tab.
-
-![Screenshot of the Overview pane with the Logs (Analytics) tab selected.](./media/opencensus-python/0010-logs-query.png)
-
-In the list under **Active**:
-- For telemetry sent with the Azure Monitor trace exporter, incoming requests appear under `requests`. Outgoing or in-process requests appear under `dependencies`.
-- For telemetry sent with the Azure Monitor metrics exporter, sent metrics appear under `customMetrics`.
-- For telemetry sent with the Azure Monitor logs exporter, logs appear under `traces`. Exceptions appear under `exceptions`.
-
-For more information about how to use queries and logs, see [Logs in Azure Monitor](../logs/data-platform-logs.md).
-
-## Learn more about OpenCensus for Python
-
-* [OpenCensus Python on GitHub](https://github.com/census-instrumentation/opencensus-python)
-* [Customization](https://github.com/census-instrumentation/opencensus-python/blob/master/README.rst#customization)
-* [Azure Monitor exporters on GitHub](https://github.com/census-instrumentation/opencensus-python/tree/master/contrib/opencensus-ext-azure)
-* [OpenCensus integrations](https://github.com/census-instrumentation/opencensus-python#extensions)
-* [Azure Monitor sample applications](https://github.com/Azure-Samples/azure-monitor-opencensus-python/tree/master/azure_monitor)
-
-## Troubleshooting
--
-## Release Notes
-
-For the latest release notes, see [Python Azure Monitor Exporter](https://github.com/census-instrumentation/opencensus-python/blob/master/contrib/opencensus-ext-azure/CHANGELOG.md).
-
-Our [Service Updates](https://azure.microsoft.com/updates/?service=application-insights) also summarize major Application Insights improvements.
-
-## Next steps
-
-* To enable usage experiences, [enable web or browser user monitoring](javascript.md)
-* [Track incoming requests](./opencensus-python-request.md).
-* [Track outgoing requests](./opencensus-python-dependency.md).
-* Check out the [Application map](./app-map.md).
-* Learn how to do [End-to-end performance monitoring](../app/tutorial-performance.md).
-
-### Alerts
-
-* [Availability overview](./availability-overview.md): Create tests to make sure your site is visible on the web.
-* [Smart diagnostics](../alerts/proactive-diagnostics.md): These tests run automatically, so you don't have to do anything to set them up. They tell you if your app has an unusual rate of failed requests.
-* [Metric alerts](../alerts/alerts-log.md): Set alerts to warn you if a metric crosses a threshold. You can set them on custom metrics that you code into your app.
azure-monitor Opentelemetry Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/opentelemetry-configuration.md
const credential = new ManagedIdentityCredential();
// Create a new AzureMonitorOpenTelemetryOptions object and set the credential property to the credential object. const options: AzureMonitorOpenTelemetryOptions = {
- credential: credential
+ azureMonitorExporterOptions: {
+ credential: credential
+ }
}; // Enable Azure Monitor integration using the useAzureMonitor function and the AzureMonitorOpenTelemetryOptions object.
azure-monitor Opentelemetry Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/opentelemetry-overview.md
Select your enablement approach:
- [ASP.NET](./asp-net.md) - [ASP.NET Core](./asp-net-core.md) - [Node.js](./nodejs.md)
- - [Python](./opencensus-python.md)
+ - [Python](/previous-versions/azure/azure-monitor/app/opencensus-python)
- [JavaScript: Web](./javascript.md) - [Azure Monitor OpenTelemetry Distro](opentelemetry-enable.md)
azure-monitor Pre Aggregated Metrics Log Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/pre-aggregated-metrics-log-metrics.md
The newer SDKs ([Application Insights 2.7](https://www.nuget.org/packages/Micros
For the SDKs that don't implement pre-aggregation (that is, older versions of Application Insights SDKs or browser instrumentation), the Application Insights back end still populates the new metrics by aggregating the events received by the Application Insights event collection endpoint. Although you don't benefit from the reduced volume of data transmitted over the wire, you can still use the pre-aggregated metrics and get better performance plus support for near-real-time dimensional alerting with SDKs that don't pre-aggregate metrics during collection.
-The collection endpoint pre-aggregates events before ingestion sampling. For this reason, [ingestion sampling](./sampling.md) will never affect the accuracy of pre-aggregated metrics, regardless of the SDK version you use with your application.
+The collection endpoint pre-aggregates events before ingestion sampling. For this reason, [ingestion sampling](./sampling.md) never affects the accuracy of pre-aggregated metrics, regardless of the SDK version you use with your application.
### SDK supported pre-aggregated metrics table
The collection endpoint pre-aggregates events before ingestion sampling. For thi
| .NET Core and .NET Framework | Supported (V2.13.1+)| Supported via [TrackMetric](api-custom-events-metrics.md#trackmetric)| Supported (V2.7.2+) via [GetMetric](get-metric.md) |
| Java | Not supported | Supported via [TrackMetric](api-custom-events-metrics.md#trackmetric)| Not supported |
| Node.js | Supported (V2.0.0+) | Supported via [TrackMetric](api-custom-events-metrics.md#trackmetric)| Not supported |
-| Python | Not supported | Supported | Partially supported via [OpenCensus.stats](opencensus-python.md#metrics) |
+| Python | Not supported | Supported | Partially supported via [OpenCensus.stats](/previous-versions/azure/azure-monitor/app/opencensus-python#metrics) |
> [!NOTE]
-> The metrics implementation for Python by using OpenCensus.stats is different from GetMetric. For more information, see the [Python documentation on metrics](./opencensus-python.md#metrics).
+> The metrics implementation for Python by using OpenCensus.stats is different from GetMetric. For more information, see the [Python documentation on metrics](/previous-versions/azure/azure-monitor/app/opencensus-python#metrics).
### Codeless supported pre-aggregated metrics table
The collection endpoint pre-aggregates events before ingestion sampling. For thi
## Use pre-aggregation with Application Insights custom metrics
-You can use pre-aggregation with custom metrics. The two main benefits are the ability to configure and alert on a dimension of a custom metric and reducing the volume of data sent from the SDK to the Application Insights collection endpoint.
+You can use pre-aggregation with custom metrics. The two main benefits are:
-There are several [ways of sending custom metrics from the Application Insights SDK](./api-custom-events-metrics.md). If your version of the SDK offers [GetMetric and TrackValue](./api-custom-events-metrics.md#getmetric), these methods are the preferred way of sending custom metrics. In this case, pre-aggregation happens inside the SDK. This approach reduces the volume of data stored in Azure and also the volume of data transmitted from the SDK to Application Insights. Otherwise, use the [trackMetric](./api-custom-events-metrics.md#trackmetric) method, which will pre-aggregate metric events during data ingestion.
+- The ability to configure and alert on a dimension of a custom metric
+- A reduced volume of data sent from the SDK to the Application Insights collection endpoint
+
+There are several [ways of sending custom metrics from the Application Insights SDK](./api-custom-events-metrics.md). If your version of the SDK offers [GetMetric and TrackValue](./api-custom-events-metrics.md#getmetric), these methods are the preferred way of sending custom metrics. In this case, pre-aggregation happens inside the SDK. This approach reduces the volume of data stored in Azure and also the volume of data transmitted from the SDK to Application Insights. Otherwise, use the [trackMetric](./api-custom-events-metrics.md#trackmetric) method, which pre-aggregates metric events during data ingestion.
## Custom metrics dimensions and pre-aggregation
azure-monitor Sampling https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/sampling.md
By default no sampling is enabled in the Java autoinstrumentation and SDK. Curre
### Configuring fixed-rate sampling for OpenCensus Python applications
-Instrument your application with the latest [OpenCensus Azure Monitor exporters](./opencensus-python.md).
+Instrument your application with the latest [OpenCensus Azure Monitor exporters](/previous-versions/azure/azure-monitor/app/opencensus-python).
> [!NOTE] > Fixed-rate sampling is not available for the metrics exporter. This means custom metrics are the only types of telemetry where sampling can NOT be configured. The metrics exporter will send all telemetry that it tracks.
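A minimal sketch of fixed-rate sampling with the trace exporter (the rate is illustrative; the connection string is a placeholder):

```python
from opencensus.ext.azure.trace_exporter import AzureExporter
from opencensus.trace.samplers import ProbabilitySampler
from opencensus.trace.tracer import Tracer

# Export roughly 25% of traces; custom metrics aren't affected by sampling.
tracer = Tracer(
    exporter=AzureExporter(connection_string="InstrumentationKey=<your-ikey-here>"),
    sampler=ProbabilitySampler(0.25),
)
```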
azure-monitor Sdk Connection String https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/sdk-connection-string.md
Get started at development time with:
* [ASP.NET Core](./asp-net-core.md)
* [Java](./opentelemetry-enable.md?tabs=java)
* [Node.js](./nodejs.md)
-* [Python](./opencensus-python.md)
+* [Python](/previous-versions/azure/azure-monitor/app/opencensus-python)
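As a quick illustration of how a connection string is wired up in Python (a sketch, not part of the original list), the OpenCensus log exporter accepts it directly; the instrumentation key below is a placeholder.

```python
# A minimal sketch, assuming opencensus-ext-azure is installed;
# the instrumentation key is a placeholder.
import logging

from opencensus.ext.azure.log_exporter import AzureLogHandler

logger = logging.getLogger(__name__)
logger.addHandler(AzureLogHandler(
    connection_string="InstrumentationKey=00000000-0000-0000-0000-000000000000"))

logger.warning("This record is routed by the connection string")
```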
azure-monitor Best Practices Data Collection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/best-practices-data-collection.md
To enable monitoring for an application, you must decide whether you'll use code
- [.NET console applications](app/console.md)
- [Java](app/opentelemetry-enable.md?tabs=java)
- [Node.js](app/nodejs.md)
-- [Python](app/opencensus-python.md)
+- [Python](/previous-versions/azure/azure-monitor/app/opencensus-python)
- [Other platforms](app/app-insights-overview.md#supported-languages)

### Configure availability testing
azure-monitor Container Insights Logging V2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-logging-v2.md
This applies to the scenario where you have already enabled container insights f
>* The configuration change can take a few minutes to complete before it takes effect. All ama-logs pods in the cluster will restart.
>* The restart is a rolling restart for all ama-logs pods. It won't restart all of them at the same time.
-## Multi-line logging in Container Insights (preview)
+## Multi-line logging in Container Insights
Azure Monitor container insights now supports multiline logging. With this feature enabled, previously split container logs are stitched together and sent as single entries to the ContainerLogV2 table. Customers can see container log lines up to 64 KB (up from the existing 16-KB limit). If the stitched log line is larger than 64 KB, it gets truncated due to Log Analytics limits.
-Additionally, the feature also adds support for .NET and Go stack traces, which appear as single entries instead of being split into multiple entries in ContainerLogV2 table.
+Additionally, the feature also adds support for .NET, Go, Python, and Java stack traces, which appear as single entries instead of being split into multiple entries in the ContainerLogV2 table.
+
+The following two screenshots demonstrate multi-line logging at work for a Go exception stack trace:
+
+Multi-line logging disabled scenario:
+
+![Screenshot that shows Multi-line logging disabled.](./media/container-insights-logging-v2/multi-line-disabled-go.png)
+
+Multi-line logging enabled scenario:
+
+[ ![Screenshot that shows Multi-line enabled.](./media/container-insights-logging-v2/multi-line-enabled-go.png) ](./media/container-insights-logging-v2/multi-line-enabled-go.png#lightbox)
+
+Similarly, the following screenshots show multi-line logging enabled for Java and Python stack traces:
+
+For Java:
+
+[ ![Screenshot that shows Multi-line enabled for Java.](./media/container-insights-logging-v2/multi-line-enabled-java.png) ](./media/container-insights-logging-v2/multi-line-enabled-java.png#lightbox)
+
+For Python:
+
+[ ![Screenshot that shows Multi-line enabled for Python.](./media/container-insights-logging-v2/multi-line-enabled-python.png) ](./media/container-insights-logging-v2/multi-line-enabled-python.png#lightbox)
### Prerequisites
Multi-line logging is a preview feature and can be enabled by setting **enabled*
[log_collection_settings.enable_multiline_logs]
   # fluent-bit based multiline log collection for go (stacktrace), dotnet (stacktrace)
   # if enabled will also stitch together container logs split by docker/cri due to size limits (16KB per log line)
-enabled = "true"
+ enabled = "true"
``` ## Next steps
azure-monitor Logs Ingestion Api Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/logs-ingestion-api-overview.md
Your application sends data to a [data collection endpoint (DCE)](../essentials/
You can modify the target table and workspace by modifying the DCR without any change to the API call or source data.

> [!NOTE]
> To migrate solutions from the [Data Collector API](data-collector-api.md), see [Migrate from Data Collector API and custom fields-enabled tables to DCR-based custom logs](custom-logs-migrate.md).
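To make the DCE/DCR flow concrete, here's a minimal sketch using the azure-monitor-ingestion Python library; the endpoint, DCR immutable ID, and stream name are placeholders for values from your own deployment.

```python
# A minimal sketch, assuming azure-monitor-ingestion and azure-identity are
# installed; endpoint, rule ID, and stream name are placeholders.
from azure.identity import DefaultAzureCredential
from azure.monitor.ingestion import LogsIngestionClient

client = LogsIngestionClient(
    endpoint="https://my-dce.westus2-1.ingest.monitor.azure.com",  # DCE URI (placeholder)
    credential=DefaultAzureCredential())

# The DCR, not this call, maps the stream to a target table and workspace,
# so re-pointing the DCR changes the destination without touching this code.
client.upload(
    rule_id="dcr-00000000000000000000000000000000",  # DCR immutable ID (placeholder)
    stream_name="Custom-MyTable_CL",                 # stream declared in the DCR (placeholder)
    logs=[{"TimeGenerated": "2023-09-29T00:00:00Z", "Message": "sample record"}])
```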
azure-monitor Usage Estimated Costs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/usage-estimated-costs.md
To gain more understanding of your usage and costs, create exports using Cost An
These exports are in CSV format and contain a list of daily usage (billed quantity and cost) by resource, billing meter, and a few more fields such as [AdditionalInfo](../cost-management-billing/automate/understand-usage-details-fields.md#list-of-fields-and-descriptions). You can use Microsoft Excel to do rich analyses of your usage that aren't possible in the Cost analysis experiences in the portal.
+The usage export has both the cost for your usage and the number of units of usage. Consequently, you can use this export to see the amount of benefits you're receiving from various offers such as the [Defender for Servers data allowance](logs/cost-logs.md#workspaces-with-microsoft-defender-for-cloud) and the [Microsoft Sentinel benefit for Microsoft 365 E5, A5, F5, and G5 customers](https://azure.microsoft.com/offers/sentinel-microsoft-365-offer/).
+ For instance, usage from Log Analytics can be found by first filtering on the **Meter Category** column to show 1. **Log Analytics** (for Pay-as-you-go data ingestion and interactive Data Retention),
To investigate your Application Insights usage more deeply, open the **Metrics**
## View data allocation benefits
-To view data allocation benefits from sources such as [Microsoft Defender for Servers](https://azure.microsoft.com/pricing/details/defender-for-cloud/), [Microsoft Sentinel benefit for Microsoft 365 E5, A5, F5, and G5 customers](https://azure.microsoft.com/offers/sentinel-microsoft-365-offer/), or the [Sentinel Free Trial](https://azure.microsoft.com/pricing/details/microsoft-sentinel/), you need to export your usage details.
+To view data allocation benefits from sources such as [Microsoft Defender for Servers](https://azure.microsoft.com/pricing/details/defender-for-cloud/), [Microsoft Sentinel benefit for Microsoft 365 E5, A5, F5, and G5 customers](https://azure.microsoft.com/offers/sentinel-microsoft-365-offer/), or the [Sentinel Free Trial](https://azure.microsoft.com/pricing/details/microsoft-sentinel/), you need to export your usage details as described above.
Open the exported usage spreadsheet and filter the **Instance ID** column to your workspace. (To select all your workspaces in the spreadsheet, filter the **Instance ID** column to **contains /workspaces/**.) Next, filter the **ResourceRate** column to show only rows where this rate is equal to zero. Now you'll see the data allocations from these various sources.
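The same filtering can be scripted. Here's a minimal sketch with pandas, assuming the export was saved as usage-export.csv; the column names follow this article (**Instance ID**, **ResourceRate**, **Meter Category**) and may need adjusting to match your export.

```python
# A minimal sketch; the file name and column names are assumptions based on
# the export described in this article.
import pandas as pd

usage = pd.read_csv("usage-export.csv")

# Keep rows for all Log Analytics workspaces.
workspaces = usage[usage["Instance ID"].str.contains("/workspaces/", case=False, na=False)]

# Rows billed at a zero resource rate are the data allocation benefits.
allocations = workspaces[workspaces["ResourceRate"] == 0]

print(allocations[["Instance ID", "Meter Category", "Quantity"]])
```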
Also, if you move a subscription to the new Azure monitoring pricing model in Ap
- For best practices on how to configure and manage Azure Monitor to minimize your charges, see [Azure Monitor best practices - Cost management](best-practices-cost.md). +
azure-monitor Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/whats-new.md
Agents|[Azure Monitor Agent overview](agents/agents-overview.md)|Log Analytics a
Alerts|[Common alert schema](alerts/alerts-common-schema.md)|Updated alert payload common schema to include custom properties.| Alerts|[Create and manage action groups in the Azure portal](alerts/action-groups.md)|Clarified use of basic auth in webhook.| Application-Insights|[Application Insights logging with .NET](app/ilogger.md)|We've made it easier to understand where to find iLogger telemetry.|
-Application-Insights|[Set up Azure Monitor for your Python application](app/opencensus-python.md)|Updated telemetry type mappings code sample.|
+Application-Insights|[Set up Azure Monitor for your Python application](/previous-versions/azure/azure-monitor/app/opencensus-python)|Updated telemetry type mappings code sample.|
Application-Insights|[Feature extensions for the Application Insights JavaScript SDK (Click Analytics)](app/javascript-feature-extensions.md)|Code samples updated to use connection strings.| Application-Insights|[Connection strings](app/sdk-connection-string.md)|Code samples updated for .NET 6/7.| Application-Insights|[Live Metrics: Monitor and diagnose with 1-second latency](app/live-stream.md)|Code samples updated for .NET 6/7.|
Visualizations|[Azure Workbooks](./visualize/workbooks-overview.md)|New video to
|[Application Insights Overview dashboard](app/overview-dashboard.md)|Added important information clarifying that moving or renaming resources breaks dashboards, with more instructions on how to resolve this scenario.| |[Application Insights override default SDK endpoints](/previous-versions/azure/azure-monitor/app/create-new-resource#override-default-endpoints)|Clarified that endpoint modification isn't recommended and to use connection strings instead.| |[Continuous export of telemetry from Application Insights](/previous-versions/azure/azure-monitor/app/export-telemetry)|Added important information about avoiding duplicates when you save diagnostic logs in a Log Analytics workspace.|
-|[Dependency tracking in Application Insights with OpenCensus Python](app/opencensus-python-dependency.md)|Updated Django sample application and documentation in the Azure Monitor OpenCensus Python samples repository.|
-|[Incoming request tracking in Application Insights with OpenCensus Python](app/opencensus-python-request.md)|Updated Django sample application and documentation in the Azure Monitor OpenCensus Python samples repository.|
-|[Monitor Python applications with Azure Monitor](app/opencensus-python.md)|Updated Django sample application and documentation in the Azure Monitor OpenCensus Python samples repository.|
+|[Dependency tracking in Application Insights with OpenCensus Python](/previous-versions/azure/azure-monitor/app/opencensus-python-dependency)|Updated Django sample application and documentation in the Azure Monitor OpenCensus Python samples repository.|
+|[Incoming request tracking in Application Insights with OpenCensus Python](/previous-versions/azure/azure-monitor/app/opencensus-python-request)|Updated Django sample application and documentation in the Azure Monitor OpenCensus Python samples repository.|
+|[Monitor Python applications with Azure Monitor](/previous-versions/azure/azure-monitor/app/opencensus-python)|Updated Django sample application and documentation in the Azure Monitor OpenCensus Python samples repository.|
|[Configuration options: Azure Monitor Application Insights for Java](app/java-standalone-config.md)|Updated connection string overrides example.| |[Application Insights SDK for ASP.NET Core applications](app/tutorial-asp-net-core.md)|Added a new tutorial with step-by-step instructions on how to use the Application Insights SDK with .NET Core applications.| |[Application Insights SDK support guidance](app/sdk-support-guidance.md)|Updated and clarified the SDK support guidance.|
azure-netapp-files Azure Netapp Files Resource Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-resource-limits.md
na Previously updated : 08/09/2023 Last updated : 09/29/2023 # Resource limits for Azure NetApp Files
For volumes 100 TiB or under, if you've allocated at least 5 TiB of quota for a
For volumes 100 TiB or under, you can increase the `maxfiles` limit up to 531,278,150 if your volume quota is at least 25 TiB. >[!IMPORTANT]
-> Once a volume has exceeded a `maxfiles` limit, you cannot reduce volume size below the quota corresponding to that `maxfiles` limit even if you have reduced the actual used file count. For example, if you have crossed the 63,753,378 `maxfiles` limit, the volume quota cannot be reduced below its corresponding index of 2 TiB.
+> When files or folders are allocated to an Azure NetApp Files volume, they count against the `maxfiles` limit. If a file or folder is deleted, the internal data structures for `maxfiles` allocation remain the same. For instance, if the files used in a volume increase to 63,753,378 and 100,000 files are deleted, the `maxfiles` allocation will remain at 63,753,378.
+> Once a volume has exceeded a `maxfiles` limit, you cannot reduce volume size below the quota corresponding to that `maxfiles` limit even if you have reduced the actual used file count. For example, the `maxfiles` limit for a 2 TiB volume is 63,753,378. If you create more than 63,753,378 files in that volume, the volume quota cannot be reduced below its corresponding index of 2 TiB.
**For [large volumes](azure-netapp-files-understand-storage-hierarchy.md#large-volumes):**
azure-netapp-files Backup Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/backup-introduction.md
na Previously updated : 09/13/2023 Last updated : 09/29/2023
Azure NetApp Files backup is supported for the following regions:
* South India * Southeast Asia * Sweden Central
+* UAE Central
* UAE North * UK South * West Europe
azure-relay Diagnostic Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-relay/diagnostic-logs.md
The new settings take effect in about 10 minutes. The logs are displayed in the
## Schema for hybrid connections events
-Hybrid connections event log JSON strings include the elements listed in the following table:
+Hybrid Connections event log JSON strings include the elements listed in the following table:
| Name | Description | | - | - |
Here's a sample hybrid connections event in JSON format.
} ``` +
+## Schema for VNet/IP filtering connection logs
+Hybrid Connections VNet/IP filtering connection logs include the elements listed in the following table:
+
+| Name | Description | Supported in Azure Diagnostics | Supported in AZMSVnetConnectionEvents (resource-specific table) |
+| ---- | ----------- | ------------------------------ | --------------------------------------------------------------- |
+| `SubscriptionId` | Azure subscription ID | Yes | Yes |
+| `NamespaceName` | Namespace name | Yes | Yes |
+| `IPAddress` | IP address of a client connecting to the Azure Relay service | Yes | Yes |
+| `AddressIp` | IP address of a client connecting to the Azure Relay service | Yes | Yes |
+| `TimeGenerated [UTC]` | Time of the executed operation (in UTC) | Yes | Yes |
+| `Action` | Action taken by the service when evaluating connection requests. Supported actions are **Accept Connection** and **Deny Connection**. | Yes | Yes |
+| `Reason` | Provides a reason why the action was taken | Yes | Yes |
+| `Count` | Number of occurrences for the given action | Yes | Yes |
+| `ResourceId` | Azure Resource Manager resource ID | Yes | Yes |
+| `Category` | Log category | Yes | No |
+| `Provider` | Name of the service emitting the logs, for example, `RELAY` | No | Yes |
+| `Type` | Type of logs emitted | No | Yes |
+
+> [!NOTE]
+> Virtual network logs are generated only if the namespace allows access from selected networks or from specific IP addresses (IP filter rules).
+
+## Sample VNet and IP Filtering Logs
+Here's an example of a virtual network log JSON string:
+
+AzureDiagnostics:
+```json
+{
+  "SubscriptionId": "00000000-0000-0000-0000-000000000000",
+ "NamespaceName": "namespace-name",
+ "IPAddress": "1.2.3.4",
+ "Action": "Accept Connection",
+ "Reason": "IP is accepted by IPAddress filter.",
+ "Count": 1,
+ "ResourceId": "/SUBSCRIPTIONS/<AZURE SUBSCRIPTION ID>/RESOURCEGROUPS/<RESOURCE GROUP NAME>/PROVIDERS/MICROSOFT.RELAY/NAMESPACES/<RELAY NAMESPACE NAME>",
+ "Category": "VNetAndIPFilteringLogs"
+}
+```
+Resource specific table entry:
+```json
+{
+  "SubscriptionId": "00000000-0000-0000-0000-000000000000",
+ "NamespaceName": "namespace-name",
+ "AddressIp": "1.2.3.4",
+ "Action": "Accept Connection",
+ "Message": "IP is accepted by IPAddress filter.",
+ "Count": 1,
+ "ResourceId": "/SUBSCRIPTIONS/<AZURE SUBSCRIPTION ID>/RESOURCEGROUPS/<RESOURCE GROUP NAME>/PROVIDERS/MICROSOFT.RELAY/NAMESPACES/<RELAY NAMESPACE NAME>",
+ "Provider" : "RELAY",
+ "Type": "AZMSVNetConnectionEvents"
+}
+```
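If these logs flow to a Log Analytics workspace, they can also be queried programmatically. Here's a minimal sketch with the azure-monitor-query library, assuming the resource-specific AZMSVnetConnectionEvents table; the workspace ID is a placeholder.

```python
# A minimal sketch, assuming azure-monitor-query and azure-identity are
# installed; the workspace ID is a placeholder.
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

client = LogsQueryClient(DefaultAzureCredential())

# Count denied connections per client IP address over the last day.
query = """
AZMSVnetConnectionEvents
| where Action == "Deny Connection"
| summarize Denied = sum(Count) by AddressIp
"""

response = client.query_workspace(
    workspace_id="00000000-0000-0000-0000-000000000000",  # placeholder
    query=query,
    timespan=timedelta(days=1))

for table in response.tables:
    for row in table.rows:
        print(dict(zip(table.columns, row)))
```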
++ ## Events and operations captured in diagnostic logs | Operation | Description |
azure-video-indexer Object Detection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/object-detection.md
+
+ Title: Azure AI Video Indexer object detection overview
+description: An introduction to object detection in Azure AI Video Indexer.
+ Last updated : 09/26/2023+++++
+# Azure AI Video Indexer object detection
+
+Azure AI Video Indexer can detect objects in videos. The insight is part of all standard and advanced presets.
+
+## Prerequisites
+
+Review the [transparency note overview](/legal/azure-video-indexer/transparency-note?context=/azure/azure-video-indexer/context/context).
+
+## JSON keys and definitions
+
+| **Key** | **Definition** |
+| --- | --- |
+| id | Incremental number of IDs of the detected objects in the media file |
+| type | Type of object, for example, car |
+| thumbnailId | GUID representing a single detection of the object |
+| displayName | Name to be displayed in the VI portal experience |
+| wikiDataId | A unique identifier in the WikiData structure |
+| instances | List of all instances that were tracked |
+| confidence | A score between 0-1 indicating the object detection confidence |
+| adjustedStart | Adjusted start time of the video when using the editor |
+| adjustedEnd | Adjusted end time of the video when using the editor |
+| start | The time that the object appears in the frame |
+| end | The time that the object no longer appears in the frame |
+## JSON response
+
+Object detection is included in the insights that are the result of an [Upload](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Upload-Video) request.
+
+### Detected and tracked objects
+
+Detected and tracked objects appear under `detectedObjects` in the downloaded *insights.json* file. Every time a unique object is detected, it's given an ID. That object is also tracked, meaning that the model watches for the detected object to return to the frame. If it does, another instance is added to the instances for the object with different start and end times.
+
+In this example, the first car was detected and given an ID of 1 since it was also the first object detected. Then, a different car was detected and that car was given the ID of 23 since it was the 23rd object detected. Later, the first car appeared again and another instance was added to the JSON. Here is the resulting JSON:
+
+```json
+"detectedObjects": [
+    {
+        "id": 1,
+        "type": "Car",
+        "thumbnailId": "1c0b9fbb-6e05-42e3-96c1-abe2cd48t33",
+        "displayName": "car",
+        "wikiDataId": "Q1420",
+        "instances": [
+            {
+                "confidence": 0.468,
+                "adjustedStart": "0:00:00",
+                "adjustedEnd": "0:00:02.44",
+                "start": "0:00:00",
+                "end": "0:00:02.44"
+            },
+            {
+                "confidence": 0.53,
+                "adjustedStart": "0:03:00",
+                "adjustedEnd": "0:00:03.55",
+                "start": "0:03:00",
+                "end": "0:00:03.55"
+            }
+        ]
+    },
+    {
+        "id": 23,
+        "type": "Car",
+        "thumbnailId": "1c0b9fbb-6e05-42e3-96c1-abe2cd48t34",
+        "displayName": "car",
+        "wikiDataId": "Q1420",
+        "instances": [
+            {
+                "confidence": 0.427,
+                "adjustedStart": "0:00:00",
+                "adjustedEnd": "0:00:14.24",
+                "start": "0:00:00",
+                "end": "0:00:14.24"
+            }
+        ]
+    }
+]
+```
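As an illustration only (not from the article), here's a minimal sketch that reads a downloaded insights.json and prints each detected object with its instances; the file name and the videos/insights nesting reflect the typical index shape and are assumptions.

```python
# A minimal sketch; the file name and the "videos"/"insights" nesting are
# assumptions about the downloaded index shape.
import json

with open("insights.json") as f:
    index = json.load(f)

for video in index.get("videos", []):
    for obj in video.get("insights", {}).get("detectedObjects", []):
        print(f'{obj["id"]}: {obj["displayName"]} ({obj["type"]})')
        for instance in obj.get("instances", []):
            print(f'  {instance["start"]} -> {instance["end"]}'
                  f' (confidence {instance["confidence"]})')
```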
+
+## Try object detection
+
+You can try out object detection with the web portal or with the API.
+
+## [Web Portal](#tab/webportal)
+
+Once you have uploaded a video, you can view the insights. On the insights tab, you can view the list of objects detected and their main instances.
+
+### Insights
+Select the **Insights** tab. The objects are in descending order of the number of appearances in the video.
++
+### Timeline
+Select the **Timeline** tab.
++
+Under the **Timeline** tab, all object detections are displayed according to the time of appearance. When you hover over a specific detection, it shows the detection's certainty percentage.
+
+### Player
+
+The player automatically marks the detected object with a bounding box. The selected object from the insights pane is highlighted in blue, with the object's type and serial number also displayed.
+
+Filter the bounding boxes around objects by selecting the bounding box icon on the player.
++
+Then, select or deselect the detected objects checkboxes.
++
+Download the insights by selecting **Download** and then **Insights (JSON)**.
+
+## [API](#tab/api)
+
+When you use the [Upload](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Upload-Video) request with the standard or advanced video presets, object detection is included in the indexing.
+
+To examine object detection more thoroughly, use [Get Video Index](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Get-Video-Index).
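As a rough sketch (not an official sample), the following call retrieves the index with the requests library; location, account ID, video ID, and access token are placeholders you obtain from your Video Indexer account.

```python
# A minimal sketch; all identifiers below are placeholders.
import requests

LOCATION = "trial"               # or an Azure region, for example, westus2
ACCOUNT_ID = "<account-id>"
VIDEO_ID = "<video-id>"
ACCESS_TOKEN = "<access-token>"  # from the Get Video Access Token API

url = (f"https://api.videoindexer.ai/{LOCATION}/Accounts/{ACCOUNT_ID}"
       f"/Videos/{VIDEO_ID}/Index")
response = requests.get(url, params={"accessToken": ACCESS_TOKEN})
response.raise_for_status()

index = response.json()
detected = index["videos"][0]["insights"].get("detectedObjects", [])
print(f"{len(detected)} objects detected")
```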
+++
+## Supported objects
+
+ :::column:::
+ - airplane
+ - apple
+ - backpack
+ - banana
+ - baseball bat
+ - baseball glove
+ - bed
+ - bicycle
+ - bottle
+ - bowl
+ - broccoli
+ - bus
+ - cake
+ :::column-end:::
+ :::column:::
+ - car
+ - carrot
+ - cell phone
+ - chair
+ - clock
+ - computer mouse
+ - couch
+ - cup
+ - dining table
+ - donut
+ - fire hydrant
+ - fork
+ - frisbee
+ :::column-end:::
+ :::column:::
+ - handbag
+ - hot dog
+ - kite
+ - knife
+ - laptop
+ - microwave
+ - motorcycle
+ - necktie
+ - orange
+ - oven
+ - parking meter
+ - pizza
+ - potted plant
+ :::column-end:::
+ :::column:::
+ - refrigerator
+ - remote
+ - sandwich
+ - scissors
+ - skateboard
+ - skis
+ - snowboard
+ - spoon
+ - sports ball
+ - suitcase
+ - surfboard
+ - teddy bear
+ - television
+ :::column-end:::
+ :::column:::
+ - tennis racket
+ - toaster
+ - toilet
+ - toothbrush
+ - traffic light
+ - train
+ - umbrella
+ - vase
+ - wine glass
+ :::column-end:::
+
+## Limitations
+
+- Up to 20 detections per frame for standard and advanced processing, and 35 tracks per class.
+- The video area shouldn't exceed 1920 x 1080 pixels.
+- Object size shouldn't be greater than 90 percent of the frame.
+- A high frame rate (> 30 FPS) may result in slower indexing, with little added value to the quality of the detection and tracking.
+- Other factors that may affect the accuracy of the object detection include low light conditions, camera motion, and occlusion.
backup Backup Azure Delete Vault https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-delete-vault.md
To delete a vault, follow these steps:
Alternately, go to the blades manually by following the steps below. -- <a id="portal-mua">**Step 2:**</a> If Multi-User Authorization (MUA) is enabled, seek necessary permissions from the security administrator before vault deletion. [Learn more](./multi-user-authorization.md#authorize-critical-protected-operations-using-azure-ad-privileged-identity-management)
+- <a id="portal-mua">**Step 2:**</a> If Multi-User Authorization (MUA) is enabled, seek necessary permissions from the security administrator before vault deletion. [Learn more](./multi-user-authorization.md#authorize-critical-protected-operations-using-azure-active-directory-privileged-identity-management)
- <a id="portal-disable-soft-delete">**Step 3:**</a> Disable the soft delete and Security features
If you're sure that all the items backed up in the vault are no longer required
Follow these steps: -- **Step 1:** Seek the necessary permissions from the security administrator to delete the vault if Multi-User Authorization has been enabled against the vault. [Learn more](./multi-user-authorization.md#authorize-critical-protected-operations-using-azure-ad-privileged-identity-management)
+- **Step 1:** Seek the necessary permissions from the security administrator to delete the vault if Multi-User Authorization has been enabled against the vault. [Learn more](./multi-user-authorization.md#authorize-critical-protected-operations-using-azure-active-directory-privileged-identity-management)
- <a id="powershell-install-az-module">**Step 2:**</a> Upgrade to PowerShell 7 version by performing these steps:
backup Backup Azure Enhanced Soft Delete About https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-enhanced-soft-delete-about.md
Title: Overview of enhanced soft delete for Azure Backup (preview)
+ Title: Overview of enhanced soft delete for Azure Backup
description: This article gives an overview of enhanced soft delete for Azure Backup. Previously updated : 07/27/2023 Last updated : 09/11/2023
-# About Enhanced soft delete for Azure Backup (preview)
+# About enhanced soft delete for Azure Backup
[Soft delete](backup-azure-security-feature-cloud.md) for Azure Backup enables you to recover your backup data even after it's deleted. This is useful when:
*Basic soft delete* has been available for Recovery Services vaults for a while; *enhanced soft delete* now provides additional data protection capabilities.
+>[!Note]
+>Once you enable enhanced soft delete by setting the soft delete state to *always-on*, you can't disable it for that vault.
+ ## What's soft delete? [Soft delete](backup-azure-security-feature-cloud.md) primarily delays permanent deletion of backup data and gives you an opportunity to recover data after deletion. This deleted data is retained for a specified duration (*14*-*180* days) called soft delete retention period.
The key benefits of enhanced soft delete are:
- **Soft delete across workloads**: Enhanced soft delete applies to all vaulted datasources alike and is supported for Recovery Services vaults and Backup vaults. Enhanced soft delete also applies to operational backups of disks and VM backup snapshots used for instant restores. However, unlike vaulted backups, these snapshots can be directly accessed and deleted before the soft delete period expires. Enhanced soft delete is currently not supported for operational backup for Blobs and Azure Files. - **Soft delete of recovery points**: This feature allows you to recover data from recovery points that might have been deleted due to making changes in a backup policy or changing the backup policy associated with a backup item. Soft delete of recovery points isn't supported for log recovery points in SQL and SAP HANA workloads. [Learn more](manage-recovery-points.md#impact-of-expired-recovery-points-for-items-in-soft-deleted-state).
-## Supported regions
--- Enhanced soft delete is available in all Azure public regions.-- Soft delete of recovery points is now available in all Azure public regions.- ## Supported scenarios - Enhanced soft delete is supported for Recovery Services vaults and Backup vaults. Also, it's supported for new and existing vaults.-- All existing Recovery Services vaults in the preview regions are upgraded with an option to use enhanced soft delete. - Enhanced soft delete applies to all vaulted workloads alike and is supported for Recovery Services vaults and Backup vaults. However, it currently doesn't support operational tier workloads, such as Azure Files backup, Operational backup for Blobs, Disk and VM snapshot backups. ## States of soft delete settings
You can also use multi-user authorization (MUA) to add an additional layer of pr
## Next steps
-[Configure and manage enhanced soft delete for Azure Backup (preview)](backup-azure-enhanced-soft-delete-configure-manage.md).
+[Configure and manage enhanced soft delete for Azure Backup](backup-azure-enhanced-soft-delete-configure-manage.md).
backup Backup Azure Enhanced Soft Delete Configure Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-enhanced-soft-delete-configure-manage.md
Title: Configure and manage enhanced soft delete for Azure Backup (preview)
+ Title: Configure and manage enhanced soft delete for Azure Backup
description: This article describes about how to configure and manage enhanced soft delete for Azure Backup. Previously updated : 06/12/2023 Last updated : 09/11/2023
-# Configure and manage enhanced soft delete in Azure Backup (preview)
+# Configure and manage enhanced soft delete in Azure Backup
This article describes how to configure and use enhanced soft delete to protect your data and recover backups, if they're deleted.
+>[!Note]
+>Once you enable enhanced soft delete by setting the soft delete state to *always-on*, you can't disable it for that vault.
+ ## Before you start - Enhanced soft delete is supported for Recovery Services vaults and Backup vaults.-- It's supported for new and existing vaults.-- All existing Recovery Services vaults in the [preview regions](backup-azure-enhanced-soft-delete-about.md#supported-scenarios) are upgraded with an option to use enhanced soft delete.-- For hybrid backups (using MARS, DPM, or MABS), enabling always-on soft delete will disallow server deregistration and deletion of backups via the Azure portal. If you don't want to retain the backed-up data, we recommend you not to enable *always-on soft-delete* for the vault or perform *stop protection with delete data* before the server is decommissioned.
+- Enhanced soft delete applies to all vaulted workloads alike in Recovery Services vaults and Backup vaults. However, it currently doesn't support operational tier workloads, such as Azure Files backup, Operational backup for Blobs, and Disk and VM snapshot backups.
+- For hybrid backups (using MARS, DPM, or MABS), enabling always-on soft delete will disallow server deregistration and deletion of backups via the Azure portal. If you don't want to retain the backed-up data, we recommend that you not enable *always-on soft delete* for the vault, or that you perform *stop protection with delete data* before the server is decommissioned.
+- There's no retention cost for the default soft delete duration of 14 days for vaulted backup, after which it incurs regular backup cost.
## Enable soft delete with always-on state
Here are some points to note:
## Delete recovery points
-Soft delete of recovery points helps you recover any recovery points that are accidentally or maliciously deleted for some operations that could lead to deletion of one or more recovery points. Recovery points don't move to soft-deleted state immediately and have a *24 hour SLA* (same as before). The example here shows recovery points that were deleted as part of backup policy modifications.
-
-[Soft delete of recovery points](backup-azure-enhanced-soft-delete-about.md#soft-delete-of-recovery-points), a part of enhanced soft delete is currently available in selected Azure regions. [Learn more](backup-azure-enhanced-soft-delete-about.md#supported-regions) on the region availability.
+[Soft delete of recovery points](backup-azure-enhanced-soft-delete-about.md#soft-delete-of-recovery-points) is a part of enhanced soft delete that helps you recover recovery points that are accidentally or maliciously deleted by operations that could remove one or more recovery points, such as backup policy changes. Recovery points don't move to the soft-deleted state immediately and have a *24-hour SLA* (same as before). The example here shows recovery points that were deleted as part of backup policy modifications.
Follow these steps:
Follow these steps:
The impacted recovery points are labeled as *being soft deleted* in the **Recovery type** column and will be retained as per the soft delete retention of the vault.
- :::image type="content" source="./media/backup-azure-enhanced-soft-delete/select-restore-point-for-soft-delete.png" alt-text="Screenshot shows to filter recovery points for soft delete.":::
+ :::image type="content" source="./media/backup-azure-enhanced-soft-delete/select-restore-point-for-soft-delete.png" alt-text="Screenshot shows how to filter recovery points for soft delete.":::
## Undelete recovery points
-You can *undelete* recovery points that are in soft deleted state so that they can last till their expiry by modifying the policy again to increase the retention of backups.
+You can *undelete* recovery points that are in soft deleted state so that they can last until their expiry by modifying the policy again to increase the retention of backups.
Follow these steps:
Follow these steps:
## Next steps
-[About Enhanced soft delete for Azure Backup (preview)](backup-azure-enhanced-soft-delete-about.md).
+[About enhanced soft delete for Azure Backup](backup-azure-enhanced-soft-delete-about.md).
backup Backup Azure Enhanced Soft Delete Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-enhanced-soft-delete-tutorial.md
+
+ Title: Tutorial - Recover soft deleted data and recovery points using enhanced soft delete in Azure Backup
+description: Learn how to enable enhanced soft delete and recover your backup data and recovery points if they're deleted.
+ Last updated : 09/11/2023+++++
+# Tutorial: Recover soft deleted data and recovery points using enhanced soft delete in Azure Backup
+
+This tutorial describes how to enable enhanced soft delete and recover your backup data and recovery points if they're deleted.
+
+[Enhanced soft delete](backup-azure-enhanced-soft-delete-about.md) improves on the [soft delete](backup-azure-security-feature-cloud.md) capability in Azure Backup, enabling you to recover your backup data in case of accidental or malicious deletion. With enhanced soft delete, you can make soft delete always-on, protecting it from being disabled by malicious actors, so it provides better protection for your backups against various threats. This feature also lets you specify a customizable retention period for which soft deleted data must be retained.
+
+>[!Note]
+>Once you enable the *always-on* state for soft delete, you can't disable it for that vault.
+
+## Before you start
+
+- Enhanced soft delete is supported for Recovery Services vaults and Backup vaults.
+- Enhanced soft delete applies to all vaulted workloads alike in Recovery Services vaults and Backup vaults. However, it currently doesn't support operational tier workloads, such as Azure Files backup, Operational backup for Blobs, and Disk and VM snapshot backups.
+- For hybrid backups (using MARS, DPM, or MABS), enabling always-on soft delete will disallow server deregistration and deletion of backups via the Azure portal. If you don't want to retain the backed-up data, we recommend that you not enable *always-on soft delete* for the vault, or that you perform *stop protection with delete data* before the server is decommissioned.
+- There's no retention cost for the default soft delete duration of 14 days for vaulted backup, after which it incurs regular backup cost.
+
+## Enable soft delete with always-on state
+
+Soft delete is enabled by default for all new vaults you create. To make enabled settings irreversible, select **Enable Always-on Soft Delete**.
+
+**Choose a vault**
+
+# [Recovery Services vault](#tab/recovery-services-vault)
+
+Follow these steps:
+
+1. Go to **Recovery Services vault** > **Properties**.
+
+1. Under **Soft Delete**, select **Update** to modify the soft delete setting.
+
+ :::image type="content" source="./media/backup-azure-enhanced-soft-delete/open-soft-delete-properties-blade-inline.png" alt-text="Screenshot showing you how to open Soft Delete blade." lightbox="./media/backup-azure-enhanced-soft-delete/open-soft-delete-properties-blade-expanded.png":::
+
+ The soft delete settings for cloud and hybrid workloads are already enabled, unless you've explicitly disabled them earlier.
+
+1. If soft delete settings are disabled for any workload type in the **Soft Delete** blade, select the respective checkboxes to enable them.
+
+ >[!Note]
+    >Enabling soft delete for hybrid workloads also enables other security settings, such as multifactor authentication and alert notifications for backups of workloads running on on-premises servers.
+
+1. Choose the number of days between *14* and *180* to specify the soft delete retention period.
+
+ >[!Note]
+ >- There is no cost for soft delete for *14* days. However, deleted instances in soft delete state are charged if the soft delete retention period is *>14* days. Learn about [pricing details](backup-azure-enhanced-soft-delete-about.md#pricing).
+ >- Once configured, the soft delete retention period applies to all soft deleted instances of cloud and hybrid workloads in the vault.
+
+1. Select the **Enable Always-on Soft delete** checkbox to enable soft delete and make it irreversible.
+
+    :::image type="content" source="./media/backup-azure-enhanced-soft-delete/enable-always-on-soft-delete.png" alt-text="Screenshot showing you how to enable the always-on state of soft delete.":::
+
+ >[!Note]
+ >If you opt for *Enable Always-on Soft Delete*, select the *confirmation checkbox* to proceed. Once enabled, you can't disable the settings for this vault.
+
+1. Select **Update** to save the changes.
+
+# [Backup vault](#tab/backup-vault)
+
+Follow these steps:
+
+1. Go to **Backup vault** > **Properties**.
+
+1. Under **Soft Delete**, select **Update** to modify the soft delete setting.
+
+ :::image type="content" source="./media/backup-azure-enhanced-soft-delete/open-soft-delete-properties.png" alt-text="Screenshot showing you how to open soft delete blade for Backup vault.":::
+
+ Soft delete is enabled by default with the checkboxes selected.
+
+1. If you've explicitly disabled soft delete for any workload type in the **Soft Delete** blade earlier, select the checkboxes to enable them.
+
+1. Choose the number of days between *14* and *180* to specify the soft delete retention period.
+
+ >[!Note]
+ >There is no cost for enabling soft delete for *14* days. However, you're charged for the soft delete instances if soft delete retention period is *>14* days. Learn about the [pricing details](backup-azure-enhanced-soft-delete-about.md#pricing).
+
+1. Select the **Enable Always-on Soft Delete** checkbox to enable soft delete always-on and make it irreversible.
+
+ :::image type="content" source="./media/backup-azure-enhanced-soft-delete/enable-always-on-soft-delete-backup-vault.png" alt-text="Screenshot showing you how to enable always-on state for Backup vault.":::
+
+ >[!Note]
+ >If you opt for *Enable Always-on Soft Delete*, select the *confirmation checkbox* to proceed. Once enabled, you can't disable the settings for this vault.
+
+1. Select **Update** to save the changes.
+++
+## Delete a backup item
+
+You can delete backup items/instances even if the soft delete settings are enabled. However, if soft delete is enabled, the deleted items don't get permanently deleted immediately and stay in a soft deleted state per the [configured retention period](#enable-soft-delete-with-always-on-state). Soft delete delays permanent deletion of backup data by retaining deleted data for *14*-*180* days.
+
+**Choose a vault**
+
+# [Recovery Services vault](#tab/recovery-services-vault)
+
+Follow these steps:
+
+1. Go to the *backup item* that you want to delete.
+1. Select **Stop backup**.
+1. On the **Stop Backup** page, select **Delete Backup Data** from the drop-down list to delete all backups for the instance.
+1. Provide the applicable information, and then select **Stop backup** to delete all backups for the instance.
+
+ Once the *delete* operation completes, the backup item is moved to soft deleted state. In **Backup items**, the soft deleted item is marked in *Red*, and the last backup status shows that backups are disabled for the item.
+
+ :::image type="content" source="./media/backup-azure-enhanced-soft-delete/soft-deleted-backup-items-marked-red-inline.png" alt-text="Screenshot showing the soft deleted backup items marked red." lightbox="./media/backup-azure-enhanced-soft-delete/soft-deleted-backup-items-marked-red-expanded.png":::
+
+ In the item details, the soft deleted item shows no recovery point. Also, a notification appears to mention the state of the item, and the number of days left before the item is permanently deleted. You can select **Undelete** to recover the soft deleted items.
+
+ :::image type="content" source="./media/backup-azure-enhanced-soft-delete/soft-deleted-item-shows-no-recovery-point-inline.png" alt-text="Screenshot showing the soft deleted backup item that shows no recovery point." lightbox="./media/backup-azure-enhanced-soft-delete/soft-deleted-item-shows-no-recovery-point-expanded.png":::
+
+>[!Note]
+>When the item is in soft deleted state, no recovery points are cleaned on their expiry as per the backup policy.
+
+# [Backup vault](#tab/backup-vault)
+
+Follow these steps:
+
+1. In the **Backup center**, go to the *backup instance* that you want to delete.
+
+1. Select **Stop backup**.
+
+ :::image type="content" source="./media/backup-azure-enhanced-soft-delete/stop-backup-for-backup-vault-items-inline.png" alt-text="Screenshot showing how to initiate the stop backup process for backup items in Backup vault." lightbox="./media/backup-azure-enhanced-soft-delete/stop-backup-for-backup-vault-items-expanded.png":::
+
+ You can also select **Delete** in the instance view to delete backups.
+
+1. On the **Stop Backup** page, select **Delete Backup Data** from the drop-down list to delete all backups for the instance.
+
+1. Provide the applicable information, and then select **Stop backup** to initiate the deletion of the backup instance.
+
+ :::image type="content" source="./media/backup-azure-enhanced-soft-delete/start-stop-backup-process.png" alt-text="Screenshot showing how to stop the backup process.":::
+
+ Once deletion completes, the instance appears as *Soft deleted*.
+
+ :::image type="content" source="./media/backup-azure-enhanced-soft-delete/deleted-backup-items-marked-soft-deleted-inline.png" alt-text="Screenshot showing the deleted backup items marked as Soft Deleted." lightbox="./media/backup-azure-enhanced-soft-delete/deleted-backup-items-marked-soft-deleted-expanded.png":::
+++
+## Recover a soft-deleted backup item
+
+If a backup item/instance is soft deleted, you can recover it before it's permanently deleted.
+
+**Choose a vault**
+
+# [Recovery Services vault](#tab/recovery-services-vault)
+
+Follow these steps:
+
+1. Go to the *backup item* that you want to retrieve from the *soft deleted* state.
+
+ You can also use the **Backup center** to go to the item by applying the filter **Protection status == Soft deleted** in the *Backup instances*.
+
+1. Select **Undelete** corresponding to the *soft deleted item*.
+
+ :::image type="content" source="./media/backup-azure-enhanced-soft-delete/start-recover-backup-items-inline.png" alt-text="Screenshot showing how to start recovering backup items from soft delete state." lightbox="./media/backup-azure-enhanced-soft-delete/start-recover-backup-items-expanded.png":::
+
+1. In the **Undelete** *backup item* blade, select **Undelete** to recover the deleted item.
+
+ All recovery points now appear and the backup item changes to *Stop protection with retain data* state. However, backups don't resume automatically. To continue taking backups for this item, select **Resume backup**.
+
+# [Backup vault](#tab/backup-vault)
+
+Follow these steps:
+
+1. Go to the *deleted backup instance* that you want to recover.
+
+ You can also use the **Backup center** to go to the *instance* by applying the filter **Protection status == Soft deleted** in the *Backup instances*.
+
+1. Select **Undelete** corresponding to the *soft deleted instance*.
+
+ :::image type="content" source="./media/backup-azure-enhanced-soft-delete/start-recover-deleted-backup-vault-items-inline.png" alt-text="Screenshot showing how to start recovering deleted backup vault items from soft delete state." lightbox="./media/backup-azure-enhanced-soft-delete/start-recover-deleted-backup-vault-items-expanded.png":::
+
+1. In the **Undelete** *backup instance* blade, select **Undelete** to recover the item.
+
+ All recovery points appear and the backup item changes to *Stop protection with retain data* state. However, backups don't resume automatically. To continue taking backups for this instance, select **Resume backup**.
+
+>[!Note]
+>Undeleting a soft deleted item reinstates the backup item into Stop backup with retain data state and doesn't automatically restart scheduled backups. You need to explicitly [resume backups](backup-azure-manage-vms.md#resume-protection-of-a-vm) if you want to continue taking new backups. Resuming backup will also clean up expired recovery points, if any.
++++
+>[!Note]
+>MUA for soft delete is currently supported for Recovery Services vaults only.
+
+## Next steps
+
+- Learn more about [enhanced soft delete for Azure Backup](backup-azure-enhanced-soft-delete-about.md).
+- Learn more about [soft delete of recovery points](backup-azure-enhanced-soft-delete-about.md#soft-delete-of-recovery-points).
backup Enable Multi User Authorization Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/enable-multi-user-authorization-quickstart.md
Title: Quickstart - Multi-user authorization using Resource Guard description: In this quickstart, learn how to use Multi-user authorization to protect against unauthorized operation.- Previously updated : 05/05/2022+ Last updated : 09/25/2023
-# Quickstart: Enable protection using Multi-user authorization on Recovery Services vault in Azure Backup
-
-Multi-user authorization (MUA) for Azure Backup allows you to add an additional layer of protection to critical operations on your Recovery Services vaults. For MUA, Azure Backup uses another Azure resource called the Resource Guard to ensure critical operations are performed only with applicable authorization. Learn about [MUA concepts](multi-user-authorization-concept.md).
+# Quickstart: Enable protection using Multi-user authorization in Azure Backup
This quickstart describes how to enable Multi-user authorization (MUA) for Azure Backup.
+Multi-user authorization (MUA) for Azure Backup allows you to add an additional layer of protection to critical operations on your Recovery Services vaults and Backup vaults. For MUA, Azure Backup uses another Azure resource called the Resource Guard to ensure critical operations are performed only with applicable authorization.
+
+>[!Note]
+>MUA is now generally available for both Recovery Services vaults and Backup vaults.
+
+Learn about [MUA concepts](multi-user-authorization-concept.md).
+ ## Prerequisites Before you start:
+**Choose a vault**
+
+# [Recovery Services vault](#tab/recovery-services-vault)
- Ensure the Resource Guard and the Recovery Services vault are in the same Azure region.
- Ensure the Backup admin does **not** have **Contributor** permissions on the Resource Guard. You can choose to have the Resource Guard in another subscription of the same directory or in another directory to ensure maximum isolation.
- Ensure that your subscriptions containing the Recovery Services vault as well as the Resource Guard (in different subscriptions or tenants) are registered to use the **Microsoft.RecoveryServices** provider. For more details, see [Azure resource providers and types](../azure-resource-manager/management/resource-providers-and-types.md#register-resource-provider-1).
- Ensure that you [create a Resource Guard](multi-user-authorization.md#create-a-resource-guard) in a different subscription/tenant than that of the vault, located in the same region.
- Ensure that you [assign permissions to the Backup admin on the Resource Guard to enable MUA](multi-user-authorization.md#assign-permissions-to-the-backup-admin-on-the-resource-guard-to-enable-mua).
+# [Backup vault](#tab/backup-vault)
+
+- Ensure the Resource Guard and the Backup vault are in the same Azure region.
+- Ensure the Backup admin does **not** have **Contributor** permissions on the Resource Guard. You can choose to have the Resource Guard in another subscription of the same directory or in another directory to ensure maximum isolation.
+- Ensure that your subscriptions containing the Backup vault as well as the Resource Guard (in different subscriptions or tenants) are registered to use the **Microsoft.DataProtection** provider. For more information, see [Azure resource providers and types](../azure-resource-manager/management/resource-providers-and-types.md#register-resource-provider-1).
+++

## Enable MUA
-The Backup admin now has the Reader role on the Resource Guard and can easily enable multi-user authorization on vaults managed by them.
+Once the Backup admin has the Reader role on the Resource Guard, they can enable multi-user authorization on the vaults they manage by following these steps:
+
+**Choose a vault**
-Follow these steps:
+# [Recovery Services vault](#tab/recovery-services-vault)
-1. Go to the Recovery Services vault.
-1. Go to **Properties** on the left navigation panel, then to **Multi-User Authorization** and click **Update**.
-1. The option to enable MUA appears. Choose a Resource Guard using one of the following ways:
+1. Go to the Recovery Services vault for which you want to configure MUA.
- 1. You can either specify the URI of the Resource Guard, make sure you specify the URI of a Resource Guard you have **Reader** access to and that is the same regions as the vault. You can find the URI (Resource Guard ID) of the Resource Guard in its **Overview** screen:
+1. On the left pane, select **Properties**.
- 1. Or, you can select the Resource Guard from the list of Resource Guards you have **Reader** access to, and those available in the region.
+1. Go to **Multi-User Authorization** and select **Update**.
+
+1. To enable MUA and choose a Resource Guard, perform one of the following actions:
+
+   - You can specify the URI of the Resource Guard. Make sure you specify the URI of a Resource Guard that you have **Reader** access to and that is in the same region as the vault. You can find the URI (Resource Guard ID) of the Resource Guard on its **Overview** screen.
+
+   - Or, you can select the Resource Guard from the list of Resource Guards that you have **Reader** access to and that are available in the region.
1. Click **Select Resource Guard**
- 1. Click on the dropdown and select the directory the Resource Guard is in.
- 1. Click **Authenticate** to validate your identity and access.
+ 1. Select the dropdown list and choose the directory the Resource Guard is in.
+ 1. Select **Authenticate** to validate your identity and access.
1. After authentication, choose the **Resource Guard** from the list displayed.
-1. Click **Save** once done to enable MUA.
+1. Select **Save** to enable MUA.
+
+# [Backup vault](#tab/backup-vault)
+
+1. Go to the Backup vault for which you want to configure MUA.
+1. On the left panel, select **Properties**.
+1. Go to **Multi-User Authorization** and select **Update**.
+
+1. To enable MUA and choose a Resource Guard, perform one of the following actions:
+
+   - You can specify the URI of the Resource Guard. Ensure that you specify the URI of a Resource Guard you have **Reader** access to and that it's in the same region as the vault. You can find the URI (Resource Guard ID) of the Resource Guard on its **Overview** page.
+
+   - Or, you can select the Resource Guard from the list of Resource Guards that you have **Reader** access to and that are available in the region.
+
+ 1. Click **Select Resource Guard**.
+    1. Select the drop-down list and choose the directory the Resource Guard is in.
+ 1. Select **Authenticate** to validate your identity and access.
+ 1. After authentication, choose the **Resource Guard** from the list displayed.
+
+1. Select **Save** to enable MUA.
++ ## Next steps - [Protected operations using MUA](multi-user-authorization.md?pivots=vaults-recovery-services-vault#protected-operations-using-mua)-- [Authorize critical (protected) operations using Azure AD Privileged Identity Management](multi-user-authorization.md#authorize-critical-protected-operations-using-azure-ad-privileged-identity-management)
+- [Authorize critical (protected) operations using Azure Active Directory Privileged Identity Management](multi-user-authorization.md#authorize-critical-protected-operations-using-azure-active-directory-privileged-identity-management)
- [Performing a protected operation after approval](multi-user-authorization.md#performing-a-protected-operation-after-approval)-- [Disable MUA on a Recovery Services vault](multi-user-authorization.md#disable-mua-on-a-recovery-services-vault)
+- Disable MUA on a [Recovery Services vault](multi-user-authorization.md?tabs=azure-portal&pivots=vaults-recovery-services-vault#disable-mua-on-a-recovery-services-vault) or a [Backup vault](multi-user-authorization.md?tabs=azure-portal&pivots=vaults-backup-vault#disable-mua-on-a-backup-vault).
backup Multi User Authorization Concept https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/multi-user-authorization-concept.md
Title: Multi-user authorization using Resource Guard description: An overview of Multi-user authorization using Resource Guard. Previously updated : 09/15/2022 Last updated : 09/25/2023
-# Multi-user authorization using Resource Guard
+# About Multi-user authorization using Resource Guard
Multi-user authorization (MUA) for Azure Backup allows you to add an additional layer of protection to critical operations on your Recovery Services vaults and Backup vaults. For MUA, Azure Backup uses another Azure resource called the Resource Guard to ensure critical operations are performed only with applicable authorization. >[!Note]
->Multi-user authorization using Resource Guard for Backup vault is in preview.
+>Multi-user authorization using Resource Guard for Backup vault is now generally available.
## How does MUA for Backup work?
Modify protection (reduced retention) | Optional
Stop protection with delete data | Optional Change MARS security PIN | Optional
-# [Backup vault (preview)](#tab/backup-vault)
+# [Backup vault](#tab/backup-vault)
**Operation** | **Mandatory/ Optional** |
The following table lists the scenarios for creating your Resource Guard and vau
**Usage scenario** | **Protection due to MUA** | **Ease of implementation** | **Notes** | | | |
-Vault and Resource Guard are **in the same subscription.** </br> The Backup admin does't have access to the Resource Guard. | Least isolation between the Backup admin and the Security admin. | Relatively easy to implement since only one subscription is required. | Resource level permissions/ roles need to be ensured are correctly assigned.
+Vault and Resource Guard are **in the same subscription.** </br> The Backup admin doesn't have access to the Resource Guard. | Least isolation between the Backup admin and the Security admin. | Relatively easy to implement since only one subscription is required. | Ensure that resource-level permissions/roles are correctly assigned.
Vault and Resource Guard are **in different subscriptions but the same tenant.** </br> The Backup admin doesn't have access to the Resource Guard or the corresponding subscription. | Medium isolation between the Backup admin and the Security admin. | Relatively medium ease of implementation since two subscriptions (but a single tenant) are required. | Ensure that permissions/roles are correctly assigned for the resource or the subscription.
Vault and Resource Guard are **in different tenants.** </br> The Backup admin doesn't have access to the Resource Guard, the corresponding subscription, or the corresponding tenant. | Maximum isolation between the Backup admin and the Security admin, hence, maximum security. | Relatively difficult to test since it requires two tenants or directories. | Ensure that permissions/roles are correctly assigned for the resource, the subscription, or the directory.
backup Multi User Authorization Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/multi-user-authorization-tutorial.md
Title: Tutorial - Enable Multi-user authorization using Resource Guard
-description: In this tutorial, you'll learn about how create a resource guard and enable Multi-user authorization on Recovery Services vault for Azure Backup.
+description: In this tutorial, you'll learn how to create a Resource Guard and enable Multi-user authorization on a Recovery Services vault and a Backup vault for Azure Backup.
Previously updated : 05/05/2022 Last updated : 09/25/2023 # Tutorial: Create a Resource Guard and enable Multi-user authorization in Azure Backup
-This tutorial describes how to create a Resource Guard and enable Multi-user authorization on a Recovery Services vault. This adds an additional layer of protection to critical operations on your Recovery Services vaults.
-
-This tutorial includes the following:
-
->[!div class="checklist"]
->- Prerequisies
->- Create a Resource Guard
->- Enable MUA on a Recovery Services vault
+This tutorial describes how to create a Resource Guard and enable Multi-user authorization (MUA) on a Recovery Services vault and Backup vault. This adds an additional layer of protection to critical operations on your vaults.
>[!NOTE]
-> Multi-user authorization for Azure Backup is available in all public Azure regions.
+>- Multi-user authorization is now generally available for both Recovery Services vaults and Backup vaults.
+>- Multi-user authorization for Azure Backup is available in all public Azure regions.
+
+Learn about [MUA concepts](multi-user-authorization-concept.md).
## Prerequisites

Before you start:
+**Choose a vault**
+
+# [Recovery Services vault](#tab/recovery-services-vault)
+
- Ensure the Resource Guard and the Recovery Services vault are in the same Azure region.
- Ensure the Backup admin does **not** have **Contributor** permissions on the Resource Guard. You can choose to have the Resource Guard in another subscription of the same directory or in another directory to ensure maximum isolation.
- Ensure that the subscriptions containing the Recovery Services vault as well as the Resource Guard (in different subscriptions or tenants) are registered to use the **Microsoft.RecoveryServices** provider. For more details, see [Azure resource providers and types](../azure-resource-manager/management/resource-providers-and-types.md#register-resource-provider-1).
+# [Backup vault](#tab/backup-vault)
+
+- Ensure the Resource Guard and the Backup vault are in the same Azure region.
+- Ensure the Backup admin does **not** have **Contributor** permissions on the Resource Guard. You can choose to have the Resource Guard in another subscription of the same directory or in another directory to ensure maximum isolation.
+- Ensure that the subscriptions containing the Backup vault and the Resource Guard (in different subscriptions or tenants) are registered to use the **Microsoft.DataProtection** provider. For more information, see [Azure resource providers and types](../azure-resource-manager/management/resource-providers-and-types.md#register-resource-provider-1). A scripted registration is sketched after this list.
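If you prefer to script this prerequisite, the provider registration can be done with the Az PowerShell module. This is a minimal sketch; the subscription IDs are placeholders, and the Recovery Services line applies only if you also use Recovery Services vaults.

```powershell
# Register the resource providers required for MUA in each subscription involved.
# Subscription IDs are placeholders.
Set-AzContext -SubscriptionId "<vault-subscription-id>"
Register-AzResourceProvider -ProviderNamespace Microsoft.DataProtection    # Backup vault
Register-AzResourceProvider -ProviderNamespace Microsoft.RecoveryServices  # Recovery Services vault

Set-AzContext -SubscriptionId "<resource-guard-subscription-id>"
Register-AzResourceProvider -ProviderNamespace Microsoft.DataProtection

# Verify the registration state.
Get-AzResourceProvider -ProviderNamespace Microsoft.DataProtection |
    Select-Object ProviderNamespace, RegistrationState
```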
+++

Learn about various [MUA usage scenarios](multi-user-authorization-concept.md#usage-scenarios).

## Create a Resource Guard
+The **Security admin** creates the Resource Guard. We recommend that you create it in a **different subscription** or a **different tenant** as the vault. However, it should be in the **same region** as the vault.
+ >[!Note]
->The **Security admin** creates the Resource Guard. We recommend that you create it in a **different subscription** or a **different tenant** as the vault. However, it should be in the **same region** as the vault. The Backup admin must **NOT** have *contributor* access on the Resource Guard or the subscription that contains it.
+> The Backup admin must **NOT** have *contributor* access on the Resource Guard or the subscription that contains it.
>
->Create the Resource Guard in a tenant different from the vault tenant.
-Follow these steps:
+**Choose a vault**
+
+# [Recovery Services vault](#tab/recovery-services-vault)
+
+To create the Resource Guard in a tenant different from the vault tenant as a Security admin, follow these steps:
1. In the Azure portal, go to the directory under which you wish to create the Resource Guard.
1. Search for **Resource Guards** in the search bar and select the corresponding item from the drop-down.
- - Click **Create** to start creating a Resource Guard.
- - In the create blade, fill in the required details for this Resource Guard.
+ 1. Select **Create** to start creating a Resource Guard.
+ 1. In the **Create** blade, fill in the required details for this Resource Guard.
- Make sure the Resource Guard is in the same Azure region as the Recovery Services vault.
- Also, it's helpful to add a description of how to get or request access to perform actions on associated vaults when needed. This description also appears in the associated vaults to guide the backup admin on getting the required permissions. You can edit the description later if needed, but having a well-defined description at all times is encouraged.
Follow these steps:
You can also [select the operations to be protected after creating the resource guard](#select-operations-to-protect-using-resource-guard).
1. Optionally, add any tags to the Resource Guard as per the requirements.
-1. Click **Review + Create**.
-1. Follow notifications for status and successful creation of the Resource Guard.
+1. Select **Review + Create** and then follow notifications for status and successful creation of the Resource Guard.
+
+# [Backup vault](#tab/backup-vault)
+
+To create the Resource Guard in a tenant different from the vault tenant as a Security admin, follow these steps:
+
+1. In the Azure portal, go to the directory under which you want to create the Resource Guard.
+
+1. Search for **Resource Guards** in the search bar and select the corresponding item from the dropdown list.
+
+ 1. Select **Create** to create a Resource Guard.
+ 1. In the **Create** blade, fill in the required details for this Resource Guard.
+ - Ensure that the Resource Guard is in the same Azure region as the Backup vault.
+ - Add a description on how to request access to perform actions on associated vaults when needed. This description appears in the associated vaults to guide the Backup admin on how to get the required permissions.
+
+1. On the **Protected operations** tab, select the operations you need to protect using this resource guard under the **Backup vault** tab.
+
+ Currently, the **Protected operations** tab includes only the *Delete backup instance* option to disable.
+
+ You can also [select the operations for protection after creating the resource guard](?pivots=vaults-recovery-services-vault#select-operations-to-protect-using-resource-guard).
+
+1. Optionally, add any tags to the Resource Guard as per the requirements.
+1. Select **Review + Create** and then follow the notifications to monitor the status and the successful creation of the Resource Guard.
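If you'd rather script the preceding steps, the Az.DataProtection module includes cmdlets for Resource Guards. The sketch below assumes New-AzDataProtectionResourceGuard is available in your module version; the resource group, name, and region are placeholders.

```powershell
# Requires the Az.DataProtection module (Install-Module Az.DataProtection).
# Create the Resource Guard in the Security admin's subscription; names are placeholders.
$guard = New-AzDataProtectionResourceGuard `
    -ResourceGroupName "rg-security" `
    -Name "mua-resource-guard" `
    -Location "eastus"

# The guard's ARM ID is the URI you supply later when enabling MUA on a vault.
$guard.Id
```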
++

### Select operations to protect using Resource Guard
->[!Note]
->Choose the operations you want to protect using the Resource Guard out of all supported critical operations. By default, all supported critical operations are enabled. However, you can exempt certain operations from falling under the purview of MUA using Resource Guard. The security admin can perform the following steps:
+After vault creation, the Security admin can also choose the operations for protection using the Resource Guard among all supported critical operations. By default, all supported critical operations are enabled. However, the Security admin can exempt certain operations from falling under the purview of MUA using Resource Guard.
+
+**Choose a vault**
-Follow these steps:
+# [Recovery Services vault](#tab/recovery-services-vault)
-1. In the Resource Guard created above, go to **Properties**.
+To select the operations for protection, follow these steps:
+
+1. In the Resource Guard created above, go to **Properties** > **Recovery Services vault** tab.
1. Select **Disable** for operations that you wish to exclude from being authorized using the Resource Guard.

>[!Note]
Follow these steps:
1. Optionally, you can also update the description for the Resource Guard using this blade.
1. Select **Save**.
+# [Backup vault](#tab/backup-vault)
+
+To select the operations for protection, follow these steps:
+
+1. In the Resource Guard that you've created, go to **Properties** > **Backup vault** tab.
+1. Select **Disable** for the operations that you want to exclude from being authorized.
+
+ You can't disable the **Remove MUA protection** and **Disable soft delete** operations.
+
+1. Optionally, in the **Backup vaults** tab, update the description for the Resource Guard.
+1. Select **Save**.
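The exclusion list can also be managed from PowerShell. This sketch assumes the Get-AzDataProtectionResourceGuard and Update-AzDataProtectionResourceGuard cmdlets and their -CriticalOperationExclusionList parameter; the operation name shown is illustrative, so list the guard's operations first to get the exact identifiers for your module version.

```powershell
# Inspect the critical operations the guard currently covers.
$guard = Get-AzDataProtectionResourceGuard `
    -ResourceGroupName "rg-security" -Name "mua-resource-guard"
$guard.ResourceGuardOperation | Format-Table

# Exempt an operation from MUA (the operation name here is illustrative).
Update-AzDataProtectionResourceGuard `
    -ResourceGroupName "rg-security" `
    -Name "mua-resource-guard" `
    -CriticalOperationExclusionList "deleteProtection"
```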
+++

## Assign permissions to the Backup admin on the Resource Guard to enable MUA
->[!Note]
->To enable MUA on a vault, the admin of the vault must have **Reader** role on the Resource Guard or subscription containing the Resource Guard. To assign the **Reader** role on the Resource Guard:
+To enable MUA on a vault, the Backup admin must have the **Reader** role on the Resource Guard or the subscription that contains it. The Security admin needs to assign this role to the Backup admin.
+
+**Choose a vault**
-Follow these steps:
+# [Recovery Services vault](#tab/recovery-services-vault)
+
+To assign the **Reader** role on the Resource Guard, follow these steps:
1. In the Resource Guard created above, go to the Access Control (IAM) blade, and then go to **Add role assignment**.
-1. Select **Reader** from the list of built-in roles and click **Next** on the bottom of the screen.
+1. Select **Reader** from the list of built-in roles and select **Next**.
1. Click **Select members** and add the Backup admin's email ID to add them as the **Reader**. Since the Backup admin is in another tenant in this case, they will be added as guests to the tenant containing the Resource Guard.
1. Click **Select** and then proceed to **Review + assign** to complete the role assignment.
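The same assignment can be scripted with New-AzRoleAssignment; the sign-in name and Resource Guard ID below are placeholders, and for a cross-tenant setup the Backup admin must already exist as a guest in the guard's tenant.

```powershell
# Grant the Backup admin Reader on the Resource Guard; values are placeholders.
$guardId = "/subscriptions/<sub-id>/resourceGroups/rg-security" +
           "/providers/Microsoft.DataProtection/resourceGuards/mua-resource-guard"

New-AzRoleAssignment `
    -SignInName "backupadmin@contoso.com" `
    -RoleDefinitionName "Reader" `
    -Scope $guardId
```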
-## Enable MUA on a Recovery Services vault
+# [Backup vault](#tab/backup-vault)
->[!Note]
->The Backup admin now has the Reader role on the Resource Guard and can easily enable multi-user authorization on vaults managed by them and performs the following steps.
+To assign the **Reader** role on the Resource Guard, follow these steps:
+
+1. In the Resource Guard created above, go to the **Access Control (IAM)** blade, and then go to **Add role assignment**.
+
+
+1. Select **Reader** from the list of built-in roles and select **Next**.
+
+1. Click **Select members** and add the Backup admin's email ID to assign the **Reader** role.
+
+ As the Backup admins are in another tenant, they'll be added as guests to the tenant that contains the Resource Guard.
+
+1. Click **Select** > **Review + assign** to complete the role assignment.
++++
+## Enable MUA on a vault
+
+Once the Backup admin has the Reader role on the Resource Guard, they can enable multi-user authorization on the vaults they manage by following these steps:
+
+**Choose a vault**
+
+# [Recovery Services vault](#tab/recovery-services-vault)
1. Go to the Recovery Services vault.
-1. Go to **Properties** on the left navigation panel, then to **Multi-User Authorization** and click **Update**.
+1. Go to **Properties** > **Multi-User Authorization**, and then select **Update**.
1. Now you're presented with the option to enable MUA and choose a Resource Guard in one of the following ways:
   1. You can either specify the URI of the Resource Guard. Make sure you specify the URI of a Resource Guard you have **Reader** access to and that it's in the same region as the vault. You can find the URI (Resource Guard ID) of the Resource Guard on its **Overview** screen:
Follow these steps:
1. Or you can select the Resource Guard from the list of Resource Guards you have **Reader** access to, and those available in the region.
   1. Click **Select Resource Guard**.
- 1. Click on the dropdown and select the directory the Resource Guard is in.
- 1. Click **Authenticate** to validate your identity and access.
+ 1. Select the dropdown list and choose the directory the Resource Guard is in.
+ 1. Select **Authenticate** to validate your identity and access.
1. After authentication, choose the **Resource Guard** from the list displayed.
-1. Click **Save** once done to enable MUA.
+1. Select **Save** to enable MUA.
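For a scripted equivalent on a Recovery Services vault, Az.RecoveryServices exposes a Resource Guard mapping cmdlet. This is a sketch assuming Set-AzRecoveryServicesResourceGuardMapping with these parameters; names and IDs are placeholders, and a Resource Guard in another tenant additionally needs a token for that tenant.

```powershell
# Associate the Resource Guard with the vault to turn on MUA; values are placeholders.
$vault = Get-AzRecoveryServicesVault -ResourceGroupName "rg-backup" -Name "rsv-demo"

Set-AzRecoveryServicesResourceGuardMapping `
    -VaultId $vault.ID `
    -ResourceGuardId "/subscriptions/<sub-id>/resourceGroups/rg-security/providers/Microsoft.DataProtection/resourceGuards/mua-resource-guard"
```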
+
+# [Backup vault](#tab/backup-vault)
+
+1. Go to the Backup vault for which you want to configure MUA.
+1. On the left panel, select **Properties**.
+1. Go to **Multi-User Authorization** and select **Update**.
+
+1. To enable MUA and choose a Resource Guard, perform one of the following actions:
+
+ - You can either specify the URI of the Resource Guard. Ensure that you specify the URI of a Resource Guard you have **Reader** access to and it's in the same region as the vault. You can find the URI (Resource Guard ID) of the Resource Guard on its **Overview** page.
+
+ - Or, you can select the Resource Guard from the list of Resource Guards you have **Reader** access to, and those available in the region.
+
+ 1. Click **Select Resource Guard**.
+ 1. Select the dropdown list, and then choose the directory the Resource Guard is in.
+ 1. Select **Authenticate** to validate your identity and access.
+ 1. After authentication, choose the **Resource Guard** from the list displayed.
+
+1. Select **Save** to enable MUA.
## Next steps

- [Protected operations using MUA](multi-user-authorization.md?pivots=vaults-recovery-services-vault#protected-operations-using-mua)
-- [Authorize critical (protected) operations using Azure AD Privileged Identity Management](multi-user-authorization.md#authorize-critical-protected-operations-using-azure-ad-privileged-identity-management)
+- [Authorize critical (protected) operations using Azure Active Directory Privileged Identity Management](multi-user-authorization.md#authorize-critical-protected-operations-using-azure-active-directory-privileged-identity-management)
- [Performing a protected operation after approval](multi-user-authorization.md#performing-a-protected-operation-after-approval)
-- [Disable MUA on a Recovery Services vault](multi-user-authorization.md#disable-mua-on-a-recovery-services-vault)
+- Disable MUA on a [Recovery Services vault](multi-user-authorization.md?tabs=azure-portal&pivots=vaults-recovery-services-vault#disable-mua-on-a-recovery-services-vault) or a [Backup vault](multi-user-authorization.md?tabs=azure-portal&pivots=vaults-backup-vault#disable-mua-on-a-backup-vault).
backup Multi User Authorization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/multi-user-authorization.md
Title: Configure Multi-user authorization using Resource Guard
description: This article explains how to configure Multi-user authorization using Resource Guard. zone_pivot_groups: backup-vaults-recovery-services-vault-backup-vault Previously updated : 11/08/2022 Last updated : 09/25/2023
This article describes how to configure Multi-user authorization (MUA) for Azure
This article demonstrates Resource Guard creation in a different tenant that offers maximum protection. It also demonstrates how to request and approve requests for performing critical operations using [Azure Active Directory Privileged Identity Management](../active-directory/privileged-identity-management/pim-configure.md) in the tenant housing the Resource Guard. You can optionally use other mechanisms to manage JIT permissions on the Resource Guard as per your setup.
-This document includes the following sections:
-
->[!div class="checklist"]
->- Before you start
->- Testing scenarios
->- Create a Resource Guard
->- Enable MUA on a Recovery Services vault
->- Protected operations on a vault using MUA
->- Authorize critical operations on a vault
->- Disable MUA on a Recovery Services vault
- >[!NOTE]
-> Multi-user authorization for Azure Backup is available in all public Azure regions.
+>- Multi-user authorization for Azure Backup is available in all public Azure regions.
+>- Multi-user authorization using Resource Guard for Backup vault is now generally available. [Learn more](multi-user-authorization.md?pivots=vaults-backup-vault).
## Before you start
To create the Resource Guard in a tenant different from the vault tenant, follow
:::image type="content" source="./media/multi-user-authorization/portal-settings-directories-subscriptions.png" alt-text="Screenshot showing the portal settings.":::
-1. Search for **Resource Guards** in the search bar and select the corresponding item from the drop-down list.
+1. Search for **Resource Guards** in the search bar, and then select the corresponding item from the drop-down list.
- :::image type="content" source="./media/multi-user-authorization/resource-guards-preview-inline.png" alt-text="Screenshot showing resource guards." lightbox="./media/multi-user-authorization/resource-guards-preview-expanded.png":::
+ :::image type="content" source="./media/multi-user-authorization/resource-guards.png" alt-text="Screenshot shows how to search resource guards." lightbox="./media/multi-user-authorization/resource-guards.png":::
- Select **Create** to start creating a Resource Guard. - In the create blade, fill in the required details for this Resource Guard.
To create the Resource Guard in a tenant different from the vault tenant, follow
You can also [select the operations for protection after creating the resource guard](?pivots=vaults-recovery-services-vault#select-operations-to-protect-using-resource-guard). 1. Optionally, add any tags to the Resource Guard as per the requirements
-1. Select **Review + Create**.
-
- Follow notifications for status and successful creation of the Resource Guard.
+1. Select **Review + Create** and follow notifications for status and successful creation of the Resource Guard.
# [PowerShell](#tab/powershell)
Choose the operations you want to protect using the Resource Guard out of all supported critical operations.
To exempt operations, follow these steps:
-1. In the Resource Guard created above, go to **Properties**.
+1. In the Resource Guard created above, go to **Properties** > **Recovery Services vault** tab.
2. Select **Disable** for operations that you want to exclude from being authorized using the Resource Guard. >[!Note]
To enable MUA on a vault, the admin of the vault must have **Reader** role on the Resource Guard or subscription containing the Resource Guard.
:::image type="content" source="./media/multi-user-authorization/demo-resource-guard-access-control.png" alt-text="Screenshot showing demo resource guard-access control.":::
-1. Select **Reader** from the list of built-in roles and select **Next** on the bottom of the screen.
+1. Select **Reader** from the list of built-in roles, and select **Next**.
:::image type="content" source="./media/multi-user-authorization/demo-resource-guard-add-role-assignment-inline.png" alt-text="Screenshot showing demo resource guard-add role assignment." lightbox="./media/multi-user-authorization/demo-resource-guard-add-role-assignment-expanded.png":::
After the Reader role assignment on the Resource Guard is complete, enable multi
To enable MUA on the vaults, follow these steps.
-1. Go to the Recovery Services vault. Go to **Properties** on the left navigation panel, then to **Multi-User Authorization** and click **Update**.
+1. Go to the Recovery Services vault. Go to **Properties** on the left navigation panel, then to **Multi-User Authorization** and select **Update**.
:::image type="content" source="./media/multi-user-authorization/test-vault-properties.png" alt-text="Screenshot showing the Recovery services vault properties."::: 1. Now, you're presented with the option to enable MUA and choose a Resource Guard using one of the following ways:
- 1. You can either specify the URI of the Resource Guard, make sure you specify the URI of a Resource Guard you have **Reader** access to and that is the same regions as the vault. You can find the URI (Resource Guard ID) of the Resource Guard in its **Overview** screen:
+ - You can either specify the URI of the Resource Guard, make sure you specify the URI of a Resource Guard you have **Reader** access to and that is the same regions as the vault. You can find the URI (Resource Guard ID) of the Resource Guard in its **Overview** screen:
:::image type="content" source="./media/multi-user-authorization/resource-guard-rg-inline.png" alt-text="Screenshot showing the Resource Guard." lightbox="./media/multi-user-authorization/resource-guard-rg-expanded.png":::
- 1. Or, you can select the Resource Guard from the list of Resource Guards you have **Reader** access to, and those available in the region.
+ - Or, you can select the Resource Guard from the list of Resource Guards you have **Reader** access to, and those available in the region.
1. Click **Select Resource Guard**
- 1. Click on the dropdown and select the directory the Resource Guard is in.
- 1. Click **Authenticate** to validate your identity and access.
+ 1. Select the dropdown list, and then choose the directory the Resource Guard is in.
+ 1. Select **Authenticate** to validate your identity and access.
1. After authentication, choose the **Resource Guard** from the list displayed. :::image type="content" source="./media/multi-user-authorization/testvault1-multi-user-authorization-inline.png" alt-text="Screenshot showing multi-user authorization." lightbox="./media/multi-user-authorization/testvault1-multi-user-authorization-expanded.png" :::
Depicted below is an illustration of what happens when the Backup admin tries to
:::image type="content" source="./media/multi-user-authorization/test-vault-properties-security-settings-inline.png" alt-text="Screenshot showing the Test Vault properties security settings." lightbox="./media/multi-user-authorization/test-vault-properties-security-settings-expanded.png":::
-## Authorize critical (protected) operations using Azure AD Privileged Identity Management
+## Authorize critical (protected) operations using Azure Active Directory Privileged Identity Management
The following sections discuss authorizing these requests using PIM. There are cases where you may need to perform critical operations on your backups and MUA can help you ensure that these are performed only when the right approvals or permissions exist. As discussed earlier, the Backup admin needs to have a Contributor role on the Resource Guard to perform critical operations that are in the Resource Guard scope. One of the ways to allow just-in-time access for such operations is through the use of [Azure Active Directory (Azure AD) Privileged Identity Management](../active-directory/privileged-identity-management/pim-configure.md).

>[!NOTE]
>Though using Azure AD PIM is the recommended approach, you can use manual or custom methods to manage access for the Backup admin on the Resource Guard. For managing access to the Resource Guard manually, use the 'Access control (IAM)' setting on the left navigation bar of the Resource Guard and grant the **Contributor** role to the Backup admin.
-### Create an eligible assignment for the Backup admin (if using Azure AD Privileged Identity Management)
+### Create an eligible assignment for the Backup admin (if using Azure Active Directory Privileged Identity Management)
The Security admin can use PIM to create an eligible assignment for the Backup admin as a Contributor to the Resource Guard. This enables the Backup admin to raise a request (for the Contributor role) when they need to perform a protected operation. To do so, the **security admin** performs the following:
By default, the setup above may not have an approver (and an approval flow requi
:::image type="content" source="./media/multi-user-authorization/add-contributor.png" alt-text="Screenshot showing how to add contributor.":::
-1. If the setting named **Approvers** shows *None* or displays incorrect approvers, select **Edit** to add the reviewers who would need to review and approve the activation request for the Contributor role.
+1. If the setting named **Approvers** shows *None* or displays incorrect approver(s), select **Edit** to add the reviewers who would need to review and approve the activation request for the Contributor role.
1. On the **Activation** tab, select **Require approval to activate** and add the approver(s) who need to approve each request. You can also select other security options like using MFA and mandating ticket options to activate the Contributor role. Optionally, select relevant settings on the **Assignment** and **Notification** tabs as per your requirements.
The tenant ID is required if the resource guard exists in a different tenant.
::: zone pivot="vaults-backup-vault"
-This article describes how to configure Multi-user authorization (MUA) for Azure Backup to add an additional layer of protection to critical operations on your Backup vault (preview).
-
->[!Note]
->Multi-user authorization using Resource Guard for Backup vault is in preview.
+This article describes how to configure Multi-user authorization (MUA) for Azure Backup to add an additional layer of protection to critical operations on your Backup vault.
This article demonstrates Resource Guard creation in a different tenant that offers maximum protection. It also demonstrates how to request and approve requests for performing critical operations using [Azure Active Directory Privileged Identity Management](../active-directory/privileged-identity-management/pim-configure.md) in the tenant housing the Resource Guard. You can optionally use other mechanisms to manage JIT permissions on the Resource Guard as per your setup.
-This document includes the following sections:
-
->[!div class="checklist"]
->- Before you start
->- Testing scenarios
->- Create a Resource Guard
->- Enable MUA on a Backup vault
->- Protected operations on a vault using MUA
->- Authorize critical operations on a vault
->- Disable MUA on a Backup vault
- >[!NOTE]
->Multi-user authorization for Azure Backup is available in all public Azure regions.
+>- Multi-user authorization using Resource Guard for Backup vault is now generally available.
+>- Multi-user authorization for Azure Backup is available in all public Azure regions.
## Before you start
To create the Resource Guard in a tenant different from the vault tenant as a Security admin, follow these steps:
:::image type="content" source="./media/multi-user-authorization/portal-settings-directories-subscriptions.png" alt-text="Screenshot showing the portal settings to configure for Backup vault.":::
-1. Search for **Resource Guards** in the search bar and select the corresponding item from the drop-down list.
+1. Search for **Resource Guards** in the search bar, and then select the corresponding item from the dropdown list.
- :::image type="content" source="./media/multi-user-authorization/resource-guards-preview-inline.png" alt-text="Screenshot showing resource guards for Backup vault." lightbox="./media/multi-user-authorization/resource-guards-preview-expanded.png":::
+ :::image type="content" source="./media/multi-user-authorization/resource-guards.png" alt-text="Screenshot showing resource guards for Backup vault." lightbox="./media/multi-user-authorization/resource-guards.png":::
1. Select **Create** to create a Resource Guard. 1. In the Create blade, fill in the required details for this Resource Guard.
- - Ensure that the Resource Guard is in the same Azure regions as the Backup vault.
+ - Ensure that the Resource Guard is in the same Azure region as the Backup vault.
- Add a description on how to request access to perform actions on associated vaults when needed. This description appears in the associated vaults to guide the Backup admin on how to get the required permissions. 1. On the **Protected operations** tab, select the operations you need to protect using this resource guard under the **Backup vault** tab.
To create the Resource Guard in a tenant different from the vault tenant as a Se
:::image type="content" source="./media/multi-user-authorization/backup-vault-select-operations-for-protection.png" alt-text="Screenshot showing how to select operations for protecting using Resource Guard."::: 1. Optionally, add any tags to the Resource Guard as per the requirements.
-1. Select **Review + Create** and then follow the notifications to monitor the status and a successful creation of the Resource Guard.
+1. Select **Review + Create** and then follow the notifications to monitor the status and the successful creation of the Resource Guard.
### Select operations to protect using Resource Guard
To select the operations for protection, follow these steps:
1. In the Resource Guard that you've created, go to **Properties** > **Backup vault** tab. 1. Select **Disable** for the operations that you want to exclude from being authorized.
- You can't disable the **Remove MUA protection** operation.
+ You can't disable the **Remove MUA protection** and **Disable soft delete** operations.
1. Optionally, in the **Backup vaults** tab, update the description for the Resource Guard. 1. Select **Save**.
To assign the **Reader** role on the Resource Guard, follow these steps:
:::image type="content" source="./media/multi-user-authorization/demo-resource-guard-access-control.png" alt-text="Screenshot showing demo resource guard-access control for Backup vault.":::
-1. Select **Reader** from the list of built-in roles and select **Next** on the bottom of the screen.
+1. Select **Reader** from the list of built-in roles, and select **Next**.
:::image type="content" source="./media/multi-user-authorization/demo-resource-guard-add-role-assignment-inline.png" alt-text="Screenshot showing demo resource guard-add role assignment for Backup vault." lightbox="./media/multi-user-authorization/demo-resource-guard-add-role-assignment-expanded.png":::
Once the Backup admin has the Reader role on the Resource Guard, they can enable
1. To enable MUA and choose a Resource Guard, perform one of the following actions:
- - You can either specify the URI of the Resource Guard. Ensure that you specify the URI of a Resource Guard you have **Reader** access to and it's in the same regions as the vault. You can find the URI (Resource Guard ID) of the Resource Guard on its **Overview** page.
+ - You can either specify the URI of the Resource Guard. Ensure that you specify the URI of a Resource Guard you have **Reader** access to and it's in the same region as the vault. You can find the URI (Resource Guard ID) of the Resource Guard on its **Overview** page.
:::image type="content" source="./media/multi-user-authorization/resource-guard-rg-inline.png" alt-text="Screenshot showing the Resource Guard for Backup vault protection." lightbox="./media/multi-user-authorization/resource-guard-rg-expanded.png"::: - Or, you can select the Resource Guard from the list of Resource Guards you have **Reader** access to, and those available in the region. 1. Click **Select Resource Guard**.
- 1. Select the drop-down and select the directory the Resource Guard is in.
+ 1. Select the dropdown and select the directory the Resource Guard is in.
1. Select **Authenticate** to validate your identity and access. 1. After authentication, choose the **Resource Guard** from the list displayed.
To perform a protected operation (disabling MUA), follow these steps:
:::image type="content" source="./media/multi-user-authorization/test-vault-properties-security-settings-inline.png" alt-text="Screenshot showing the test Backup vault properties security settings." lightbox="./media/multi-user-authorization/test-vault-properties-security-settings-expanded.png":::
-## Authorize critical (protected) operations using Azure AD Privileged Identity Management
+## Authorize critical (protected) operations using Azure Active Directory Privileged Identity Management
-There are scenarios where you may need to perform critical operations on your backups and you can perform them with the right approvals or permissions with MUA. The following sections explain on how to authorize the critical operation requests using Privileged Identity Management (PIM).
+There are scenarios where you may need to perform critical operations on your backups and you can perform them with the right approvals or permissions with MUA. The following sections explain how to authorize the critical operation requests using Privileged Identity Management (PIM).
The Backup admin must have a Contributor role on the Resource Guard to perform critical operations in the Resource Guard scope. One of the ways to allow just-in-time (JIT) operations is through the use of [Azure Active Directory (Azure AD) Privileged Identity Management](../active-directory/privileged-identity-management/pim-configure.md). >[!NOTE]
->We recommend to use the Azure AD PIM. However, you can also use manual or custom methods to manage access for the Backup admin on the Resource Guard. To manually manage access to the Resource Guard, use the *Access control (IAM)* setting on the left pane of the Resource Guard and grant the **Contributor** role to the Backup admin.
+>We recommend that you use the Azure AD PIM. However, you can also use manual or custom methods to manage access for the Backup admin on the Resource Guard. To manually manage access to the Resource Guard, use the *Access control (IAM)* setting on the left pane of the Resource Guard and grant the **Contributor** role to the Backup admin.
-### Create an eligible assignment for the Backup admin using Azure AD Privileged Identity Management
+### Create an eligible assignment for the Backup admin using Azure Active Directory Privileged Identity Management
The **Security admin** can use PIM to create an eligible assignment for the Backup admin as a Contributor to the Resource Guard. This enables the Backup admin to raise a request (for the Contributor role) when they need to perform a protected operation.
By default, the above setup may not have an approver (and an approval flow requi
:::image type="content" source="./media/multi-user-authorization/add-contributor.png" alt-text="Screenshot showing how to add a contributor.":::
-1. Select **Edit** to add the reviewers who must review and approve the activation request for the *Contributor* role in case you find that Approvers show *None* or displays incorrect approvers.
+1. Select **Edit** to add the reviewers who must review and approve the activation request for the *Contributor* role if you find that **Approvers** shows *None* or displays incorrect approver(s).
1. On the **Activation** tab, select **Require approval to activate** to add the approver(s) who must approve each request.
-1. Select security options, such as Multi Factor Authentication (MFA), Mandating ticket. to activate *Contributor* role.
+1. Select security options, such as Multi-Factor Authentication (MFA) and mandating a ticket, to activate the *Contributor* role.
1. Select the appropriate options on **Assignment** and **Notification** tabs as per your requirement. :::image type="content" source="./media/multi-user-authorization/edit-role-settings.png" alt-text="Screenshot showing how to edit the role setting.":::
-1. Select **Update** to complete the set-up of approvers to activate *Contributor* role.
+1. Select **Update** to complete the setup of approvers to activate the *Contributor* role.
### Request activation of an eligible assignment to perform critical operations
Once the Backup admin raises a request for activating the Contributor role, the
To review and approve the request, follow these steps:
-1. In the security tenant, go to [Azure AD Privileged Identity Management.](../active-directory/privileged-identity-management/pim-configure.md).
+1. In the security tenant, go to [Azure AD Privileged Identity Management](../active-directory/privileged-identity-management/pim-configure.md).
1. Go to **Approve Requests**. 1. Under **Azure resources**, you can see the request awaiting approval.
backup Quick Backup Azure Enable Enhanced Soft Delete https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/quick-backup-azure-enable-enhanced-soft-delete.md
+
+ Title: Quickstart - Enable enhanced soft delete for Azure Backup
+description: This quickstart describes how to enable enhanced soft delete for Azure Backup.
+ Last updated : 09/11/2023+++++
+# Quickstart: Enable enhanced soft delete in Azure Backup
+
+This quickstart describes how to enable enhanced soft delete to protect your data and recover backups, if they're deleted.
+
+[Enhanced soft delete](backup-azure-enhanced-soft-delete-about.md) improves the [soft delete](backup-azure-security-feature-cloud.md) capability in Azure Backup that enables you to recover your backup data in case of accidental or malicious deletion. With enhanced soft delete, you can make soft delete always-on, which protects it from being disabled by malicious actors and gives your backups better protection against various threats. This feature also lets you customize the retention period for which soft-deleted data must be retained.
+
+>[!Note]
+>Once you enable the *always-on* state for soft delete, you can't disable it for that vault.
+
+## Before you start
+
+- Enhanced soft delete is supported for Recovery Services vaults and Backup vaults.
+- Enhanced soft delete applies to all vaulted workloads alike in Recovery Services vaults and Backup vaults. However, it currently doesn't support operational tier workloads, such as Azure Files backup, Operational backup for Blobs, and Disk and VM snapshot backups.
+- For hybrid backups (using MARS, DPM, or MABS), enabling always-on soft delete disallows server deregistration and deletion of backups via the Azure portal. If you don't want to retain the backed-up data, we recommend that you don't enable *always-on soft delete* for the vault, or that you perform *stop protection with delete data* before the server is decommissioned.
+- There's no retention cost for the default soft delete duration of 14 days for vaulted backup, after which it incurs regular backup cost.
+
+## Enable soft delete with always-on state
+
+Soft delete is enabled by default for all new vaults you create. To make enabled settings irreversible, select **Enable Always-on Soft Delete**.
+
+**Choose a vault**
+
+# [Recovery Services vault](#tab/recovery-services-vault)
+
+Follow these steps:
+
+1. Go to **Recovery Services vault** > **Properties**.
+
+1. Under **Soft Delete**, select **Update** to modify the soft delete setting.
+
+ :::image type="content" source="./media/backup-azure-enhanced-soft-delete/open-soft-delete-properties-blade-inline.png" alt-text="Screenshot showing you how to open Soft Delete blade." lightbox="./media/backup-azure-enhanced-soft-delete/open-soft-delete-properties-blade-expanded.png":::
+
+ The soft delete settings for cloud and hybrid workloads are already enabled, unless you've explicitly disabled them earlier.
+
+1. If soft delete settings are disabled for any workload type in the **Soft Delete** blade, select the respective checkboxes to enable them.
+
+ >[!Note]
+ >Enabling soft delete for hybrid workloads also enables other security settings, such as multifactor authentication and alert notifications, for backup of workloads running on on-premises servers.
+
+1. Choose the number of days between *14* and *180* to specify the soft delete retention period.
+
+ >[!Note]
+ >- There is no cost for soft delete for *14* days. However, deleted instances in soft delete state are charged if the soft delete retention period is *>14* days. Learn about [pricing details](backup-azure-enhanced-soft-delete-about.md#pricing).
+ >- Once configured, the soft delete retention period applies to all soft deleted instances of cloud and hybrid workloads in the vault.
+
+1. Select the **Enable Always-on Soft delete** checkbox to enable soft delete and make it irreversible.
+
+ :::image type="content" source="./media/backup-azure-enhanced-soft-delete/enable-always-on-soft-delete.png" alt-text="Screenshot showing you how to enable always-on state of soft delete.":::
+
+ >[!Note]
+ >If you opt for *Enable Always-on Soft Delete*, select the *confirmation checkbox* to proceed. Once enabled, you can't disable the settings for this vault.
+
+1. Select **Update** to save the changes.
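If you manage vault security settings from PowerShell, a rough equivalent for a Recovery Services vault is sketched below. It assumes Set-AzRecoveryServicesVaultProperty supports the retention parameter and the AlwaysON state in your Az.RecoveryServices version; names are placeholders, and the AlwaysON line is commented out because it can't be undone.

```powershell
# Enable soft delete with a custom retention period; names are placeholders.
$vault = Get-AzRecoveryServicesVault -ResourceGroupName "rg-backup" -Name "rsv-demo"

Set-AzRecoveryServicesVaultProperty `
    -VaultId $vault.ID `
    -SoftDeleteFeatureState Enable `
    -SoftDeleteRetentionPeriodInDays 60

# Irreversible: locks soft delete to the always-on state for this vault.
# Set-AzRecoveryServicesVaultProperty -VaultId $vault.ID -SoftDeleteFeatureState AlwaysON
```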
+
+# [Backup vault](#tab/backup-vault)
+
+Follow these steps:
+
+1. Go to **Backup vault** > **Properties**.
+
+1. Under **Soft Delete**, select **Update** to modify the soft delete setting.
+
+ :::image type="content" source="./media/backup-azure-enhanced-soft-delete/open-soft-delete-properties.png" alt-text="Screenshot showing you how to open soft delete blade for Backup vault.":::
+
+ Soft delete is enabled by default with the checkboxes selected.
+
+1. If you've explicitly disabled soft delete for any workload type in the **Soft Delete** blade earlier, select the checkboxes to enable them.
+
+1. Choose the number of days between *14* and *180* to specify the soft delete retention period.
+
+ >[!Note]
+ >There is no cost for enabling soft delete for *14* days. However, you're charged for the soft delete instances if soft delete retention period is *>14* days. Learn about the [pricing details](backup-azure-enhanced-soft-delete-about.md#pricing).
+
+1. Select the **Enable Always-on Soft Delete** checkbox to enable soft delete always-on and make it irreversible.
+
+ :::image type="content" source="./media/backup-azure-enhanced-soft-delete/enable-always-on-soft-delete-backup-vault.png" alt-text="Screenshot showing you how to enable always-on state for Backup vault.":::
+
+ >[!Note]
+ >If you opt for *Enable Always-on Soft Delete*, select the *confirmation checkbox* to proceed. Once enabled, you can't disable the settings for this vault.
+
+1. Select **Update** to save the changes.
+++
+## Next steps
+
+- Learn more about [enhanced soft delete for Azure Backup](backup-azure-enhanced-soft-delete-about.md).
+- Learn more about [soft delete of recovery points](backup-azure-enhanced-soft-delete-about.md#soft-delete-of-recovery-points).
backup Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/whats-new.md
Title: What's new in Azure Backup
-description: Learn about new features in Azure Backup.
+description: Learn about the new features in Azure Backup.
Previously updated : 09/14/2023 Last updated : 09/29/2023
You can learn more about the new releases by bookmarking this page or by [subscr
## Updates summary

- September 2023
+ - [Multi-user authorization using Resource Guard for Backup vault is now generally available](#multi-user-authorization-using-resource-guard-for-backup-vault-is-now-generally-available)
+ - [Enhanced soft delete for Azure Backup is now generally available](#enhanced-soft-delete-for-azure-backup-is-now-generally-available)
 - [Support for selective disk backup with enhanced policy for Azure VM is now generally available](whats-new.md#support-for-selective-disk-backup-with-enhanced-policy-for-azure-vm-is-now-generally-available)
- August 2023
 - [Save your MARS backup passphrase securely to Azure Key Vault (preview)](#save-your-mars-backup-passphrase-securely-to-azure-key-vault-preview)
You can learn more about the new releases by bookmarking this page or by [subscr
- February 2021
 - [Backup for Azure Blobs (in preview)](#backup-for-azure-blobs-in-preview)
+## Multi-user authorization using Resource Guard for Backup vault is now generally available
+
+Azure Backup now supports multi-user authorization (MUA), which allows you to add an additional layer of protection to critical operations on your Backup vaults. For MUA, Azure Backup uses the Azure resource, Resource Guard, to ensure critical operations are performed only with applicable authorization.
+
+For more information, see [MUA for Backup vault](multi-user-authorization-concept.md?tabs=backup-vault).
+
+## Enhanced soft delete for Azure Backup is now generally available
+
+Enhanced soft delete provides improvements to the existing [soft delete](backup-azure-security-feature-cloud.md) feature. With enhanced soft delete, you now get the ability to make soft delete always-on, thus protecting it from being disabled by any malicious actors.
+
+You can also customize the soft delete retention period (for which soft-deleted data must be retained). Enhanced soft delete is available for Recovery Services vaults and Backup vaults.
+
+>[!Note]
+>Once you enable the *always-on* state for soft delete, you can't disable it for that vault.
+
+For more information, see [Enhanced soft delete for Azure Backup](backup-azure-enhanced-soft-delete-about.md).
## Save your MARS backup passphrase securely to Azure Key Vault (preview)
chaos-studio Chaos Studio Service Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/chaos-studio/chaos-studio-service-limits.md
Last updated 11/01/2021 -+

# Azure Chaos Studio Preview service limits
-This article provides service limits for Azure Chaos Studio Preview.
+This article provides service limits for Azure Chaos Studio Preview. For more information about Azure-wide service limits and quotas, see [Azure subscription and service limits, quotas, and constraints](../azure-resource-manager/management/azure-subscription-service-limits.md).
## Experiment and target limits
-Chaos Studio applies limits to the number of objects, duration of activities, and retention of data.
+Chaos Studio applies limits to the number of resources, duration of activities, and retention of data.
-| Limit | Value |
-|--|--|
-| Actions per experiment | 9 |
-| Branches per experiment | 9 |
-| Steps per experiment | 4 |
-| Action duration (hours) | 12 |
-| Concurrent experiments executing per region and subscription | 5 |
-| Total experiment duration (hours) | 12 |
-| Number of experiments per region and subscription | 500 |
-| Number of targets per action | 50 |
-| Number of active agents per target | 1,000 |
-| Number of targets per region and subscription | 10,000 |
+| Limit | Value | Description |
+|--|--|--|
+| Actions per experiment | 9 | The maximum number of actions (such as faults or time delays) in an experiment. |
+| Branches per experiment | 9 | The maximum number of parallel tracks that can execute within an experiment. |
+| Steps per experiment | 4 | The maximum number of steps that execute in series within an experiment. |
+| Action duration (hours) | 12 | The maximum time duration of an individual action. |
+| Total experiment duration (hours) | 12 | The maximum duration of an individual experiment, including all actions. |
+| Concurrent experiments executing per region and subscription | 5 | The number of experiments that can run at the same time within a region and subscription. |
+| Experiment history retention time (days) | 120 | The time period after which individual results of experiment executions are automatically removed. |
+| Number of experiment resources per region and subscription | 500 | The maximum number of experiment resources a subscription can store in a given region. |
+| Number of targets per action | 50 | The maximum number of resources an individual action can target for execution. For example, the maximum number of Virtual Machines that can be shut down by a single Virtual Machine Shutdown fault. |
+| Number of agents per target | 1,000 | The maximum number of running agents that can be associated with a single target. For example, the agents running on all instances within a single Virtual Machine Scale Set. |
+| Number of targets per region and subscription | 10,000 | The maximum number of target resources within a single subscription and region. |
## API throttling limits
-Chaos Studio applies limits to all Azure Resource Manager operations. Requests made over the limit are throttled. All request limits are applied for a five-minute interval unless otherwise specified.
+Chaos Studio applies limits to all Azure Resource Manager operations. Requests made over the limit are throttled. All request limits are applied for a **five-minute interval** unless otherwise specified. For more information about Azure Resource Manager requests, see [Throttling Resource Manager requests](../azure-resource-manager/management/request-limits-and-throttling.md).
| Operation | Requests |
|--|--|
Chaos Studio applies limits to all Azure Resource Manager operations. Requests m
| Microsoft.Chaos/targets/capabilities/delete | 600 |
| Microsoft.Chaos/locations/targetTypes/read | 50 |
| Microsoft.Chaos/locations/targetTypes/capabilityTypes/read | 50 |
+
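Throttled requests come back with HTTP 429, so clients should back off before retrying. The loop below is an illustrative sketch using Invoke-AzRestMethod; the experiment path and API version are placeholders, and it honors the Retry-After header when the service sends one.

```powershell
# Retry a throttled (HTTP 429) Chaos Studio request; the path is a placeholder.
$path = "/subscriptions/<sub-id>/resourceGroups/rg-chaos" +
        "/providers/Microsoft.Chaos/experiments/my-experiment?api-version=<api-version>"

for ($attempt = 1; $attempt -le 5; $attempt++) {
    $response = Invoke-AzRestMethod -Method GET -Path $path
    if ($response.StatusCode -ne 429) { break }

    # Use the server-suggested delay when present; otherwise wait 30 seconds.
    $delay = $response.Headers.RetryAfter.Delta.TotalSeconds
    if (-not $delay) { $delay = 30 }
    Start-Sleep -Seconds $delay
}
$response.Content
```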
chaos-studio Chaos Studio Set Up App Insights https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/chaos-studio/chaos-studio-set-up-app-insights.md
+
+ Title: Set up App Insights for a Chaos Studio agent-based experiment
+description: Understand the steps to connect App Insights to your Chaos Studio Agent-Based Experiment
+++ Last updated : 09/27/2023++++
+# How-to: Configure your experiment to emit Experiment Fault Events to App Insights
+In this guide, we'll show you the steps needed to configure a Chaos Studio **Agent-based** Experiment to emit telemetry to App Insights. These events show the start and stop of each fault as well as the type of fault executed and the resource the fault was executed against. App Insights is the primary recommended logging solution for **Agent-based** experiments in Chaos Studio.
+
+## Prerequisites
+- An Azure subscription
+- An existing Chaos Studio [**Agent-based** Experiment](chaos-studio-tutorial-agent-based-portal.md)
+- [Required for Application Insights Resource as well] An existing [Log Analytics Workspace](../azure-monitor/logs/quick-create-workspace.md)
+- An existing [Application Insights Resource](../azure-monitor/app/create-workspace-resource.md)
+- [Required for Agent-based Chaos Experiments] A [User-Assigned Managed Identity](../active-directory/managed-identities-azure-resources/how-manage-user-assigned-managed-identities.md)
+
+## Step 1: Copy the Instrumentation Key from your Application Insights Resource
+Once you have met all the prerequisite steps, copy the **Instrumentation Key** found in the overview page of your Application Insights Resource (see the following screenshot).
+
+<br/>
+
+[![Screenshot that shows Instrumentation Key in App Insights.](images/step-1a-app-insights.png)](images/step-1a-app-insights.png#lightbox)
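If you'd rather read the key programmatically than copy it from the portal, a minimal sketch with the Az.ApplicationInsights module follows; the resource group and component names are placeholders.

```powershell
# Requires the Az.ApplicationInsights module; names are placeholders.
$appInsights = Get-AzApplicationInsights `
    -ResourceGroupName "rg-chaos" `
    -Name "appi-chaos-demo"

# The Instrumentation Key you paste into the target configuration.
$appInsights.InstrumentationKey
```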
+
+## Step 2: Enable the Target Platform for your Agent-Based Fault with Application Insights
+Navigate to the Chaos Studio overview page and select the **Targets** blade under the "Experiments Management" section. Find the target platform, ensure it's enabled for agent-based faults, and select "Manage Actions" in the right-most column. See the screenshot below for an example:
+<br/>
+
+<br/>
+
+[![Screenshot that shows the Chaos Targets Page.](images/step-2a-app-insights.png)](images/step-2a-app-insights.png#lightbox)
+
+## Step 3: Add your Application Insights account and Instrumentation key
+At this point, the resource configuration page shown in the screenshot appears. After configuring your managed identity, make sure Application Insights is "Enabled", then select your desired Application Insights Account and enter the Instrumentation Key you copied in Step 1. Once you have filled out the required information, select "Review + Create" to deploy your resource.
+
+<br/>
+
+[![Screenshot of Targets Deployment Page.](images/step-3a-app-insights.png)](images/step-3a-app-insights.png#lightbox)
+
+## Step 4: Run the chaos experiment
+Your Chaos Target is now configured to emit telemetry to the App Insights Resource you configured. If you navigate to your specific Application Insights Resource and open the "Logs" blade under the "Monitoring" section, you should see the Agent health status and any actions the Agent is taking on your Target Platform. You can now run your experiment and see logging in your Application Insights Resource. See the following screenshot for an example of an App Insights Resource running successfully on an Agent-based Chaos Target platform.
+
+<br/>
+
+To query your logs, navigate to the "Logs" tab in the Application Insights Resource to get your desired logging information in your desired format.
+
+<br/>
+
+[![Screenshot of Logs tab in Application Insights Resource.](images/step-4a-app-insights.png)](images/step-4a-app-insights.png#lightbox)
chaos-studio Chaos Studio Set Up Azure Monitor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/chaos-studio/chaos-studio-set-up-azure-monitor.md
+
+ Title: Set up Azure monitor for a Chaos Studio experiment
+description: Understand the steps to connect Azure Monitor to your Chaos Studio Experiment
+++ Last updated : 09/27/2023++++
+# How-to: Configure your experiment to emit Experiment Fault Events to Azure Monitor
+In this guide, we'll show you the steps needed to configure an Experiment to emit telemetry to Azure Monitor. These events show the start and stop of each fault as well as the type of fault executed and the resource the fault was executed against. You can overlay this data on top of your existing Azure Monitor or external monitoring dashboards.
+
+## Prerequisites
+- An Azure subscription
+- An existing Chaos Studio Experiment ([How to create your first Chaos Experiment](chaos-studio-quickstart-azure-portal.md))
+- An existing Log Analytics Workspace ([How to Create a Log Analytics Workspace](../azure-monitor/logs/quick-create-workspace.md))
+
+## Step 1: Navigate to Diagnostic Settings tab in your Chaos Experiment
+Navigate to the Chaos Experiment you want to emit telemetry to Azure Monitor and open it. Then navigate to the "Diagnostic settings" tab under the "Monitoring" section as shown in the below screenshot:
+
+<br/>
+
+[![Screenshot that shows Diagnostic Settings in Chaos Experiment.](images/step-1a.png)](images/step-1a.png#lightbox)
+
+## Step 2: Connect your Chaos Experiment to your desired Log Analytics Workspace
+Once you are in the "Diagnostic Settings" tab within your Chaos Experiment, select "Add Diagnostic Setting."
+Enter the following details:
+1. **Diagnostic Setting Name**: Any string you want, much like a resource group name
+2. **Category Groups**: Choose which category of logging you want to output to the Log Analytics workspace.
+3. **Subscription**: The subscription that includes the Log Analytics Workspace you would like to use
+4. **Log Analytics Workspace**: Where you'll select your desired Log Analytics Workspace
+<br/>
+All the other settings are optional. (A scripted alternative is sketched after the screenshot below.)
+<br/>
+
+<br/>
+
+[![Screenshot that shows the Diagnostic Settings blade and required information.](images/step-2a.png)](images/step-2a.png#lightbox)
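As a scripted alternative to the portal steps above, you can create the diagnostic setting with Az.Monitor. This sketch uses the older Set-AzDiagnosticSetting cmdlet; both resource IDs are placeholders, and newer module versions replace it with New-AzDiagnosticSetting.

```powershell
# Route Chaos Experiment logs to a Log Analytics workspace; IDs are placeholders.
$experimentId = "/subscriptions/<sub-id>/resourceGroups/rg-chaos" +
                "/providers/Microsoft.Chaos/experiments/my-experiment"
$workspaceId  = "/subscriptions/<sub-id>/resourceGroups/rg-logs" +
                "/providers/Microsoft.OperationalInsights/workspaces/law-chaos"

Set-AzDiagnosticSetting `
    -Name "chaos-to-law" `
    -ResourceId $experimentId `
    -WorkspaceId $workspaceId `
    -Enabled $true
```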
+
+## Step 3: Run the chaos experiment
+Once you have completed Step 2, your experiment is configured to emit telemetry to Azure Monitor on the next Chaos Experiment execution. It typically takes about 20 minutes for the logs to populate. Once populated, you can view the log events from the logs tab. Events include experiment start, stop, and details about the faults executed. You can even turn the logs into chart visualizations or overlay your existing live site visualizations with chaos metadata.
+
+<br/>
+
+To query your logs, navigate to the "Logs" tab in your Chaos Experiment Resource to get your desired logging information in your desired format.
+
+<br/>
+
+[![Screenshot of Logs tab in Chaos Experiment Resource.](images/step-3a.png)](images/step-3a.png#lightbox)
cloud-shell Persisting Shell Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-shell/persisting-shell-storage.md
description: Walkthrough of how Azure Cloud Shell persists files. ms.contributor: jahelmic Previously updated : 04/25/2023 Last updated : 09/29/2023 tags: azure-resource-manager
This fileshare is used for both Bash and PowerShell.
## Use existing resources
-Using the advanced option, you can associate existing resources. When selecting a Cloud Shell region,
-you must select a backing storage account co-located in the same region. For example, if your
-assigned region is West US then you must associate a fileshare that resides within West US as well.
-
-When the storage setup prompt appears, select **Show advanced settings** to view more options. The
-populated storage options filter for locally redundant storage (LRS), geo-redundant storage (GRS),
-and zone-redundant storage (ZRS) accounts.
+Using the advanced option, you can associate existing resources. When the storage setup prompt
+appears, select **Show advanced settings** to view more options. The populated storage options
+filter for locally redundant storage (LRS), geo-redundant storage (GRS), and zone-redundant storage
+(ZRS) accounts.
> [!NOTE] > Using GRS or ZRS storage accounts are recommended for additional resiliency for your backing file
Cloud Shell machines exist in the following regions:
| Europe | North Europe, West Europe | | Asia Pacific | India Central, Southeast Asia |
-Customers should choose a primary region, unless they have a requirement that their data at rest be
-stored in a particular region. If they have such a requirement, a secondary storage region should be
-used.
+You should choose a region that meets your requirements.
### Secondary storage regions
of their fileshare.
## Restrict resource creation with an Azure resource policy
-Storage accounts that you create in Cloud Shell are tagged with
-`ms-resource-usage:azure-cloud-shell`. If you want to disallow users from creating storage accounts
-in Cloud Shell, create an [Azure resource policy for tags][02] that is triggered by this specific
-tag.
+Storage accounts that you create in Cloud Shell are tagged with `ms-resource-usage:azure-cloud-shell`.
+If you want to disallow users from creating storage accounts in Cloud Shell, create an
+[Azure resource policy for tags][02] that's triggered by this specific tag.
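As an illustration of such a tag-triggered policy, a deny rule keyed to the `ms-resource-usage:azure-cloud-shell` tag might look like the following sketch; the policy name, assignment scope, and exact rule shape are assumptions, not the policy from the linked article.

```powershell
# Deny new storage accounts that carry the Cloud Shell usage tag; names are placeholders.
$rule = @'
{
  "if": {
    "allOf": [
      { "field": "type", "equals": "Microsoft.Storage/storageAccounts" },
      { "field": "tags['ms-resource-usage']", "equals": "azure-cloud-shell" }
    ]
  },
  "then": { "effect": "deny" }
}
'@

$definition = New-AzPolicyDefinition -Name "deny-cloud-shell-storage" -Policy $rule
New-AzPolicyAssignment -Name "deny-cloud-shell-storage" `
    -Scope "/subscriptions/<sub-id>" -PolicyDefinition $definition
```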
## How Cloud Shell storage works
cloud-shell Quickstart Deploy Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-shell/quickstart-deploy-vnet.md
description: This article provides step-by-step instructions to deploy Azure Cloud Shell in a private virtual network. ms.contributor: jahelmic Previously updated : 06/29/2023 Last updated : 09/29/2023 Title: Deploy Azure Cloud Shell in a VNET with quickstart templates
+ Title: Deploy Azure Cloud Shell in a virtual network with quickstart templates
-# Deploy Azure Cloud Shell in a VNET with quickstart templates
+# Deploy Azure Cloud Shell in a virtual network with quickstart templates
-Before you can deploy Azure Cloud Shell in a virtual network (VNET) configuration using the
+Before you can deploy Azure Cloud Shell in a virtual network (VNet) configuration using the
quickstart templates, there are several prerequisites to complete before running the templates. This document guides you through the process to complete the configuration.
-## Steps to deploy Azure Cloud Shell in a VNET
+## Steps to deploy Azure Cloud Shell in a virtual network
-This article walks you through the following steps to deploy Azure Cloud Shell in a VNET:
+This article walks you through the following steps to deploy Azure Cloud Shell in a virtual network:
1. Collect the required information
-1. Provision the virtual networks using the **Azure Cloud Shell - VNet** ARM template
-1. Provision the VNET storage account using the **Azure Cloud Shell - VNet storage** ARM template
-1. Configure and use Azure Cloud Shell in a VNET
+1. Create the virtual networks using the **Azure Cloud Shell - VNet** ARM template
+1. Create the virtual network storage account using the **Azure Cloud Shell - VNet storage** ARM template
+1. Configure and use Azure Cloud Shell in a virtual network
## 1. Collect the required information

There are several pieces of information that you need to collect before you can deploy Azure Cloud Shell. You can use the default Azure Cloud Shell instance to gather the required information and create the
-necessary resources. You should create dedicated resources for the Azure Cloud Shell VNET
+necessary resources. You should create dedicated resources for the Azure Cloud Shell VNet
deployment. All resources must be in the same Azure region and contained in the same resource group.

- **Subscription** - The name of your subscription containing the resource group used for the Azure
- Cloud Shell VNET deployment
-- **Resource Group** - The name of the resource group used for the Azure Cloud Shell VNET deployment
+ Cloud Shell VNet deployment
+- **Resource Group** - The name of the resource group used for the Azure Cloud Shell VNet deployment
- **Region** - The location of the resource group
-- **Virtual Network** - The name of the virtual network created for Azure Cloud Shell VNET
+- **Virtual Network** - The name of the virtual network created for Azure Cloud Shell VNet
- **Azure Container Instance OID** - The ID of the Azure Container Instance for your resource group
- **Azure Relay Namespace** - The name that you want to assign to the Relay resource created by the
  template
Register-AzResourceProvider -ProviderNamespace Microsoft.ContainerInstance
### Azure Container Instance ID
-To configure the VNET for Cloud Shell using the quickstarts, retrieve the `Azure Container Instance`
+To configure the virtual network for Cloud Shell using the quickstarts, retrieve the `Azure Container Instance`
ID for your organization.

```powershell
Azure Container Instance Service 8fe7fd25-33fe-4f89-ade3-0e705fcf4370 34fbe509-d
```
Take note of the **Id** value for the `Azure Container Instance` service principal. It's needed for the **Azure Cloud Shell - VNet storage** template.
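
As a sketch, assuming the Az.Resources PowerShell module, you can retrieve this service principal by the display name shown in the sample output above:

```powershell
# Find the Azure Container Instance service principal and show its Id,
# which the storage template asks for.
Get-AzADServicePrincipal -DisplayName 'Azure Container Instance Service' | Select-Object DisplayName, Id
```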
-## 2. Provision the virtual network using the ARM template
+## 2. Create the virtual network using the ARM template
Use the [Azure Cloud Shell - VNet][08] template to create Cloud Shell resources in a virtual network. The template creates three subnets under the virtual network created earlier. You may choose to change the supplied names of the subnets or use the defaults. The virtual network, along
-with the subnets, require valid IP address assignments.
+with the subnets, requires valid IP address assignments. You need at least one IP address for the
+Relay subnet and enough IP addresses in the container subnet to support the number of concurrent
+sessions you expect to use.
The ARM template requires specific information about the resources you created earlier, along with naming information for new resources. This information is filled out along with the prefilled
information in the form.
Information needed for the template:

- **Subscription** - The name of your subscription containing the resource group for Azure Cloud
- Shell VNET
+ Shell VNet
- **Resource Group** - The resource group name of either an existing or newly created resource group
- **Region** - Location of the resource group
-- **Virtual Network** - The name of the virtual network created for Azure Cloud Shell VNET
+- **Virtual Network** - The name of the virtual network created for Azure Cloud Shell
- **Azure Container Instance OID** - The ID of the Azure Container Instance for your resource group

Fill out the form with the following information:
Fill out the form with the following information:
| Instance details | Value |
| - | - |
| Region | Prefilled with your default region.<br>For this example, we're using `East US`. |
-| Existing VNET Name | Fill in the value from the prerequisite information you gathered.<br>For this example, we're using `vnet-cloudshell-eastus`. |
+| Existing Virtual Network Name | Fill in the value from the prerequisite information you gathered.<br>For this example, we're using `vnet-cloudshell-eastus`. |
| Relay Namespace Name | Create a name that you want to assign to the Relay resource created by the template.<br>For this example, we're using `arn-cloudshell-eastus`. |
| Azure Container Instance OID | Fill in the value from the prerequisite information you gathered.<br>For this example, we're using `8fe7fd25-33fe-4f89-ade3-0e705fcf4370`. |
| Container Subnet Name | Defaults to `cloudshellsubnet`. Enter the name of the subnet for your container. |
Fill out the form with the following information:
Once the form is complete, select **Review + Create** and deploy the network ARM template to your subscription.
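
If you prefer to deploy from PowerShell instead of the portal, the following is a rough sketch; it assumes you've downloaded the quickstart template locally as `azuredeploy.json`, and the parameter names shown are illustrative, so match them to the template you downloaded.

```powershell
# Deploy the downloaded network template with the example values used above
# (the resource group name is a placeholder).
New-AzResourceGroupDeployment -ResourceGroupName '<resource-group-name>' `
    -TemplateFile './azuredeploy.json' `
    -TemplateParameterObject @{
        existingVNETName          = 'vnet-cloudshell-eastus'
        relayNamespaceName        = 'arn-cloudshell-eastus'
        azureContainerInstanceOID = '8fe7fd25-33fe-4f89-ade3-0e705fcf4370'
    }
```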
-## 3. Provision the VNET storage using the ARM template
+## 3. Create the virtual network storage using the ARM template
Use the [Azure Cloud Shell - VNet storage][09] template to create Cloud Shell resources in a virtual
-network. The template creates the storage account and assigns it to the private VNET.
+network. The template creates the storage account and assigns it to the private virtual network.
The ARM template requires specific information about the resources you created earlier, along with naming information for new resources.
with naming information for new resources.
Information needed for the template:

- **Subscription** - The name of the subscription containing the resource group for Azure Cloud
- Shell VNET.
+ Shell virtual network.
- **Resource Group** - The resource group name of either an existing or newly created resource group
- **Region** - Location of the resource group
-- **Existing VNET name** - The name of the virtual network created earlier
+- **Existing virtual network name** - The name of the virtual network created earlier
- **Existing Storage Subnet Name** - The name of the storage subnet created with the Network
  quickstart template
- **Existing Container Subnet Name** - The name of the container subnet created with the Network
Fill out the form with the following information:
| Instance details | Value |
| - | - |
| Region | Prefilled with your default region.<br>For this example, we're using `East US`. |
-| Existing VNET Name | For this example, we're using `vnet-cloudshell-eastus`. |
+| Existing Virtual Network Name | For this example, we're using `vnet-cloudshell-eastus`. |
| Existing Storage Subnet Name | Fill in the name of the resource created by the network template. |
| Existing Container Subnet Name | Fill in the name of the resource created by the network template. |
| Storage Account Name | Create a name for the new storage account.<br>For this example, we're using `myvnetstorage1138`. |
subscription.
## 4. Configuring Cloud Shell to use a virtual network
-After deploying your private Cloud Shell instance, each Cloud Shell user must change their
+After you have deployed your private Cloud Shell instance, each Cloud Shell user must change their
configuration to use the new private instance. If you have used the default Cloud Shell before deploying the private instance, you must reset your
cloud-shell Troubleshooting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-shell/troubleshooting.md
description: This article covers troubleshooting Cloud Shell common scenarios. ms.contributor: jahelmic Previously updated : 05/03/2023 Last updated : 09/29/2023 tags: azure-resource-manager
Azure Cloud Shell has the following known limitations:
### Quota limitations
-Azure Cloud Shell has a limit of 20 concurrent users per tenant per region. Opening more than 20
-simultaneous sessions produces a "Tenant User Over Quota" error. If you have a legitimate need to
-have more than 20 sessions open, such as for training sessions, contact Support to request a quota
-increase before your anticipated usage.
+Azure Cloud Shell has a limit of 20 concurrent users per tenant. Opening more than 20 simultaneous
+sessions produces a "Tenant User Over Quota" error. If you have a legitimate need to have more than
+20 sessions open, such as for training sessions, contact Support to request a quota increase before
+your anticipated usage.
Cloud Shell is provided as a free service for managing your Azure environment. It's not intended as a general-purpose computing platform. Excessive automated usage may be considered in breach of the Azure Terms
considerations include:
- With mounted storage, only modifications within the `clouddrive` directory are persisted. In Bash, your `$HOME` directory is also persisted.
-- Azure fileshares can be mounted only from within your [assigned region][05].
- - In Bash, run `env` to find your region set as `ACC_LOCATION`.
- Azure Files supports only locally redundant storage and geo-redundant storage accounts.

### Browser support
Azure Cloud Shell in Azure Government is only accessible through the Azure portal
<!-- link references --> [04]: https://docs.docker.com/desktop/
-[05]: persisting-shell-storage.md#mount-a-new-clouddrive
[06]: /powershell/microsoftgraph/migration-steps
communication-services Calling Chat https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/interop/calling-chat.md
As part of this preview, the Azure Communication Services SDKs can be used to bu
To enable calling and chat between your Communication Services users and Teams tenant, allow your tenant via the [form](https://forms.office.com/r/F3WLqPjw0D) and enable the connection between the tenant and Communication Services resource. --
-## Enable interoperability in your Teams tenant
-Azure AD user with [Teams administrator role](../../../active-directory/roles/permissions-reference.md#teams-administrator) can run PowerShell cmdlet with MicrosoftTeams module to enable the Communication Services resource in the tenant.
-
-### 1. Prepare the Microsoft Teams module
-
-First, open the PowerShell and validate the existence of the Teams module with the following command:
-
-```script
-Get-module *teams*
-```
-
-If you don't see the `MicrosoftTeams` module, install it first. To install the module, you need to run PowerShell as an administrator. Then run the following command:
-
-```script
- Install-Module -Name MicrosoftTeams
-```
-
-You'll be informed about the modules that will be installed, which you can confirm with a `Y` or `A` answer. If the module is installed but is outdated, you can run the following command to update the module:
-
-```script
- Update-Module MicrosoftTeams
-```
-
-### 2. Connect to Microsoft Teams module
-
-When the module is installed and ready, you can connect to the MicrosoftTeams module with the following command. You'll be prompted with an interactive window to log in. The user account that you're going to use needs to have Teams administrator permissions. Otherwise, you might get an `access denied` response in the next steps.
-
-```script
-Connect-MicrosoftTeams
-```
-
-### 3. Enable tenant configuration
-
-Interoperability with Communication Services resources is controlled via tenant configuration and assigned policy. Teams tenant has a single tenant configuration, and Teams users have assigned global policy or custom policy. The following table shows possible scenarios and impacts on interoperability.
-
-| Tenant configuration | Global policy | Custom policy | Assigned policy | Interoperability |
-| | | | | |
-| True | True | True | Global | **Enabled** |
-| True | True | True | Custom | **Enabled** |
-| True | True | False | Global | **Enabled** |
-| True | True | False | Custom | Disabled |
-| True | False | True | Global | Disabled |
-| True | False | True | Custom | **Enabled** |
-| True | False | False | Global | Disabled |
-| True | False | False | Custom | Disabled |
-| False | True | True | Global | Disabled |
-| False | True | True | Custom | Disabled |
-| False | True | False | Global | Disabled |
-| False | True | False | Custom | Disabled |
-| False | False | True | Global | Disabled |
-| False | False | True | Custom | Disabled |
-| False | False | False | Global | Disabled |
-| False | False | False | Custom | Disabled |
-
-After successful login, you can run the cmdlet [Set-CsTeamsAcsFederationConfiguration](/powershell/module/teams/set-csteamsacsfederationconfiguration) to enable Communication Services resource in your tenant. Replace the text `IMMUTABLE_RESOURCE_ID` with an immutable resource ID in your communication resource. You can find more details on how to get this information [here](../troubleshooting-info.md#getting-immutable-resource-id).
-
-```script
-$allowlist = @('IMMUTABLE_RESOURCE_ID')
-Set-CsTeamsAcsFederationConfiguration -EnableAcsUsers $True -AllowedAcsResources $allowlist
-```
-
-### 4. Enable tenant policy
-
-Each Teams user has assigned an `External Access Policy` that determines whether Communication Services users can call this Teams user. Use cmdlet
-[Set-CsExternalAccessPolicy](/powershell/module/skype/set-csexternalaccesspolicy) to ensure that the policy assigned to the Teams user has set `EnableAcsFederationAccess` to `$true`
-
-```script
-Set-CsExternalAccessPolicy -Identity Global -EnableAcsFederationAccess $true
-```
## Get Teams user ID

To start a call or chat with a Teams user or Teams Voice application, you need an identifier of the target. You have the following options to retrieve the ID:
communication-services Spotlight https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/how-tos/calling-sdk/spotlight.md
Last updated 03/01/2023
+zone_pivot_groups: acs-plat-web-ios-android-windows
# Spotlight states
-In this article, you'll learn how to implement Microsoft Teams spotlight capability with Azure Communication Services Calling SDKs. This capability allows users in the call or meeting to pin and unpin videos for everyone.
-
+In this article, you learn how to implement Microsoft Teams spotlight capability with Azure Communication Services Calling SDKs. This capability allows users in the call or meeting to pin and unpin videos for everyone.
Since the video stream resolution of a participant is increased when spotlighted, note that the settings configured in [Video Constraints](../../concepts/voice-video-calling/video-constraints.md) also apply to spotlight.

## Prerequisites

- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
Since the video stream resolution of a participant is increased when spotlighted
- A user access token to enable the calling client. For more information, see [Create and manage access tokens](../../quickstarts/identity/access-tokens.md).
- Optional: Complete the quickstart to [add voice calling to your application](../../quickstarts/voice-video-calling/getting-started-with-calling.md)
-Communication Services or Microsoft 365 users can call the spotlight APIs based on role type and conversation type
-
-**In a one to one call or group call scenario, the following APIs are supported for both Communication Services and Microsoft 365 users**
-|APIs| Organizer | Presenter | Attendee |
-|-|--|--|--|
-| startSpotlight | ✔️ | ✔️ | ✔️ |
-| stopSpotlight | ✔️ | ✔️ | ✔️ |
-| stopAllSpotlight | ✔️ | ✔️ | ✔️ |
-| getSpotlightedParticipants | ✔️ | ✔️ | ✔️ |
-**For meeting scenario the following APIs are supported for both Communication Services and Microsoft 365 users**
-|APIs| Organizer | Presenter | Attendee |
-|-|--|--|--|
-| startSpotlight | ✔️ | ✔️ | |
-| stopSpotlight | ✔️ | ✔️ | ✔️ |
-| stopAllSpotlight | ✔️ | ✔️ | |
-| getSpotlightedParticipants | ✔️ | ✔️ | ✔️ |
## Next steps

- [Learn how to manage calls](./manage-calls.md)
communication-services Get Started Teams Auto Attendant https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/voice-video-calling/get-started-teams-auto-attendant.md
In this quickstart you are going to learn how to start a call from Azure Communi
If you'd like to skip ahead to the end, you can download this quickstart as a sample on [GitHub](https://github.com/Azure-Samples/communication-services-javascript-quickstarts/tree/main/add-1-on-1-cte-video-calling).

## Create or select Teams Auto Attendant

Teams Auto Attendant is a system that provides automated call handling for incoming calls. It serves as a virtual receptionist, allowing callers to be automatically routed to the appropriate person or department without the need for a human operator. You can select an existing Auto Attendant or create a new one via the [Teams Admin Center](https://aka.ms/teamsadmincenter).
In the results, we're able to find the "ID" field:

```
"id": "31a011c2-2672-4dd0-b6f9-9334ef4999db"
```

## Clean up resources
communication-services Get Started Teams Call Queue https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/voice-video-calling/get-started-teams-call-queue.md
In this quickstart you are going to learn how to start a call from Azure Communi
If you'd like to skip ahead to the end, you can download this quickstart as a sample on [GitHub](https://github.com/Azure-Samples/communication-services-javascript-quickstarts/tree/main/add-1-on-1-cte-video-calling).

## Create or select Teams Call Queue

Teams Call Queue is a feature in Microsoft Teams that efficiently distributes incoming calls among a group of designated users or agents. It's useful for customer support or call center scenarios. Calls are placed in a queue and assigned to the next available agent based on a predetermined routing method. Agents receive notifications and can handle calls using Teams' call controls. The feature offers reporting and analytics for performance tracking. It simplifies call handling, ensures a consistent customer experience, and optimizes agent productivity. You can select an existing Call Queue or create a new one via the [Teams Admin Center](https://aka.ms/teamsadmincenter).
communication-services Contact Center https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/tutorials/contact-center.md
+
+ Title: Contact centers with Azure Communication Services
+description: Learn concepts for contact center apps
+Last updated : 09/27/2023
+# Contact center
+
+This tutorial describes concepts for **contact center** applications. After completing this tutorial, you'll understand common use cases that a contact center application delivers, the Microsoft technologies that can help you build those use cases, and you'll have built a sample application integrating Microsoft Teams and Azure that you can use to demo and explore further.
+
+Contact center applications are focused on unscheduled communication between **consumers** and **agents**. The **organizational boundary** between consumers and agents, and the **unscheduled** nature of the interaction, are key attributes of contact center applications.
+
+Azure and Teams are interoperable. This interoperability gives organizations choice in how they interact with customers using the Microsoft Cloud. Three examples include:
+
+- **Teams Phone** provides a zero-code suite for customer communication using [Teams Cloud Auto attendants and Call queues](/microsoftteams/plan-auto-attendant-call-queue) and [Click-to-call](https://techcommunity.microsoft.com/t5/microsoft-teams-blog/what-s-new-in-microsoft-teams-at-enterprise-connect-2023/ba-p/3774374).
+- **Teams + Azure hybrid.** Combine Teams with a custom Azure application to manage or route communication, or for a custom consumer or agent experience. This document currently focuses on these scenarios.
+- **Azure custom.** Build the entire customer engagement experience on Azure primitives: the business experience, the consumer experience, the job routing, and the intelligent insights. Azure Communication Services provides several products for custom solutions such as:
+  - [Call Automation](/azure/communication-services/concepts/call-automation/call-automation-teams-interop): Build AI-assisted programmable calling workflows
+  - [Job Router](/azure/communication-services/concepts/router/concepts): Match jobs to the most suitable worker
+  - [UI Library](/azure/communication-services/concepts/ui-library/ui-library-overview?pivots=platform-web): Develop custom web and mobile experiences for end users
+
+Developers interested in scheduled business-to-consumer interactions should read our [Virtual Visits](/azure/communication-services/tutorials/virtual-visits) tutorial. This article focuses on *inbound* engagement, where the consumer initiates communication. Many businesses also have *outbound* communication needs, for which we recommend the outbound customer engagement tutorial.
+
+The term "contact center" captures a large family of applications diverse across scale, channels, and organizational approach:
+
+- **Scale**. Small businesses may have a small number of employees operating as agents in a limited role, for example, a restaurant offering a phone number for reservations, while an airline may have thousands of employees and vendors providing a 24/7 contact center.
+- **Channel**. Organizations can reach consumers through the phone system, apps, SMS, or consumer communication platforms such as WhatsApp.
+- **Organizational approach**. Most businesses have employees operate as agents using Teams or licensed contact center as a service (CCaaS) software. Other businesses may outsource the agent role or use specialized service providers who fully operate contact centers as a service.
+
+## User Personas
+
+No matter the industry, there are at least five personas involved in a contact center, along with certain tasks they accomplish:
+
+- **Designer**. The designer defines the consumer experience. What consumer questions, interactions, and needs does the contact center solve for? What channels are used? How is the consumer routed to different agent pools using bots or interactive voice response?
+- **Shift Manager**. The shift manager organizes agents. They monitor consumer satisfaction and other business outcomes.
+- **Agent**. The human being who engages consumers.
+- **Expert**. A human being to whom agents escalate.
+- **Consumer**. The human being, external to the organization, that initiates communication. Some companies operate internal contact centers, for example an IT support organization that receives requests from users (consumers).
+
+The rest of this article provides the high-level architecture and data flows for two different contact center designs:
+
+1. Consumers going to a website (or mobile app), talking to a chat bot, and then starting a voice call answered by a Teams-hosted agent.
+2. Consumers initiating a voice interaction by calling a phone number from an organization's Teams phone system.
+
+These examples build on each other in increasing complexity. GitHub and the Azure Communication Services Sample Builder host sample code that matches these simplified architectures.
+
+## Chat on a website with a bot agent
+
+Communication Services Chat applications can be integrated with an Azure Bot Service. The Bot Service needs to be linked to a Communication Services resource using a channel in the Azure Portal. To learn more about this scenario, see [Add a bot to your chat app - An Azure Communication Services quickstart](/azure/communication-services/quickstarts/chat/quickstart-botframework-integration).
+
+![Data flow diagram for chat with a bot agent](media/contact-center/data-flow-diagram-chat-bot.png)
+
+### Dataflow
+
+1. An Azure Communication Services Chat channel is connected to an Azure Bot Service in Azure Portal by an administrator.
+2. A user clicks a widget in a client application to contact an agent.
+3. The Contact Center Service creates a Chat thread and adds the user ID for the bot to the thread.
+4. A user sends messages to and receives messages from the bot using the Azure Communication Services Chat SDK.
+5. The bot sends messages to and receives messages from the user using the Azure Communication Services Chat Channel.
+
+## Chat on a website that escalates to a voice call answered by a Teams agent
+
+A conversation between a user and a bot can be handed off to an agent in Teams. Optionally, a Teams Voice App such as an Auto Attendant or Call Queue can control the transition. To learn more about bot handoff integration models, see [Transition conversations from bot to human - Bot Service](/azure/bot-service/bot-service-design-pattern-handoff-human?view=azure-bot-service-4.0). To learn more about Teams Auto Attendants and Call Queues, see [Plan for Teams Auto attendants and Call queues - Microsoft Teams](/microsoftteams/plan-auto-attendant-call-queue).
+
+![Data flow diagram for chat escalating to a call](media/contact-center/data-flow-diagram-escalate-to-call.png)
+
+### Dataflow
+
+1. A user clicks a widget in the client application to contact an agent.
+2. The Contact Center Service creates a Chat thread and adds an Azure Bot to the thread.
+3. The user interacts with the Azure Bot by sending and receiving Chat messages.
+4. The Contact Center Service hands the user off to a Teams Call Queue or Auto Attendant.
+5. The Teams Voice App hands the user off to an employee acting as an agent using Teams. The user and the employee interact using audio, video, and screenshare.
+
+### Detailed capabilities
+
+The following list presents the set of features that are currently available for contact centers in Azure Communication Services. For detailed capability information, see [Azure Communication Services Calling SDK overview](/azure/communication-services/concepts/voice-video-calling/calling-sdk-features). Azure Communication Services Calling to Teams, including Teams Auto Attendant and Call Queue, requires setup to be completed as described in [Teams calling and chat interoperability](/azure/communication-services/concepts/interop/calling-chat).
+
+| Group of features | Capability | Public preview | General availability |
+|-|-|-|-|
+| DTMF Support in ACS UI SDK | Allows touch tone entry | ❌ | ✔️ |
+| Calling Capabilities | Audio and video | ✔️ | ✔️ |
+| | Screen sharing | ✔️ | ✔️ |
+| | Record the call | ✔️ | ✔️ |
+| | Park the call | ❌ | ❌ |
+| | Personal voicemail | ❌ | ✔️ |
+| Teams Auto Attendant | Answer call | ✔️ | ✔️ |
+| | Operator routing | ❌ | ✔️ |
+| | Speech recognition of menu options | ✔️1 | ✔️1 |
+| | Speech recognition of directory search | ✔️1 | ✔️1 |
+| | Power BI Reporting | ❌ | ✔️ |
+| Auto Attendant Redirects | Disconnect | ✔️ | ✔️ |
+| | Person in org | ❌ | ✔️2 |
+| | AA or CQ | ✔️ | ✔️ |
+| | External | ❌ | ✔️2 |
+| | Shared voicemail | ❌ | ✔️ |
+| Teams Call Queue | Music on hold | ✔️ | ✔️ |
+| | Answer call | ✔️ | ✔️ |
+| | Power BI Reporting | ❌ | ✔️ |
+| Overflow Redirects | Disconnect | ✔️ | ✔️ |
+| | Person in org | ❌ | ✔️2 |
+| | AA or CQ | ❌ | ✔️ |
+| | External | ❌ | ✔️2 |
+| | Shared voicemail | ❌ | ✔️ |
+| Timeout Redirects | Disconnect | ✔️ | ✔️ |
+| | Person in org | ❌ | ✔️2 |
+| | AA or CQ | ❌ | ✔️ |
+| | External | ❌ | ✔️2 |
+| | Shared voicemail | ❌ | ✔️ |
+| No Agents Redirects | Disconnect | ✔️ | ✔️ |
+| | Person in org | ❌ | ✔️2 |
+| | AA or CQ | ❌ | ✔️ |
+| | External | ❌ | ✔️2 |
+| | Shared voicemail | ❌ | ✔️ |
+
+1. Teams Auto Attendant must be voice enabled
+2. Licensing required
+
+### Additional Resources
+
+- [Teams calling and chat interoperability - An Azure Communication Services concept document](/azure/communication-services/concepts/interop/calling-chat)
+- [Quickstart: Join your calling app to a Teams call queue](/azure/communication-services/quickstarts/voice-video-calling/get-started-teams-call-queue)
+- [Quickstart - Teams Auto Attendant on Azure Communication Services](/azure/communication-services/quickstarts/voice-video-calling/get-started-teams-auto-attendant)
+- [Get started with a click to call experience using Azure Communication Services - An Azure Communication Services tutorial](/azure/communication-services/tutorials/calling-widget/calling-widget-overview)
+
+## Extend your contact center voice solution to Teams users
+
+Improve the efficiency of your contact center operations by inviting subject matter experts into your customer service workflows. With Azure Communication Services Call Automation API, developers can add subject matter experts, who use Microsoft Teams, to existing customer service calls to provide expert advice and help agents improve their first call resolution rate.
+This interoperability is offered over VoIP and makes it easy for developers to implement per-region multi-tenant trunks that maximize value and reduce telephony infrastructure overhead.
+
+![Data flow diagram for adding a Teams user to a call](media/contact-center/data-flow-diagram-add-teams-user-to-call.png)
+To learn more about Call Automation API and how a contact center can leverage this interoperability with Teams, see [Deliver expedient customer service by adding Microsoft Teams users in Call Automation workflows](/azure/communication-services/concepts/call-automation/call-automation-teams-interop).
+
container-apps Networking https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/networking.md
User Defined Routes (UDR) and controlled egress through NAT Gateway are supporte
- Configuring UDR is done outside of the Container Apps environment scope.
-- UDR isn't supported for external environments.

:::image type="content" source="media/networking/udr-architecture.png" alt-text="Diagram of how UDR is implemented for Container Apps.":::

Azure creates a default route table for your virtual networks upon creation. By implementing a user-defined route table, you can control how traffic is routed within your virtual network. For example, you can create a UDR that routes all traffic to the firewall.
container-apps User Defined Routes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/user-defined-routes.md
For more information on networking concepts in Container Apps, see [Networking E
## Prerequisites
-* **Internal environment**: An internal container app environment on the workload profiles environment that's integrated with a custom virtual network. When you create an internal container app environment, your container app environment has no public IP addresses, and all traffic is routed through the virtual network. For more information, see the [guide for how to create a container app environment on the workload profiles environment](./workload-profiles-manage-cli.md).
+* **Workload profiles environment**: A workload profiles environment that's integrated with a custom virtual network. For more information, see the [guide for how to create a container app environment on the workload profiles environment](./workload-profiles-manage-cli.md?pivots=aca-vnet-custom).
* **`curl` support**: Your container app must have a container that supports `curl` commands. In this how-to, you use `curl` to verify the container app is deployed correctly. If you don't have a container app with `curl` deployed, you can deploy the following container, which supports `curl`: `mcr.microsoft.com/k8se/quickstart:latest`. A sketch of such a deployment follows this list.
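
As a minimal sketch of such a deployment (resource names are placeholders, and the environment is assumed to already exist):

```powershell
# Deploy the curl-capable quickstart image into the workload profiles
# environment created in the prerequisites.
az containerapp create `
    --name curl-test-app `
    --resource-group myResourceGroup `
    --environment my-workload-profiles-env `
    --image 'mcr.microsoft.com/k8se/quickstart:latest'
```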
container-registry Container Registry Authentication Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-authentication-managed-identity.md
Deploy a default Ubuntu Azure virtual machine with [az vm create][az-vm-create].
az vm create \
    --resource-group myResourceGroup \
    --name myDockerVM \
-    --image UbuntuLTS \
+    --image Ubuntu2204 \
    --admin-username azureuser \
    --generate-ssh-keys
```
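
With the VM created, a likely next step, shown here only as a sketch, is to enable a system-assigned managed identity on it so the VM can authenticate to the registry:

```powershell
# Enable a system-assigned managed identity on the VM (sketch; the identity
# still needs a role such as AcrPull on the registry before pulls succeed).
az vm identity assign --resource-group myResourceGroup --name myDockerVM
```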
cosmos-db How To Setup Rbac https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/how-to-setup-rbac.md
This article is about role-based access control for data plane operations in Azure Cosmos DB for MongoDB.
-If you are using management plane operations, see [role-based access control](../role-based-access-control.md) applied to your management plane operations article.
+If you're using management plane operations, see the [role-based access control](../role-based-access-control.md) article for management plane operations.
Azure Cosmos DB for MongoDB exposes a built-in role-based access control (RBAC) system that lets you authorize your data requests with a fine-grained, role-based permission model. Users and roles reside within a database and are managed using the Azure CLI, Azure PowerShell, or Azure Resource Manager (ARM). ## Concepts ### Resource
-A resource is a collection or database to which we are applying access control rules.
+A resource is a collection or database to which we're applying access control rules.
### Privileges

Privileges are actions that can be performed on a specific resource. For example, "read access to collection xyz". Privileges are assigned to a specific role.
Privileges are actions that can be performed on a specific resource. For example
A role has one or more privileges. Roles are assigned to users (zero or more) to enable them to perform the actions defined in those privileges. Roles are stored within a single database.

### Diagnostic log auditing
-An additional column called `userId` has been added to the `MongoRequests` table in the Azure Portal Diagnostics feature. This column will identify which user performed which data plan operation. The value in this column is empty when RBAC is not enabled.
+Another column called `userId` has been added to the `MongoRequests` table in the Azure portal Diagnostics feature. This column identifies which user performed which data plane operation. The value in this column is empty when RBAC isn't enabled.
## Available Privileges

#### Query and Write
An additional column called `userId` has been added to the `MongoRequests` table
* listIndexes

## Built-in Roles
-These roles already exist on every database and do not need to be created.
+These roles already exist on every database and don't need to be created.
### read

Has the following privileges: changeStream, collStats, find, killCursors, listIndexes, listCollections
az cloud set -n AzureCloud
az login
az account set --subscription <your subscription ID>
```
-3. Enable the RBAC capability on your existing API for MongoDB database account. You'll need to [add the capability](how-to-configure-capabilities.md) "EnableMongoRoleBasedAccessControl" to your database account. RBAC can also be enabled via the features tab in the Azure portal instead.
+3. Enable the RBAC capability on your existing API for MongoDB database account. You need to [add the capability](how-to-configure-capabilities.md) "EnableMongoRoleBasedAccessControl" to your database account. RBAC can also be enabled via the features tab in the Azure portal instead.
If you prefer a new database account instead, create a new database account with the RBAC capability set to true.

```powershell
az cosmosdb create -n <account_name> -g <azure_resource_group> --kind MongoDB --capabilities EnableMongoRoleBasedAccessControl
```
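
Once the capability is enabled, users are created with the Azure CLI. The following is a sketch (the names and password are placeholders, and the body shape follows the `az cosmosdb mongodb user definition create` reference):

```powershell
# Create a user in database 'mydb' and grant it the built-in read role.
az cosmosdb mongodb user definition create `
    --account-name <account_name> `
    --resource-group <azure_resource_group> `
    --body '{"Id":"mydb.myuser","UserName":"myuser","Password":"<password>","DatabaseName":"mydb","CustomData":"","Mechanisms":"SCRAM-SHA-256","Roles":[{"Role":"read","Db":"mydb"}]}'
```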
az cosmosdb mongodb user definition delete --account-name <account-name> --resou
- The number of users and roles you can create must be less than 10,000.
- The commands listCollections, listDatabases, killCursors, and currentOp are excluded from RBAC.
-- Users and Roles across databases are not supported.
+- Users and Roles across databases aren't supported.
- A user's password can only be set or reset through the Azure CLI / Azure PowerShell.
- Configuring Users and Roles is only supported through Azure CLI / PowerShell.
-- Disabling primary/secondary key authentication is not supported. We recommend rotating your keys to prevent access when enabling RBAC.
+- Disabling primary/secondary key authentication isn't supported. We recommend rotating your keys to prevent access when enabling RBAC.
+- RBAC policies for Cosmos DB for MongoDB RU won't be automatically reinstated following a restore operation. You must reconfigure these policies after the restoration process is complete.
## Frequently asked questions (FAQs)

### Is it possible to manage role definitions and role assignments from the Azure portal?
-Azure portal support for role management is not available. However, RBAC can be enabled via the features tab in the Azure portal.
+Azure portal support for role management isn't available. However, RBAC can be enabled via the features tab in the Azure portal.
### How do I change a user's password?
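
A user's password can only be set or reset through the Azure CLI or Azure PowerShell, as noted in the limitations above. A minimal sketch using the CLI, with placeholder names (the body mirrors the create call, carrying the new password):

```powershell
# Reset a user's password by updating its user definition (sketch).
az cosmosdb mongodb user definition update `
    --account-name <account_name> `
    --resource-group <azure_resource_group> `
    --body '{"Id":"mydb.myuser","UserName":"myuser","Password":"<new_password>","DatabaseName":"mydb","CustomData":"","Mechanisms":"SCRAM-SHA-256","Roles":[{"Role":"read","Db":"mydb"}]}'
```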
cosmos-db Product Updates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/product-updates.md
Previously updated : 09/25/2023 Last updated : 09/27/2023 # Product updates for Azure Cosmos DB for PostgreSQL
Updates that change cluster internals, such as installing a [new minor PostgreSQ
### September 2023
+* General availability: [PostgreSQL 16](https://www.postgresql.org/docs/release/16.0/) support.
+ * See all supported PostgreSQL versions [here](./reference-versions.md#postgresql-versions).
+ * [Upgrade to PostgreSQL 16](./howto-upgrade.md)
+* General availability: [Citus 12.1 with new features and PostgreSQL 16 support](https://www.citusdata.com/updates/v12-1).
* General availability: Data Encryption at rest using [Customer Managed Keys](./concepts-customer-managed-keys.md) is now supported for all available regions.
  * See [this guide](./how-to-customer-managed-keys.md) for the steps to enable data encryption using customer managed keys.
* Preview: Geo-redundant backup and restore
Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/)
* [Azure Active Directory (Azure AD) authentication](./concepts-authentication.md#azure-active-directory-authentication-preview)
* [Azure CLI support for Azure Cosmos DB for PostgreSQL](/cli/azure/cosmosdb/postgres)
* Azure SDKs: [.NET](https://www.nuget.org/packages/Azure.ResourceManager.CosmosDBForPostgreSql/1.0.0-beta.1), [Go](https://pkg.go.dev/github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/cosmosforpostgresql/armcosmosforpostgresql@v0.1.0), [Java](https://central.sonatype.com/artifact/com.azure.resourcemanager/azure-resourcemanager-cosmosdbforpostgresql/1.0.0-beta.1/overview), [JavaScript](https://www.npmjs.com/package/@azure/arm-cosmosdbforpostgresql/v/1.0.0-beta.1), and [Python](https://pypi.org/project/azure-mgmt-cosmosdbforpostgresql/1.0.0b1/)
-* [Data encryption at rest using customer managed keys](./concepts-customer-managed-keys.md)
* [Database audit with pgAudit](./how-to-enable-audit.md)

## Contact us
cosmos-db Reference Extensions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/reference-extensions.md
Previously updated : 08/24/2023 Last updated : 09/27/2023 # PostgreSQL extensions in Azure Cosmos DB for PostgreSQL
Azure Cosmos DB for PostgreSQL currently supports a subset of key extensions as
The following tables list the standard PostgreSQL extensions that are supported on Azure Cosmos DB for PostgreSQL. This information is also available by running `SELECT * FROM pg_available_extensions;`.
-The versions of each extension installed in a cluster sometimes differ based on the version of PostgreSQL (11, 12, or 13). The tables list extension versions per database version.
+The versions of each extension installed in a cluster sometimes differ based on the version of PostgreSQL (11, 12, 13, 14, 15, or 16). The tables list extension versions per database version.
### Citus extension

> [!div class="mx-tableFixed"]
-> | **Extension** | **Description** | **PG 11** | **PG 12** | **PG 13** | **PG 14** | **PG 15** |
-> ||||||
-> | [citus](https://github.com/citusdata/citus) | Citus distributed database. | 9.5.12 | 10.2.9 | 11.3.0 | 12.0.0 | 12.0.0 |
+> | **Extension** | **Description** | **PG 11** | **PG 12** | **PG 13** | **PG 14** | **PG 15** | **PG 16** |
+> |---|---|---|---|---|---|---|---|
+> | [citus](https://github.com/citusdata/citus) | Citus distributed database. | 9.5 | 10.2 | 11.3 | 12.1 | 12.1 | 12.1 |
### Data types extensions

> [!div class="mx-tableFixed"]
-> | **Extension** | **Description** | **PG 11** | **PG 12** | **PG 13** | **PG 14** | **PG 15** |
-> ||||||
-> | [citext](https://www.postgresql.org/docs/current/static/citext.html) | Provides a case-insensitive character string type. | 1.5 | 1.6 | 1.6 | 1.6 | 1.6 |
-> | [cube](https://www.postgresql.org/docs/current/static/cube.html) | Provides a data type for multidimensional cubes. | 1.4 | 1.4 | 1.4 | 1.5 | 1.5 |
-> | [hll](https://github.com/citusdata/postgresql-hll) | Provides a HyperLogLog data structure. | 2.16 | 2.16 | 2.16 | 2.16 | 2.16 |
-> | [hstore](https://www.postgresql.org/docs/current/static/hstore.html) | Provides a data type for storing sets of key-value pairs. | 1.5 | 1.6 | 1.7 | 1.8 | 1.8 |
-> | [isn](https://www.postgresql.org/docs/current/static/isn.html) | Provides data types for international product numbering standards. | 1.2 | 1.2 | 1.2 | 1.2 | 1.2 |
-> | [lo](https://www.postgresql.org/docs/current/lo.html) | Large Object maintenance. | 1.1 | 1.1 | 1.1 | 1.1 | 1.1 |
-> | [ltree](https://www.postgresql.org/docs/current/static/ltree.html) | Provides a data type for hierarchical tree-like structures. | 1.1 | 1.1 | 1.2 | 1.2 | 1.2 |
-> | [seg](https://www.postgresql.org/docs/current/seg.html) | Data type for representing line segments or floating-point intervals. | 1.3 | 1.3 | 1.3 | 1.4 | 1.4 |
-> | [tdigest](https://github.com/tvondra/tdigest) | Data type for on-line accumulation of rank-based statistics such as quantiles and trimmed means. | 1.4.0 | 1.4.0 | 1.4.0 | 1.4.0 | 1.4.0 |
-> | [topn](https://github.com/citusdata/postgresql-topn/) | Type for top-n JSONB. | 2.5.0 | 2.5.0 | 2.5.0 | 2.5.0 | 2.5.0 |
+> | **Extension** | **Description** | **PG 11** | **PG 12** | **PG 13** | **PG 14** | **PG 15** | **PG 16** |
+> |---|---|---|---|---|---|---|---|
+> | [citext](https://www.postgresql.org/docs/current/static/citext.html) | Provides a case-insensitive character string type. | 1.5 | 1.6 | 1.6 | 1.6 | 1.6 | 1.6 |
+> | [cube](https://www.postgresql.org/docs/current/static/cube.html) | Provides a data type for multidimensional cubes. | 1.4 | 1.4 | 1.4 | 1.5 | 1.5 | 1.5 |
+> | [hll](https://github.com/citusdata/postgresql-hll) | Provides a HyperLogLog data structure. | 2.18 | 2.18 | 2.18 | 2.18 | 2.18 | 2.18 |
+> | [hstore](https://www.postgresql.org/docs/current/static/hstore.html) | Provides a data type for storing sets of key-value pairs. | 1.5 | 1.6 | 1.7 | 1.8 | 1.8 | 1.8 |
+> | [isn](https://www.postgresql.org/docs/current/static/isn.html) | Provides data types for international product numbering standards. | 1.2 | 1.2 | 1.2 | 1.2 | 1.2 | 1.2 |
+> | [lo](https://www.postgresql.org/docs/current/lo.html) | Large Object maintenance. | 1.1 | 1.1 | 1.1 | 1.1 | 1.1 | 1.1 |
+> | [ltree](https://www.postgresql.org/docs/current/static/ltree.html) | Provides a data type for hierarchical tree-like structures. | 1.1 | 1.1 | 1.2 | 1.2 | 1.2 | 1.2 |
+> | [seg](https://www.postgresql.org/docs/current/seg.html) | Data type for representing line segments or floating-point intervals. | 1.3 | 1.3 | 1.3 | 1.4 | 1.4 | 1.4 |
+> | [tdigest](https://github.com/tvondra/tdigest) | Data type for on-line accumulation of rank-based statistics such as quantiles and trimmed means. | 1.4.0 | 1.4.0 | 1.4.0 | 1.4.0 | 1.4.0 | 1.4.0 |
+> | [topn](https://github.com/citusdata/postgresql-topn/) | Type for top-n JSONB. | 2.6.0 | 2.6.0 | 2.6.0 | 2.6.0 | 2.6.0 | 2.6.0 |
### Full-text search extensions

> [!div class="mx-tableFixed"]
-> | **Extension** | **Description** | **PG 11** | **PG 12** | **PG 13** | **PG 14** | **PG 15** |
-> ||||||
-> | [dict\_int](https://www.postgresql.org/docs/current/static/dict-int.html) | Provides a text search dictionary template for integers. | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 |
-> | [dict\_xsyn](https://www.postgresql.org/docs/current/dict-xsyn.html) | Text search dictionary template for extended synonym processing. | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 |
-> | [unaccent](https://www.postgresql.org/docs/current/static/unaccent.html) | A text search dictionary that removes accents (diacritic signs) from lexemes. | 1.1 | 1.1 | 1.1 | 1.1 | 1.1 |
+> | **Extension** | **Description** | **PG 11** | **PG 12** | **PG 13** | **PG 14** | **PG 15** | **PG 16** |
+> |---|---|---|---|---|---|---|---|
+> | [dict\_int](https://www.postgresql.org/docs/current/static/dict-int.html) | Provides a text search dictionary template for integers. | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 |
+> | [dict\_xsyn](https://www.postgresql.org/docs/current/dict-xsyn.html) | Text search dictionary template for extended synonym processing. | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 |
+> | [unaccent](https://www.postgresql.org/docs/current/static/unaccent.html) | A text search dictionary that removes accents (diacritic signs) from lexemes. | 1.1 | 1.1 | 1.1 | 1.1 | 1.1 | 1.1 |
### Functions extensions

> [!div class="mx-tableFixed"]
-> | **Extension** | **Description** | **PG 11** | **PG 12** | **PG 13** | **PG 14** | **PG 15** |
-> ||||||
-> | [autoinc](https://www.postgresql.org/docs/current/contrib-spi.html#id-1.11.7.45.7) | Functions for autoincrementing fields. | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 |
-> | [earthdistance](https://www.postgresql.org/docs/current/static/earthdistance.html) | Provides a means to calculate great-circle distances on the surface of the Earth. | 1.1 | 1.1 | 1.1 | 1.1 | 1.1 |
-> | [fuzzystrmatch](https://www.postgresql.org/docs/current/static/fuzzystrmatch.html) | Provides several functions to determine similarities and distance between strings. | 1.1 | 1.1 | 1.1 | 1.1 | 1.1 |
-> | [insert\_username](https://www.postgresql.org/docs/current/contrib-spi.html#id-1.11.7.45.8) | Functions for tracking who changed a table. | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 |
-> | [intagg](https://www.postgresql.org/docs/current/intagg.html) | Integer aggregator and enumerator (obsolete). | 1.1 | 1.1 | 1.1 | 1.1 | 1.1 |
-> | [intarray](https://www.postgresql.org/docs/current/static/intarray.html) | Provides functions and operators for manipulating null-free arrays of integers. | 1.2 | 1.2 | 1.3 | 1.5 | 1.5 |
-> | [moddatetime](https://www.postgresql.org/docs/current/contrib-spi.html#id-1.11.7.45.9) | Functions for tracking last modification time. | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 |
-> | [pg\_partman](https://pgxn.org/dist/pg_partman/doc/pg_partman.html) | Manages partitioned tables by time or ID. | 4.7.3 | 4.7.3 | 4.7.3 | 4.7.3 | 4.7.3 |
-> | [pg\_surgery](https://www.postgresql.org/docs/current/pgsurgery.html) | Functions to perform surgery on a damaged relation. | | | | 1.0 | 1.0 |
-> | [pg\_trgm](https://www.postgresql.org/docs/current/static/pgtrgm.html) | Provides functions and operators for determining the similarity of alphanumeric text based on trigram matching. | 1.4 | 1.4 | 1.5 | 1.6 | 1.6 |
-> | [pgcrypto](https://www.postgresql.org/docs/current/static/pgcrypto.html) | Provides cryptographic functions. | 1.3 | 1.3 | 1.3 | 1.3 | 1.3 |
-> | [refint](https://www.postgresql.org/docs/current/contrib-spi.html#id-1.11.7.45.5) | Functions for implementing referential integrity (obsolete). | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 |
-> | [tablefunc](https://www.postgresql.org/docs/current/static/tablefunc.html) | Provides functions that manipulate whole tables, including crosstab. | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 |
-> | [tcn](https://www.postgresql.org/docs/current/tcn.html) | Triggered change notifications. | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 |
+> | **Extension** | **Description** | **PG 11** | **PG 12** | **PG 13** | **PG 14** | **PG 15** | **PG 16** |
+> |---|---|---|---|---|---|---|---|
+> | [autoinc](https://www.postgresql.org/docs/current/contrib-spi.html#id-1.11.7.45.7) | Functions for autoincrementing fields. | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 |
+> | [earthdistance](https://www.postgresql.org/docs/current/static/earthdistance.html) | Provides a means to calculate great-circle distances on the surface of the Earth. | 1.1 | 1.1 | 1.1 | 1.1 | 1.1 | 1.1 |
+> | [fuzzystrmatch](https://www.postgresql.org/docs/current/static/fuzzystrmatch.html) | Provides several functions to determine similarities and distance between strings. | 1.1 | 1.1 | 1.1 | 1.1 | 1.1 | 1.2 |
+> | [insert\_username](https://www.postgresql.org/docs/current/contrib-spi.html#id-1.11.7.45.8) | Functions for tracking who changed a table. | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 |
+> | [intagg](https://www.postgresql.org/docs/current/intagg.html) | Integer aggregator and enumerator (obsolete). | 1.1 | 1.1 | 1.1 | 1.1 | 1.1 | 1.1 |
+> | [intarray](https://www.postgresql.org/docs/current/static/intarray.html) | Provides functions and operators for manipulating null-free arrays of integers. | 1.2 | 1.2 | 1.3 | 1.5 | 1.5 | 1.5 |
+> | [moddatetime](https://www.postgresql.org/docs/current/contrib-spi.html#id-1.11.7.45.9) | Functions for tracking last modification time. | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 |
+> | [pg\_partman](https://pgxn.org/dist/pg_partman/doc/pg_partman.html) | Manages partitioned tables by time or ID. | 4.7.4 | 4.7.4 | 4.7.4 | 4.7.4 | 4.7.4 | 4.7.4 |
+> | [pg\_surgery](https://www.postgresql.org/docs/current/pgsurgery.html) | Functions to perform surgery on a damaged relation. | | | | 1.0 | 1.0 | 1.0 |
+> | [pg\_trgm](https://www.postgresql.org/docs/current/static/pgtrgm.html) | Provides functions and operators for determining the similarity of alphanumeric text based on trigram matching. | 1.4 | 1.4 | 1.5 | 1.6 | 1.6 | 1.6 |
+> | [pgcrypto](https://www.postgresql.org/docs/current/static/pgcrypto.html) | Provides cryptographic functions. | 1.3 | 1.3 | 1.3 | 1.3 | 1.3 | 1.3 |
+> | [refint](https://www.postgresql.org/docs/current/contrib-spi.html#id-1.11.7.45.5) | Functions for implementing referential integrity (obsolete). | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 |
+> | [tablefunc](https://www.postgresql.org/docs/current/static/tablefunc.html) | Provides functions that manipulate whole tables, including crosstab. | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 |
+> | [tcn](https://www.postgresql.org/docs/current/tcn.html) | Triggered change notifications. | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 |
> | [timetravel](https://www.postgresql.org/docs/current/contrib-spi.html#id-1.11.7.45.6) | Functions for implementing time travel. | 1.0 | | | | | |
-> | [uuid-ossp](https://www.postgresql.org/docs/current/static/uuid-ossp.html) | Generates universally unique identifiers (UUIDs). | 1.1 | 1.1 | 1.1 | 1.1 | 1.1 |
+> | [uuid-ossp](https://www.postgresql.org/docs/current/static/uuid-ossp.html) | Generates universally unique identifiers (UUIDs). | 1.1 | 1.1 | 1.1 | 1.1 | 1.1 | 1.1 |
### Index types extensions

> [!div class="mx-tableFixed"]
-> | **Extension** | **Description** | **PG 11** | **PG 12** | **PG 13** | **PG 14** | **PG 15** |
-> ||||||
-> | [bloom](https://www.postgresql.org/docs/current/bloom.html) | Bloom access method - signature file-based index. | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 |
-> | [btree\_gin](https://www.postgresql.org/docs/current/static/btree-gin.html) | Provides sample GIN operator classes that implement B-tree-like behavior for certain data types. | 1.3 | 1.3 | 1.3 | 1.3 | 1.3 |
-> | [btree\_gist](https://www.postgresql.org/docs/current/static/btree-gist.html) | Provides GiST index operator classes that implement B-tree. | 1.5 | 1.5 | 1.5 | 1.6 | 1.7 |
+> | **Extension** | **Description** | **PG 11** | **PG 12** | **PG 13** | **PG 14** | **PG 15** | **PG 16** |
+> |---|---|---|---|---|---|---|---|
+> | [bloom](https://www.postgresql.org/docs/current/bloom.html) | Bloom access method - signature file-based index. | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 |
+> | [btree\_gin](https://www.postgresql.org/docs/current/static/btree-gin.html) | Provides sample GIN operator classes that implement B-tree-like behavior for certain data types. | 1.3 | 1.3 | 1.3 | 1.3 | 1.3 | 1.3 |
+> | [btree\_gist](https://www.postgresql.org/docs/current/static/btree-gist.html) | Provides GiST index operator classes that implement B-tree. | 1.5 | 1.5 | 1.5 | 1.6 | 1.7 | 1.7 |
### Language extensions

> [!div class="mx-tableFixed"]
-> | **Extension** | **Description** | **PG 11** | **PG 12** | **PG 13** | **PG 14** | **PG 15** |
-> ||||||
-> | [plpgsql](https://www.postgresql.org/docs/current/static/plpgsql.html) | PL/pgSQL loadable procedural language. | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 |
+> | **Extension** | **Description** | **PG 11** | **PG 12** | **PG 13** | **PG 14** | **PG 15** | **PG 16** |
+> |---|---|---|---|---|---|---|---|
+> | [plpgsql](https://www.postgresql.org/docs/current/static/plpgsql.html) | PL/pgSQL loadable procedural language. | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 |
### Miscellaneous extensions

> [!div class="mx-tableFixed"]
-> | **Extension** | **Description** | **PG 11** | **PG 12** | **PG 13** | **PG 14** | **PG 15** |
-> ||||||
-> | [amcheck](https://www.postgresql.org/docs/current/amcheck.html) | Functions for verifying relation integrity. | 1.1 | 1.2 | 1.2 | 1.3 | 1.3 |
-> | [dblink](https://www.postgresql.org/docs/current/dblink.html) | A module that supports connections to other PostgreSQL databases from within a database session. See the "dblink and postgres_fdw" section for information about this extension. | 1.2 | 1.2 | 1.2 | 1.2 | 1.2 |
-> | [old\_snapshot](https://www.postgresql.org/docs/current/oldsnapshot.html) | Allows inspection of the server state that is used to implement old_snapshot_threshold. | | | | 1.0 | 1.0 |
-> | [pageinspect](https://www.postgresql.org/docs/current/pageinspect.html) | Inspect the contents of database pages at a low level. | 1.7 | 1.7 | 1.8 | 1.9 | 1.10 |
-> | [pg\_azure\_storage](howto-ingest-azure-blob-storage.md) | Azure integration for PostgreSQL. | | | 1.2 | 1.2 | 1.2 |
-> | [pg\_buffercache](https://www.postgresql.org/docs/current/static/pgbuffercache.html) | Provides a means for examining what's happening in the shared buffer cache in real time. | 1.3 | 1.3 | 1.3 | 1.3 | 1.3 |
-> | [pg\_cron](https://github.com/citusdata/pg_cron) | Job scheduler for PostgreSQL. | 1.5 | 1.5 | 1.5 | 1.5 | 1.5 |
-> | [pg\_freespacemap](https://www.postgresql.org/docs/current/pgfreespacemap.html) | Examine the free space map (FSM). | 1.2 | 1.2 | 1.2 | 1.2 | 1.2 |
-> | [pg\_prewarm](https://www.postgresql.org/docs/current/static/pgprewarm.html) | Provides a way to load relation data into the buffer cache. | 1.2 | 1.2 | 1.2 | 1.2 | 1.2 |
-> | [pg\_stat\_statements](https://www.postgresql.org/docs/current/static/pgstatstatements.html) | Provides a means for tracking execution statistics of all SQL statements executed by a server. See the "pg_stat_statements" section for information about this extension. | 1.6 | 1.7 | 1.8 | 1.9 | 1.10 |
-> | [pg\_visibility](https://www.postgresql.org/docs/current/pgvisibility.html) | Examine the visibility map (VM) and page-level visibility information. | 1.2 | 1.2 | 1.2 | 1.2 | 1.2 |
-> | [pgrowlocks](https://www.postgresql.org/docs/current/static/pgrowlocks.html) | Provides a means for showing row-level locking information. | 1.2 | 1.2 | 1.2 | 1.2 | 1.2 |
-> | [pgstattuple](https://www.postgresql.org/docs/current/static/pgstattuple.html) | Provides a means for showing tuple-level statistics. | 1.5 | 1.5 | 1.5 | 1.5 | 1.5 |
-> | [postgres\_fdw](https://www.postgresql.org/docs/current/static/postgres-fdw.html) | Foreign-data wrapper used to access data stored in external PostgreSQL servers. See the "dblink and postgres_fdw" section for information about this extension.| 1.0 | 1.0 | 1.0 | 1.1 | 1.1 |
-> | [sslinfo](https://www.postgresql.org/docs/current/sslinfo.html) | Information about TLS/SSL certificates. | 1.2 | 1.2 | 1.2 | 1.2 | 1.2 |
-> | [tsm\_system\_rows](https://www.postgresql.org/docs/current/tsm-system-rows.html) | TABLESAMPLE method, which accepts number of rows as a limit. | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 |
-> | [tsm\_system\_time](https://www.postgresql.org/docs/current/tsm-system-time.html) | TABLESAMPLE method, which accepts time in milliseconds as a limit. | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 |
-> | [xml2](https://www.postgresql.org/docs/current/xml2.html) | XPath querying and XSLT. | 1.1 | 1.1 | 1.1 | 1.1 | 1.1 |
+> | **Extension** | **Description** | **PG 11** | **PG 12** | **PG 13** | **PG 14** | **PG 15** | **PG 16** |
+> |---|---|---|---|---|---|---|---|
+> | [amcheck](https://www.postgresql.org/docs/current/amcheck.html) | Functions for verifying relation integrity. | 1.1 | 1.2 | 1.2 | 1.3 | 1.3 | 1.3 |
+> | [dblink](https://www.postgresql.org/docs/current/dblink.html) | A module that supports connections to other PostgreSQL databases from within a database session. See the "dblink and postgres_fdw" section for information about this extension. | 1.2 | 1.2 | 1.2 | 1.2 | 1.2 | 1.2 |
+> | [old\_snapshot](https://www.postgresql.org/docs/current/oldsnapshot.html) | Allows inspection of the server state that is used to implement old_snapshot_threshold. | | | | 1.0 | 1.0 | 1.0 |
+> | [pageinspect](https://www.postgresql.org/docs/current/pageinspect.html) | Inspect the contents of database pages at a low level. | 1.7 | 1.7 | 1.8 | 1.9 | 1.11 | 1.12 |
+> | [pg\_azure\_storage](howto-ingest-azure-blob-storage.md) | Azure integration for PostgreSQL. | | | 1.3 | 1.3 | 1.3 | 1.3 |
+> | [pg\_buffercache](https://www.postgresql.org/docs/current/static/pgbuffercache.html) | Provides a means for examining what's happening in the shared buffer cache in real time. | 1.3 | 1.3 | 1.3 | 1.3 | 1.3 | 1.4 |
+> | [pg\_cron](https://github.com/citusdata/pg_cron) | Job scheduler for PostgreSQL. | 1.5 | 1.5 | 1.5 | 1.5 | 1.5 | 1.5 |
+> | [pg\_freespacemap](https://www.postgresql.org/docs/current/pgfreespacemap.html) | Examine the free space map (FSM). | 1.2 | 1.2 | 1.2 | 1.2 | 1.2 | 1.2 |
+> | [pg\_prewarm](https://www.postgresql.org/docs/current/static/pgprewarm.html) | Provides a way to load relation data into the buffer cache. | 1.2 | 1.2 | 1.2 | 1.2 | 1.2 | 1.2 |
+> | [pg\_stat\_statements](https://www.postgresql.org/docs/current/static/pgstatstatements.html) | Provides a means for tracking execution statistics of all SQL statements executed by a server. See the "pg_stat_statements" section for information about this extension. | 1.6 | 1.7 | 1.8 | 1.9 | 1.10 | 1.10 |
+> | [pg\_visibility](https://www.postgresql.org/docs/current/pgvisibility.html) | Examine the visibility map (VM) and page-level visibility information. | 1.2 | 1.2 | 1.2 | 1.2 | 1.2 | 1.2 |
+> | [pgrowlocks](https://www.postgresql.org/docs/current/static/pgrowlocks.html) | Provides a means for showing row-level locking information. | 1.2 | 1.2 | 1.2 | 1.2 | 1.2 | 1.2 |
+> | [pgstattuple](https://www.postgresql.org/docs/current/static/pgstattuple.html) | Provides a means for showing tuple-level statistics. | 1.5 | 1.5 | 1.5 | 1.5 | 1.5 | 1.5 |
+> | [postgres\_fdw](https://www.postgresql.org/docs/current/static/postgres-fdw.html) | Foreign-data wrapper used to access data stored in external PostgreSQL servers. See the "dblink and postgres_fdw" section for information about this extension.| 1.0 | 1.0 | 1.0 | 1.1 | 1.1 | 1.1 |
+> | [sslinfo](https://www.postgresql.org/docs/current/sslinfo.html) | Information about TLS/SSL certificates. | 1.2 | 1.2 | 1.2 | 1.2 | 1.2 | 1.2 |
+> | [tsm\_system\_rows](https://www.postgresql.org/docs/current/tsm-system-rows.html) | TABLESAMPLE method, which accepts number of rows as a limit. | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 |
+> | [tsm\_system\_time](https://www.postgresql.org/docs/current/tsm-system-time.html) | TABLESAMPLE method, which accepts time in milliseconds as a limit. | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 |
+> | [xml2](https://www.postgresql.org/docs/current/xml2.html) | XPath querying and XSLT. | 1.1 | 1.1 | 1.1 | 1.1 | 1.1 | 1.1 |
### Pgvector extension > [!div class="mx-tableFixed"]
-> | **Extension** | **Description** | **PG 11** | **PG 12** | **PG 13** | **PG 14** | **PG 15** |
-> |---|---|---|---|---|---|---|
-> | [pgvector](https://github.com/pgvector/pgvector#installation-notes) | Open-source vector similarity search for Postgres. | 0.4.4 | 0.4.4 | 0.4.4 | 0.4.4 | 0.4.4 |
+> | **Extension** | **Description** | **PG 11** | **PG 12** | **PG 13** | **PG 14** | **PG 15** | **PG 16** |
+> |---|---|---|---|---|---|---|---|
+> | [pgvector](https://github.com/pgvector/pgvector#installation-notes) | Open-source vector similarity search for Postgres. | 0.5.0 | 0.5.0 | 0.5.0 | 0.5.0 | 0.5.0 | 0.5.0 |
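As a quick illustration of what pgvector provides, the following sketch (the table and data are hypothetical) creates a vector column and runs a nearest-neighbor query:

```sql
-- Enable the extension on the database.
CREATE EXTENSION IF NOT EXISTS vector;

-- A hypothetical table with a three-dimensional embedding column.
CREATE TABLE items (id bigserial PRIMARY KEY, embedding vector(3));
INSERT INTO items (embedding) VALUES ('[1,2,3]'), ('[4,5,6]');

-- Return the row whose embedding is closest (by L2 distance) to the query vector.
SELECT id FROM items ORDER BY embedding <-> '[3,1,2]' LIMIT 1;
```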
### PostGIS extensions > [!div class="mx-tableFixed"]
-> | **Extension** | **Description** | **PG 11** | **PG 12** | **PG 13** | **PG 14** | **PG 15** |
-> |---|---|---|---|---|---|---|
-> | [PostGIS](https://www.postgis.net/) | Spatial and geographic objects for PostgreSQL. | 3.3.4 | 3.4.0 | 3.4.0 | 3.4.0 | 3.4.0 |
-> | address\_standardizer | Used to parse an address into constituent elements. Used to support geocoding address normalization step. | 3.3.4 | 3.4.0 | 3.4.0 | 3.4.0 | 3.4.0 |
-> | postgis\_sfcgal | PostGIS SFCGAL functions. | 3.3.4 | 3.4.0 | 3.4.0 | 3.4.0 | 3.4.0 |
-> | postgis\_topology | PostGIS topology spatial types and functions. | 3.3.4 | 3.4.0 | 3.4.0 | 3.4.0 | 3.4.0 |
+> | **Extension** | **Description** | **PG 11** | **PG 12** | **PG 13** | **PG 14** | **PG 15** | **PG 16** |
+> |---|---|---|---|---|---|---|---|
+> | [PostGIS](https://www.postgis.net/) | Spatial and geographic objects for PostgreSQL. | 3.3.4 | 3.4.0 | 3.4.0 | 3.4.0 | 3.4.0 | 3.4.0 |
+> | address\_standardizer | Used to parse an address into constituent elements. Used to support geocoding address normalization step. | 3.3.4 | 3.4.0 | 3.4.0 | 3.4.0 | 3.4.0 | 3.4.0 |
+> | postgis\_sfcgal | PostGIS SFCGAL functions. | 3.3.4 | 3.4.0 | 3.4.0 | 3.4.0 | 3.4.0 | 3.4.0 |
+> | postgis\_topology | PostGIS topology spatial types and functions. | 3.3.4 | 3.4.0 | 3.4.0 | 3.4.0 | 3.4.0 | 3.4.0 |
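To check which of the extensions listed above are available on your cluster, and to enable one, standard PostgreSQL commands apply; a minimal sketch:

```sql
-- List extensions available for installation, with default and installed versions.
SELECT name, default_version, installed_version
FROM pg_available_extensions
ORDER BY name;

-- Enable one of the listed extensions, for example PostGIS.
CREATE EXTENSION IF NOT EXISTS postgis;
```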
## pg_stat_statements
-The [pg\_stat\_statements extension](https://www.postgresql.org/docs/current/pgstatstatements.html) is preloaded on every Azure Cosmos DB for PostgreSQL server to provide you with a means of tracking execution statistics of SQL statements.
+The [pg\_stat\_statements extension](https://www.postgresql.org/docs/current/pgstatstatements.html) is preloaded on every Azure Cosmos DB for PostgreSQL cluster to provide you with a means of tracking execution statistics of SQL statements.
The setting `pg_stat_statements.track` controls what statements are counted by the extension. It defaults to `top`, which means that all statements issued directly by clients are tracked. The two other tracking levels are `none` and `all`.
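For example, assuming a cluster on PostgreSQL 13 or later (where the view exposes `total_exec_time`; earlier versions name the column `total_time`), you can inspect the tracking level and find the most expensive statements like this:

```sql
-- Show the current tracking level: top, all, or none.
SHOW pg_stat_statements.track;

-- List the five statements with the highest cumulative execution time.
SELECT query, calls, total_exec_time
FROM pg_stat_statements
ORDER BY total_exec_time DESC
LIMIT 5;
```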
cosmos-db Reference Versions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/reference-versions.md
Previously updated : 08/24/2023 Last updated : 09/27/2023 # Supported database versions in Azure Cosmos DB for PostgreSQL
customizable during creation. Azure Cosmos DB for PostgreSQL currently supports
the following major [PostgreSQL versions](https://www.postgresql.org/docs/release/):
+### PostgreSQL version 16
+
+The current minor release is 16.0. Refer to the [PostgreSQL
+documentation](https://www.postgresql.org/docs/release/16.0/) to
+learn more about improvements and fixes in this minor release.
+ ### PostgreSQL version 15 The current minor release is 15.4. Refer to the [PostgreSQL
policy](https://www.postgresql.org/support/versioning/).
| [PostgreSQL 13](https://www.postgresql.org/about/news/postgresql-13-released-2077/) | [Features](https://www.postgresql.org/docs/13/release-13.html) | Apr 6, 2021 | Nov 13, 2025 |
| [PostgreSQL 14](https://www.postgresql.org/about/news/postgresql-14-released-2318/) | [Features](https://www.postgresql.org/docs/14/release-14.html) | Oct 1, 2021 | Nov 12, 2026 |
| [PostgreSQL 15](https://www.postgresql.org/about/news/postgresql-15-released-2526/) | [Features](https://www.postgresql.org/docs/15/release-15.html) | Oct 20, 2022 | Nov 11, 2027 |
+| [PostgreSQL 16](https://www.postgresql.org/about/news/postgresql-16-released-2715/) | [Features](https://www.postgresql.org/docs/16/release-16.html) | Sep 28, 2023 | Nov 9, 2028 |
### Retired PostgreSQL engine versions not supported in Azure Cosmos DB for PostgreSQL
PostgreSQL database version:
Depending on which version of PostgreSQL is running in a cluster, different [versions of PostgreSQL extensions](reference-extensions.md)
-will be installed as well. In particular, PostgreSQL 14 and PostgreSQL 15 come with Citus 12, PostgreSQL 13 comes with Citus 11, PostgreSQL 12 comes with Citus 10, and earlier PostgreSQL versions come with Citus 9.5.
+will be installed as well. In particular, PostgreSQL 14, PostgreSQL 15, and PostgreSQL 16 come with Citus 12, PostgreSQL 13 comes with Citus 11, PostgreSQL 12 comes with Citus 10, and earlier PostgreSQL versions come with Citus 9.5.
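To confirm which PostgreSQL and Citus versions a cluster is running, you can query them directly; a minimal sketch:

```sql
-- PostgreSQL server version.
SELECT version();

-- Citus version installed alongside this PostgreSQL version.
SELECT citus_version();
```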
## Next steps
cosmos-db Tutorial Private Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/tutorial-private-access.md
az vm create \
--subnet link-demo-subnet \
--nsg link-demo-nsg \
--public-ip-address link-demo-net-ip \
- --image debian \
+ --image Debian11 \
--admin-username azureuser \
--generate-ssh-keys
data-factory Connector Azure Blob Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-azure-blob-storage.md
Previously updated : 10/23/2022 Last updated : 09/29/2023 # Copy and transform data in Azure Blob Storage by using Azure Data Factory or Azure Synapse Analytics
Last updated 10/23/2022
[!INCLUDE[appliesto-adf-asa-md](includes/appliesto-adf-asa-md.md)]
-This article outlines how to use the Copy activity in Azure Data Factory and Azure Synapse pipelines to copy data from and to Azure Blob Storage. It also describes how to use the Data Flow activity to transform data in Azure Blob Storage. To learn more read the [Azure Data Factory](introduction.md) and the [Azure Synapse Analytics](..\synapse-analytics\overview-what-is.md) introduction articles.
+This article outlines how to use the Copy activity in Azure Data Factory and Azure Synapse pipelines to copy data from and to Azure Blob Storage. It also describes how to use the Data Flow activity to transform data in Azure Blob Storage. To learn more, read the [Azure Data Factory](introduction.md) and the [Azure Synapse Analytics](..\synapse-analytics\overview-what-is.md) introduction articles.
>[!TIP] >To learn about a migration scenario for a data lake or a data warehouse, see the article [Migrate data from your data lake or data warehouse to Azure](data-migration-guidance-overview.md).
For the Copy activity, this Blob storage connector supports:
Use the following steps to create an Azure Blob Storage linked service in the Azure portal UI.
-1. Browse to the Manage tab in your Azure Data Factory or Synapse workspace and select Linked Services, then click New:
+1. Browse to the Manage tab in your Azure Data Factory or Synapse workspace and select Linked Services, then select New:
# [Azure Data Factory](#tab/data-factory)
The following properties are supported for storage account key authentication in
| Property | Description | Required | |: |: |: | | type | The `type` property must be set to `AzureBlobStorage` (suggested) or `AzureStorage` (see the following notes). | Yes |
-| containerUri | Specify the Azure Blob container URI which has enabled Anonymous read access by taking this format `https://<AccountName>.blob.core.windows.net/<ContainerName>` and [Configure anonymous public read access for containers and blobs](../storage/blobs/anonymous-read-access-configure.md#set-the-public-access-level-for-a-container) | Yes |
+| containerUri | Specify the Azure Blob container URI that has anonymous read access enabled, in this format: `https://<AccountName>.blob.core.windows.net/<ContainerName>`. For more information, see [Configure anonymous public read access for containers and blobs](../storage/blobs/anonymous-read-access-configure.md#set-the-anonymous-access-level-for-a-container). | Yes |
| connectVia | The [integration runtime](concepts-integration-runtime.md) to be used to connect to the data store. You can use the Azure integration runtime or the self-hosted integration runtime (if your data store is in a private network). If this property isn't specified, the service uses the default Azure integration runtime. | No | **Example:**
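Putting `containerUri` together with an optional `connectVia` reference, a linked service definition for anonymous access might look like the following sketch (the account, container, and integration runtime names are placeholders):

```json
{
    "name": "AzureBlobStorageAnonymousLinkedService",
    "properties": {
        "type": "AzureBlobStorage",
        "typeProperties": {
            "containerUri": "https://<AccountName>.blob.core.windows.net/<ContainerName>"
        },
        "connectVia": {
            "referenceName": "<name of integration runtime>",
            "type": "IntegrationRuntimeReference"
        }
    }
}
```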
The following properties are supported for storage account key authentication in
**Examples UI**:
-The UI experience will be like below. This sample will use the Azure open dataset as the source. If you want to get the open [dataset bing_covid-19_data.csv](https://pandemicdatalake.blob.core.windows.net/public/curated/covid-19/bing_covid-19_data/latest/bing_covid-19_data.csv), you just need to choose **Authentication type** as **Anonymous** and fill in Container URI with `https://pandemicdatalake.blob.core.windows.net/public`.
+The following image shows the UI experience. This sample uses the Azure open dataset as the source. To get the open [dataset bing_covid-19_data.csv](https://pandemicdatalake.blob.core.windows.net/public/curated/covid-19/bing_covid-19_data/latest/bing_covid-19_data.csv), choose **Anonymous** as the **Authentication type**, and fill in the Container URI with `https://pandemicdatalake.blob.core.windows.net/public`.
:::image type="content" source="media/connector-azure-blob-storage/anonymous-ui.png" alt-text="Screenshot of configuration for Anonymous examples UI.":::
The following properties are supported for storage account key authentication in
| connectVia | The [integration runtime](concepts-integration-runtime.md) to be used to connect to the data store. You can use the Azure integration runtime or the self-hosted integration runtime (if your data store is in a private network). If this property isn't specified, the service uses the default Azure integration runtime. | No | >[!NOTE]
->A secondary Blob service endpoint is not supported when you're using account key authentication. You can use other authentication types.
+>A secondary Blob service endpoint isn't supported when you're using account key authentication. You can use other authentication types.
>[!NOTE] >If you're using the `AzureStorage` type linked service, it's still supported as is. But we suggest that you use the new `AzureBlobStorage` linked service type going forward.
These properties are supported for an Azure Blob Storage linked service:
|: |: |: | | type | The **type** property must be set to **AzureBlobStorage**. | Yes | | serviceEndpoint | Specify the Azure Blob Storage service endpoint with the pattern of `https://<accountName>.blob.core.windows.net/`. | Yes |
-| accountKind | Specify the kind of your storage account. Allowed values are: **Storage** (general purpose v1), **StorageV2** (general purpose v2), **BlobStorage**, or **BlockBlobStorage**. <br/><br/>When using Azure Blob linked service in data flow, managed identity or service principal authentication is not supported when account kind as empty or "Storage". Specify the proper account kind, choose a different authentication, or upgrade your storage account to general purpose v2. | No |
+| accountKind | Specify the kind of your storage account. Allowed values are: **Storage** (general purpose v1), **StorageV2** (general purpose v2), **BlobStorage**, or **BlockBlobStorage**. <br/><br/>When using Azure Blob linked service in data flow, managed identity or service principal authentication isn't supported when account kind is empty or "Storage". Specify the proper account kind, choose a different authentication, or upgrade your storage account to general purpose v2. | No |
| servicePrincipalId | Specify the application's client ID. | Yes | | servicePrincipalCredentialType | The credential type to use for service principal authentication. Allowed values are **ServicePrincipalKey** and **ServicePrincipalCert**. | Yes | | servicePrincipalCredential | The service principal credential. <br/> When you use **ServicePrincipalKey** as the credential type, specify the application's key. Mark this field as **SecureString** to store it securely, or [reference a secret stored in Azure Key Vault](store-credentials-in-key-vault.md). <br/> When you use **ServicePrincipalCert** as the credential, reference a certificate in Azure Key Vault, and ensure the certificate content type is **PKCS #12**.| Yes |
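Assembled into a linked service definition, the service principal properties might look like the following sketch; the IDs and key are placeholders, and the `tenant` property (the tenant of the service principal) is assumed here in addition to the rows shown above:

```json
{
    "name": "AzureBlobStorageLinkedService",
    "properties": {
        "type": "AzureBlobStorage",
        "typeProperties": {
            "serviceEndpoint": "https://<accountName>.blob.core.windows.net/",
            "accountKind": "StorageV2",
            "servicePrincipalId": "<application (client) ID>",
            "servicePrincipalCredentialType": "ServicePrincipalKey",
            "servicePrincipalCredential": {
                "type": "SecureString",
                "value": "<application key>"
            },
            "tenant": "<tenant ID>"
        }
    }
}
```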
These properties are supported for an Azure Blob Storage linked service:
>[!NOTE] >
->- If your blob account enables [soft delete](../storage/blobs/soft-delete-blob-overview.md), service principal authentication is not supported in Data Flow.
+>- If your blob account enables [soft delete](../storage/blobs/soft-delete-blob-overview.md), service principal authentication isn't supported in Data Flow.
>- If you access the blob storage through private endpoint using Data Flow, note when service principal authentication is used Data Flow connects to the ADLS Gen2 endpoint instead of Blob endpoint. Make sure you create the corresponding private endpoint in your data factory or Synapse workspace to enable access. >[!NOTE]
These properties are supported for an Azure Blob Storage linked service:
|: |: |: | | type | The **type** property must be set to **AzureBlobStorage**. | Yes | | serviceEndpoint | Specify the Azure Blob Storage service endpoint with the pattern of `https://<accountName>.blob.core.windows.net/`. | Yes |
-| accountKind | Specify the kind of your storage account. Allowed values are: **Storage** (general purpose v1), **StorageV2** (general purpose v2), **BlobStorage**, or **BlockBlobStorage**. <br/><br/>When using Azure Blob linked service in data flow, managed identity or service principal authentication is not supported when account kind as empty or "Storage". Specify the proper account kind, choose a different authentication, or upgrade your storage account to general purpose v2. | No |
+| accountKind | Specify the kind of your storage account. Allowed values are: **Storage** (general purpose v1), **StorageV2** (general purpose v2), **BlobStorage**, or **BlockBlobStorage**. <br/><br/>When using Azure Blob linked service in data flow, managed identity or service principal authentication isn't supported when account kind is empty or "Storage". Specify the proper account kind, choose a different authentication, or upgrade your storage account to general purpose v2. | No |
| connectVia | The [integration runtime](concepts-integration-runtime.md) to be used to connect to the data store. You can use the Azure integration runtime or the self-hosted integration runtime (if your data store is in a private network). If this property isn't specified, the service uses the default Azure integration runtime. | No | **Example:**
These properties are supported for an Azure Blob Storage linked service:
|: |: |: | | type | The **type** property must be set to **AzureBlobStorage**. | Yes | | serviceEndpoint | Specify the Azure Blob Storage service endpoint with the pattern of `https://<accountName>.blob.core.windows.net/`. | Yes |
-| accountKind | Specify the kind of your storage account. Allowed values are: **Storage** (general purpose v1), **StorageV2** (general purpose v2), **BlobStorage**, or **BlockBlobStorage**. <br/><br/>When using Azure Blob linked service in data flow, managed identity or service principal authentication is not supported when account kind as empty or "Storage". Specify the proper account kind, choose a different authentication, or upgrade your storage account to general purpose v2. | No |
+| accountKind | Specify the kind of your storage account. Allowed values are: **Storage** (general purpose v1), **StorageV2** (general purpose v2), **BlobStorage**, or **BlockBlobStorage**. <br/><br/>When using Azure Blob linked service in data flow, managed identity or service principal authentication isn't supported when account kind is empty or "Storage". Specify the proper account kind, choose a different authentication, or upgrade your storage account to general purpose v2. | No |
| credentials | Specify the user-assigned managed identity as the credential object. | Yes | | connectVia | The [integration runtime](concepts-integration-runtime.md) to be used to connect to the data store. You can use the Azure integration runtime or the self-hosted integration runtime (if your data store is in a private network). If this property isn't specified, the service uses the default Azure integration runtime. | No |
These properties are supported for an Azure Blob Storage linked service:
> [!NOTE] >
-> - If your blob account enables [soft delete](../storage/blobs/soft-delete-blob-overview.md), system-assigned/user-assigned managed identity authentication is not supported in Data Flow.
+> - If your blob account enables [soft delete](../storage/blobs/soft-delete-blob-overview.md), system-assigned/user-assigned managed identity authentication isn't supported in Data Flow.
> - If you access the blob storage through private endpoint using Data Flow, note when system-assigned/user-assigned managed identity authentication is used Data Flow connects to the ADLS Gen2 endpoint instead of Blob endpoint. Make sure you create the corresponding private endpoint in ADF to enable access. > [!NOTE]
The following properties are supported for Azure Blob Storage under `storeSettin
| type | The **type** property under `storeSettings` must be set to **AzureBlobStorageReadSettings**. | Yes | | ***Locate the files to copy:*** | | | | OPTION 1: static path<br> | Copy from the given container or folder/file path specified in the dataset. If you want to copy all blobs from a container or folder, additionally specify `wildcardFileName` as `*`. | |
-| OPTION 2: blob prefix<br>- prefix | Prefix for the blob name under the given container configured in a dataset to filter source blobs. Blobs whose names start with `container_in_dataset/this_prefix` are selected. It utilizes the service-side filter for Blob storage, which provides better performance than a wildcard filter.<br><br>When you use prefix and choose to copy to file-based sink with preserving hierarchy, note the sub-path after the last "/" in prefix will be preserved. For example, you have source `container/folder/subfolder/file.txt`, and configure prefix as `folder/sub`, then the preserved file path is `subfolder/file.txt`. | No |
+| OPTION 2: blob prefix<br>- prefix | Prefix for the blob name under the given container configured in a dataset to filter source blobs. Blobs whose names start with `container_in_dataset/this_prefix` are selected. It utilizes the service-side filter for Blob storage, which provides better performance than a wildcard filter.<br><br>When you use prefix and choose to copy to file-based sink with preserving hierarchy, note the sub-path after the last "/" in prefix is preserved. For example, you have source `container/folder/subfolder/file.txt`, and configure prefix as `folder/sub`, then the preserved file path is `subfolder/file.txt`. | No |
| OPTION 3: wildcard<br>- wildcardFolderPath | The folder path with wildcard characters under the given container configured in a dataset to filter source folders. <br>Allowed wildcards are: `*` (matches zero or more characters) and `?` (matches zero or single character). Use `^` to escape if your folder name has wildcard or this escape character inside. <br>See more examples in [Folder and file filter examples](#folder-and-file-filter-examples). | No | | OPTION 3: wildcard<br>- wildcardFileName | The file name with wildcard characters under the given container and folder path (or wildcard folder path) to filter source files. <br>Allowed wildcards are: `*` (matches zero or more characters) and `?` (matches zero or single character). Use `^` to escape if your file name has a wildcard or this escape character inside. See more examples in [Folder and file filter examples](#folder-and-file-filter-examples). | Yes |
-| OPTION 4: a list of files<br>- fileListPath | Indicates to copy a given file set. Point to a text file that includes a list of files you want to copy, one file per line, which is the relative path to the path configured in the dataset.<br/>When you're using this option, do not specify a file name in the dataset. See more examples in [File list examples](#file-list-examples). | No |
+| OPTION 4: a list of files<br>- fileListPath | Indicates to copy a given file set. Point to a text file that includes a list of files you want to copy, one file per line, which is the relative path to the path configured in the dataset.<br/>When you're using this option, don't specify a file name in the dataset. See more examples in [File list examples](#file-list-examples). | No |
| ***Additional settings:*** | | | | recursive | Indicates whether the data is read recursively from the subfolders or only from the specified folder. Note that when **recursive** is set to **true** and the sink is a file-based store, an empty folder or subfolder isn't copied or created at the sink. <br>Allowed values are **true** (default) and **false**.<br>This property doesn't apply when you configure `fileListPath`. | No |
-| deleteFilesAfterCompletion | Indicates whether the binary files will be deleted from source store after successfully moving to the destination store. The file deletion is per file, so when copy activity fails, you will see some files have already been copied to the destination and deleted from source, while others are still remaining on source store. <br/>This property is only valid in binary files copy scenario. The default value: false. | No |
+| deleteFilesAfterCompletion | Indicates whether the binary files will be deleted from the source store after successfully moving to the destination store. The file deletion is per file. Therefore, when the copy activity fails, you'll see that some files have already been copied to the destination and deleted from the source, while others still remain on the source store. <br/>This property is valid only in the binary file copy scenario. The default value is false. | No |
| modifiedDatetimeStart | Files are filtered based on the attribute: last modified. <br>The files will be selected if their last modified time is greater than or equal to `modifiedDatetimeStart` and less than `modifiedDatetimeEnd`. The time is applied to a UTC time zone in the format of "2018-12-01T05:00:00Z". <br> The properties can be **NULL**, which means no file attribute filter will be applied to the dataset. When `modifiedDatetimeStart` has a datetime value but `modifiedDatetimeEnd` is **NULL**, the files whose last modified attribute is greater than or equal to the datetime value will be selected. When `modifiedDatetimeEnd` has a datetime value but `modifiedDatetimeStart` is **NULL**, the files whose last modified attribute is less than the datetime value will be selected.<br/>This property doesn't apply when you configure `fileListPath`. | No |
-| modifiedDatetimeEnd | Same as above. | No |
-| enablePartitionDiscovery | For files that are partitioned, specify whether to parse the partitions from the file path and add them as additional source columns.<br/>Allowed values are **false** (default) and **true**. | No |
-| partitionRootPath | When partition discovery is enabled, specify the absolute root path in order to read partitioned folders as data columns.<br/><br/>If it is not specified, by default,<br/>- When you use file path in dataset or list of files on source, partition root path is the path configured in dataset.<br/>- When you use wildcard folder filter, partition root path is the sub-path before the first wildcard.<br/>- When you use prefix, partition root path is sub-path before the last "/". <br/><br/>For example, assuming you configure the path in dataset as "root/folder/year=2020/month=08/day=27":<br/>- If you specify partition root path as "root/folder/year=2020", copy activity will generate two more columns `month` and `day` with value "08" and "27" respectively, in addition to the columns inside the files.<br/>- If partition root path is not specified, no extra column will be generated. | No |
+| modifiedDatetimeEnd | Same as the previous property. | No |
+| enablePartitionDiscovery | For files that are partitioned, specify whether to parse the partitions from the file path and add them as extra source columns.<br/>Allowed values are **false** (default) and **true**. | No |
+| partitionRootPath | When partition discovery is enabled, specify the absolute root path in order to read partitioned folders as data columns.<br/><br/>If it isn't specified, by default,<br/>- When you use file path in dataset or list of files on source, partition root path is the path configured in dataset.<br/>- When you use wildcard folder filter, partition root path is the sub-path before the first wildcard.<br/>- When you use prefix, partition root path is sub-path before the last "/". <br/><br/>For example, assuming you configure the path in dataset as "root/folder/year=2020/month=08/day=27":<br/>- If you specify partition root path as "root/folder/year=2020", copy activity will generate two more columns `month` and `day` with value "08" and "27" respectively, in addition to the columns inside the files.<br/>- If partition root path isn't specified, no extra column will be generated. | No |
| maxConcurrentConnections |The upper limit of concurrent connections established to the data store during the activity run. Specify a value only when you want to limit concurrent connections.| No | > [!NOTE]
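As one concrete shape of these source settings, a Copy activity source that combines a wildcard filter with partition discovery might look like the following sketch (the paths are illustrative):

```json
"source": {
    "type": "DelimitedTextSource",
    "storeSettings": {
        "type": "AzureBlobStorageReadSettings",
        "recursive": true,
        "wildcardFolderPath": "myfolder*",
        "wildcardFileName": "*.csv",
        "enablePartitionDiscovery": true,
        "partitionRootPath": "root/folder/year=2020"
    }
}
```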
The following properties are supported for Azure Blob Storage under `storeSettin
| | | -- | | type | The `type` property under `storeSettings` must be set to `AzureBlobStorageWriteSettings`. | Yes | | copyBehavior | Defines the copy behavior when the source is files from a file-based data store.<br/><br/>Allowed values are:<br/><b>- PreserveHierarchy (default)</b>: Preserves the file hierarchy in the target folder. The relative path of the source file to the source folder is identical to the relative path of the target file to the target folder.<br/><b>- FlattenHierarchy</b>: All files from the source folder are in the first level of the target folder. The target files have autogenerated names. <br/><b>- MergeFiles</b>: Merges all files from the source folder to one file. If the file or blob name is specified, the merged file name is the specified name. Otherwise, it's an autogenerated file name. | No |
-| blockSizeInMB | Specify the block size, in megabytes, used to write data to block blobs. Learn more [about Block Blobs](/rest/api/storageservices/understanding-block-blobs--append-blobs--and-page-blobs#about-block-blobs). <br/>Allowed value is *between 4 MB and 100 MB*. <br/>By default, the service automatically determines the block size based on your source store type and data. For nonbinary copy into Blob storage, the default block size is 100 MB so it can fit in (at most) 4.95 TB of data. It might be not optimal when your data is not large, especially when you use the self-hosted integration runtime with poor network connections that result in operation timeout or performance issues. You can explicitly specify a block size, while ensuring that `blockSizeInMB*50000` is big enough to store the data. Otherwise, the Copy activity run will fail. | No |
+| blockSizeInMB | Specify the block size, in megabytes, used to write data to block blobs. Learn more [about Block Blobs](/rest/api/storageservices/understanding-block-blobs--append-blobs--and-page-blobs#about-block-blobs). <br/>Allowed value is *between 4 MB and 100 MB*. <br/>By default, the service automatically determines the block size based on your source store type and data. For nonbinary copy into Blob storage, the default block size is 100 MB so it can fit in (at most) 4.95 TB of data. It might not be optimal when your data isn't large, especially when you use the self-hosted integration runtime with poor network connections that result in operation timeout or performance issues. You can explicitly specify a block size, while ensuring that `blockSizeInMB*50000` is large enough to store the data. Otherwise, the Copy activity run will fail. | No |
| maxConcurrentConnections |The upper limit of concurrent connections established to the data store during the activity run. Specify a value only when you want to limit concurrent connections.| No | | metadata |Set custom metadata when copying to the sink. Each object under the `metadata` array represents an extra column. The `name` defines the metadata key name, and the `value` indicates the data value of that key. If the [preserve attributes feature](./copy-activity-preserve-metadata.md#preserve-metadata) is used, the specified metadata will union/overwrite with the source file metadata.<br/><br/>Allowed data values are:<br/>- `$$LASTMODIFIED`: a reserved variable that indicates storing the source files' last modified time. Applies to file-based sources with binary format only.<br/>- <b>Expression</b><br/>- <b>Static value</b>| No |
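On the sink side, a corresponding `storeSettings` block for a binary copy might look like this sketch (the block size and the second metadata entry are illustrative):

```json
"sink": {
    "type": "BinarySink",
    "storeSettings": {
        "type": "AzureBlobStorageWriteSettings",
        "copyBehavior": "PreserveHierarchy",
        "blockSizeInMB": 8,
        "metadata": [
            { "name": "lastModifiedTime", "value": "$$LASTMODIFIED" },
            { "name": "sourceSystem", "value": "sales" }
        ]
    }
}
```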
Assume that you have the following source folder structure and want to copy the
| Sample source structure | Content in FileListToCopy.txt | Configuration | | | |
-| container<br/>&nbsp;&nbsp;&nbsp;&nbsp;FolderA<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;**File1.csv**<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;File2.json<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Subfolder1<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;**File3.csv**<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;File4.json<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;**File5.csv**<br/>&nbsp;&nbsp;&nbsp;&nbsp;Metadata<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;FileListToCopy.txt | File1.csv<br>Subfolder1/File3.csv<br>Subfolder1/File5.csv | **In dataset:**<br>- Container: `container`<br>- Folder path: `FolderA`<br><br>**In Copy activity source:**<br>- File list path: `container/Metadata/FileListToCopy.txt` <br><br>The file list path points to a text file in the same data store that includes a list of files you want to copy, one file per line, with the relative path to the path configured in the dataset. |
+| container<br/>&nbsp;&nbsp;&nbsp;&nbsp;FolderA<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;**File1.csv**<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;File2.json<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Subfolder1<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;**File3.csv**<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;File4.json<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;**File5.csv**<br/>&nbsp;&nbsp;&nbsp;&nbsp;Metadata<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;FileListToCopy.txt | File1.csv<br>Subfolder1/File3.csv<br>Subfolder1/File5.csv | **In dataset:**<br>- Container: `container`<br>- Folder path: `FolderA`<br><br>**In Copy activity source:**<br>- File list path: `container/Metadata/FileListToCopy.txt` <br><br>The file list path points to a text file in the same data store that includes a list of files you want to copy. It includes one file per line, with the relative path to the path configured in the dataset. |
### Some recursive and copyBehavior examples
This section describes the resulting behavior of the Copy operation for differen
| true |preserveHierarchy | Folder1<br/>&nbsp;&nbsp;&nbsp;&nbsp;File1<br/>&nbsp;&nbsp;&nbsp;&nbsp;File2<br/>&nbsp;&nbsp;&nbsp;&nbsp;Subfolder1<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;File3<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;File4<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;File5 | The target folder, Folder1, is created with the same structure as the source:<br/><br/>Folder1<br/>&nbsp;&nbsp;&nbsp;&nbsp;File1<br/>&nbsp;&nbsp;&nbsp;&nbsp;File2<br/>&nbsp;&nbsp;&nbsp;&nbsp;Subfolder1<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;File3<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;File4<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;File5 | | true |flattenHierarchy | Folder1<br/>&nbsp;&nbsp;&nbsp;&nbsp;File1<br/>&nbsp;&nbsp;&nbsp;&nbsp;File2<br/>&nbsp;&nbsp;&nbsp;&nbsp;Subfolder1<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;File3<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;File4<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;File5 | The target folder, Folder1, is created with the following structure: <br/><br/>Folder1<br/>&nbsp;&nbsp;&nbsp;&nbsp;autogenerated name for File1<br/>&nbsp;&nbsp;&nbsp;&nbsp;autogenerated name for File2<br/>&nbsp;&nbsp;&nbsp;&nbsp;autogenerated name for File3<br/>&nbsp;&nbsp;&nbsp;&nbsp;autogenerated name for File4<br/>&nbsp;&nbsp;&nbsp;&nbsp;autogenerated name for File5 | | true |mergeFiles | Folder1<br/>&nbsp;&nbsp;&nbsp;&nbsp;File1<br/>&nbsp;&nbsp;&nbsp;&nbsp;File2<br/>&nbsp;&nbsp;&nbsp;&nbsp;Subfolder1<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;File3<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;File4<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;File5 | The target folder, Folder1, is created with the following structure: <br/><br/>Folder1<br/>&nbsp;&nbsp;&nbsp;&nbsp;File1 + File2 + File3 + File4 + File5 contents are merged into one file with an autogenerated file name. |
-| false |preserveHierarchy | Folder1<br/>&nbsp;&nbsp;&nbsp;&nbsp;File1<br/>&nbsp;&nbsp;&nbsp;&nbsp;File2<br/>&nbsp;&nbsp;&nbsp;&nbsp;Subfolder1<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;File3<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;File4<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;File5 | The target folder, Folder1, is created with the following structure: <br/><br/>Folder1<br/>&nbsp;&nbsp;&nbsp;&nbsp;File1<br/>&nbsp;&nbsp;&nbsp;&nbsp;File2<br/><br/>Subfolder1 with File3, File4, and File5 is not picked up. |
-| false |flattenHierarchy | Folder1<br/>&nbsp;&nbsp;&nbsp;&nbsp;File1<br/>&nbsp;&nbsp;&nbsp;&nbsp;File2<br/>&nbsp;&nbsp;&nbsp;&nbsp;Subfolder1<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;File3<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;File4<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;File5 | The target folder, Folder1, is created with the following structure: <br/><br/>Folder1<br/>&nbsp;&nbsp;&nbsp;&nbsp;autogenerated name for File1<br/>&nbsp;&nbsp;&nbsp;&nbsp;autogenerated name for File2<br/><br/>Subfolder1 with File3, File4, and File5 is not picked up. |
-| false |mergeFiles | Folder1<br/>&nbsp;&nbsp;&nbsp;&nbsp;File1<br/>&nbsp;&nbsp;&nbsp;&nbsp;File2<br/>&nbsp;&nbsp;&nbsp;&nbsp;Subfolder1<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;File3<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;File4<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;File5 | The target folder, Folder1, is created with the following structure:<br/><br/>Folder1<br/>&nbsp;&nbsp;&nbsp;&nbsp;File1 + File2 contents are merged into one file with an autogenerated file name. autogenerated name for File1<br/><br/>Subfolder1 with File3, File4, and File5 is not picked up. |
+| false |preserveHierarchy | Folder1<br/>&nbsp;&nbsp;&nbsp;&nbsp;File1<br/>&nbsp;&nbsp;&nbsp;&nbsp;File2<br/>&nbsp;&nbsp;&nbsp;&nbsp;Subfolder1<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;File3<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;File4<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;File5 | The target folder, Folder1, is created with the following structure: <br/><br/>Folder1<br/>&nbsp;&nbsp;&nbsp;&nbsp;File1<br/>&nbsp;&nbsp;&nbsp;&nbsp;File2<br/><br/>Subfolder1 with File3, File4, and File5 isn't picked up. |
+| false |flattenHierarchy | Folder1<br/>&nbsp;&nbsp;&nbsp;&nbsp;File1<br/>&nbsp;&nbsp;&nbsp;&nbsp;File2<br/>&nbsp;&nbsp;&nbsp;&nbsp;Subfolder1<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;File3<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;File4<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;File5 | The target folder, Folder1, is created with the following structure: <br/><br/>Folder1<br/>&nbsp;&nbsp;&nbsp;&nbsp;autogenerated name for File1<br/>&nbsp;&nbsp;&nbsp;&nbsp;autogenerated name for File2<br/><br/>Subfolder1 with File3, File4, and File5 isn't picked up. |
+| false |mergeFiles | Folder1<br/>&nbsp;&nbsp;&nbsp;&nbsp;File1<br/>&nbsp;&nbsp;&nbsp;&nbsp;File2<br/>&nbsp;&nbsp;&nbsp;&nbsp;Subfolder1<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;File3<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;File4<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;File5 | The target folder, Folder1, is created with the following structure:<br/><br/>Folder1<br/>&nbsp;&nbsp;&nbsp;&nbsp;File1 + File2 contents are merged into one file with an autogenerated file name.<br/><br/>Subfolder1 with File3, File4, and File5 isn't picked up. |
## Preserving metadata during copy
First, set a wildcard to include all paths that are the partitioned folders plus
:::image type="content" source="media/data-flow/part-file-2.png" alt-text="Screenshot of partition source file settings in mapping data flow source transformation.":::
-Use the **Partition root path** setting to define what the top level of the folder structure is. When you view the contents of your data via a data preview, you'll see that the service will add the resolved partitions found in each of your folder levels.
+Use the **Partition root path** setting to define what the top level of the folder structure is. When you view the contents of your data via a data preview, you'll see that the service adds the resolved partitions found in each of your folder levels.
:::image type="content" source="media/data-flow/partfile1.png" alt-text="Partition root path":::
Use the **Partition root path** setting to define what the top level of the fold
To move source files to another location post-processing, first select "Move" for file operation. Then, set the "from" directory. If you're not using any wildcards for your path, then the "from" setting will be the same folder as your source folder.
-If you have a source path with wildcard, your syntax will look like this:
+If you have a source path with wildcard, your syntax is as follows:
`/data/sales/20??/**/*.csv`
And you can specify "to" as:
In this case, all files that were sourced under `/data/sales` are moved to `/backup/priorSales`. > [!NOTE]
-> File operations run only when you start the data flow from a pipeline run (a pipeline debug or execution run) that uses the Execute Data Flow activity in a pipeline. File operations *do not* run in Data Flow debug mode.
+> File operations run only when you start the data flow from a pipeline run (a pipeline debug or execution run) that uses the Execute Data Flow activity in a pipeline. File operations *don't* run in Data Flow debug mode.
-**Filter by last modified:** You can filter which files you process by specifying a date range of when they were last modified. All datetimes are in UTC.
+**Filter by last modified:** You can filter the files to be processed by specifying a date range of when they were last modified. All datetimes are in UTC.
-**Enable change data capture:** If true, you will get new or changed files only from the last run. Initial load of full snapshot data will always be gotten in the first run, followed by capturing new or changed files only in next runs.
+**Enable change data capture:** If true, you'll get only files that are new or changed since the last run. The first run always loads a full snapshot of the data; subsequent runs capture only new or changed files.
:::image type="content" source="media/data-flow/enable-change-data-capture.png" alt-text="Screenshot showing Enable change data capture.":::
In the sink transformation, you can write to either a container or a folder in A
**File name option:** Determines how the destination files are named in the destination folder. The file name options are: - **Default**: Allow Spark to name files based on PART defaults.
- - **Pattern**: Enter a pattern that enumerates your output files per partition. For example, `loans[n].csv` will create `loans1.csv`, `loans2.csv`, and so on.
+ - **Pattern**: Enter a pattern that enumerates your output files per partition. For example, `loans[n].csv` creates `loans1.csv`, `loans2.csv`, and so on.
- **Per partition**: Enter one file name per partition.
- - **As data in column**: Set the output file to the value of a column. The path is relative to the dataset container, not the destination folder. If you have a folder path in your dataset, it will be overridden.
+ - **As data in column**: Set the output file to the value of a column. The path is relative to the dataset container, not the destination folder. If you have a folder path in your dataset, it is overridden.
- **Output to a single file**: Combine the partitioned output files into a single named file. The path is relative to the dataset folder. Be aware that the merge operation can possibly fail based on node size. We don't recommend this option for large datasets. **Quote all:** Determines whether to enclose all values in quotation marks.
To learn details about the properties, check [Delete activity](delete-activity.m
| type | The `type` property of the dataset must be set to `AzureBlob`. | Yes | | folderPath | Path to the container and folder in Blob storage. <br/><br/>A wildcard filter is supported for the path, excluding container name. Allowed wildcards are: `*` (matches zero or more characters) and `?` (matches zero or single character). Use `^` to escape if your folder name has a wildcard or this escape character inside. <br/><br/>An example is: `myblobcontainer/myblobfolder/`. See more examples in [Folder and file filter examples](#folder-and-file-filter-examples). | Yes for the Copy or Lookup activity, No for the GetMetadata activity | | fileName | Name or wildcard filter for the blobs under the specified `folderPath` value. If you don't specify a value for this property, the dataset points to all blobs in the folder. <br/><br/>For the filter, allowed wildcards are: `*` (matches zero or more characters) and `?` (matches zero or single character).<br/>- Example 1: `"fileName": "*.csv"`<br/>- Example 2: `"fileName": "???20180427.txt"`<br/>Use `^` to escape if your file name has a wildcard or this escape character inside.<br/><br/>When `fileName` isn't specified for an output dataset and `preserveHierarchy` isn't specified in the activity sink, the Copy activity automatically generates the blob name with the following pattern: "*Data.[activity run ID GUID].[GUID if FlattenHierarchy].[format if configured].[compression if configured]*". For example: "Data.0a405f8a-93ff-4c6f-b3be-f69616f1df7a.txt.gz". <br/><br/>If you copy from a tabular source by using a table name instead of a query, the name pattern is `[table name].[format].[compression if configured]`. For example: "MyTable.csv". | No |
-| modifiedDatetimeStart | Files are filtered based on the attribute: last modified. The files will be selected if their last modified time is greater than or equal to `modifiedDatetimeStart` and less than `modifiedDatetimeEnd`. The time is applied to the UTC time zone in the format of "2018-12-01T05:00:00Z". <br/><br/> Be aware that enabling this setting will affect the overall performance of data movement when you want to filter huge amounts of files. <br/><br/> The properties can be `NULL`, which means no file attribute filter will be applied to the dataset. When `modifiedDatetimeStart` has a datetime value but `modifiedDatetimeEnd` is `NULL`, the files whose last modified attribute is greater than or equal to the datetime value will be selected. When `modifiedDatetimeEnd` has a datetime value but `modifiedDatetimeStart` is `NULL`, the files whose last modified attribute is less than the datetime value will be selected.| No |
-| modifiedDatetimeEnd | Files are filtered based on the attribute: last modified. The files will be selected if their last modified time is greater than or equal to `modifiedDatetimeStart` and less than `modifiedDatetimeEnd`. The time is applied to the UTC time zone in the format of "2018-12-01T05:00:00Z". <br/><br/> Be aware that enabling this setting will affect the overall performance of data movement when you want to filter huge amounts of files. <br/><br/> The properties can be `NULL`, which means no file attribute filter will be applied to the dataset. When `modifiedDatetimeStart` has a datetime value but `modifiedDatetimeEnd` is `NULL`, the files whose last modified attribute is greater than or equal to the datetime value will be selected. When `modifiedDatetimeEnd` has a datetime value but `modifiedDatetimeStart` is `NULL`, the files whose last modified attribute is less than the datetime value will be selected.| No |
+| modifiedDatetimeStart | Files are filtered based on the attribute: last modified. The files will be selected if their last modified time is greater than or equal to `modifiedDatetimeStart` and less than `modifiedDatetimeEnd`. The time is applied to the UTC time zone in the format of "2018-12-01T05:00:00Z". <br/><br/> Be aware that enabling this setting affects the overall performance of data movement when you want to filter huge amounts of files. <br/><br/> The properties can be `NULL`, which means no file attribute filter will be applied to the dataset. When `modifiedDatetimeStart` has a datetime value but `modifiedDatetimeEnd` is `NULL`, the files whose last modified attribute is greater than or equal to the datetime value will be selected. When `modifiedDatetimeEnd` has a datetime value but `modifiedDatetimeStart` is `NULL`, the files whose last modified attribute is less than the datetime value will be selected.| No |
+| modifiedDatetimeEnd | Files are filtered based on the attribute: last modified. The files will be selected if their last modified time is greater than or equal to `modifiedDatetimeStart` and less than `modifiedDatetimeEnd`. The time is applied to the UTC time zone in the format of "2018-12-01T05:00:00Z". <br/><br/> Be aware that enabling this setting affects the overall performance of data movement when you want to filter huge amounts of files. <br/><br/> The properties can be `NULL`, which means no file attribute filter will be applied to the dataset. When `modifiedDatetimeStart` has a datetime value but `modifiedDatetimeEnd` is `NULL`, the files whose last modified attribute is greater than or equal to the datetime value will be selected. When `modifiedDatetimeEnd` has a datetime value but `modifiedDatetimeStart` is `NULL`, the files whose last modified attribute is less than the datetime value will be selected.| No |
| format | If you want to copy files as is between file-based stores (binary copy), skip the format section in both the input and output dataset definitions.<br/><br/>If you want to parse or generate files with a specific format, the following file format types are supported: **TextFormat**, **JsonFormat**, **AvroFormat**, **OrcFormat**, and **ParquetFormat**. Set the **type** property under **format** to one of these values. For more information, see the [Text format](supported-file-formats-and-compression-codecs-legacy.md#text-format), [JSON format](supported-file-formats-and-compression-codecs-legacy.md#json-format), [Avro format](supported-file-formats-and-compression-codecs-legacy.md#avro-format), [Orc format](supported-file-formats-and-compression-codecs-legacy.md#orc-format), and [Parquet format](supported-file-formats-and-compression-codecs-legacy.md#parquet-format) sections. | No (only for binary copy scenario) | | compression | Specify the type and level of compression for the data. For more information, see [Supported file formats and compression codecs](supported-file-formats-and-compression-codecs-legacy.md#compression-support).<br/>Supported types are **GZip**, **Deflate**, **BZip2**, and **ZipDeflate**.<br/>Supported levels are **Optimal** and **Fastest**. | No |
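Combined, the legacy dataset properties might look like the following sketch (the linked service name, paths, and date window are placeholders):

```json
{
    "name": "AzureBlobLegacyDataset",
    "properties": {
        "type": "AzureBlob",
        "linkedServiceName": {
            "referenceName": "<Azure Blob Storage linked service name>",
            "type": "LinkedServiceReference"
        },
        "typeProperties": {
            "folderPath": "myblobcontainer/myblobfolder/",
            "fileName": "*.csv",
            "modifiedDatetimeStart": "2018-12-01T05:00:00Z",
            "modifiedDatetimeEnd": "2018-12-01T06:00:00Z",
            "format": {
                "type": "TextFormat"
            }
        }
    }
}
```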
To learn details about the properties, check [Delete activity](delete-activity.m
| Property | Description | Required | |: |: |: | | type | The `type` property of the Copy activity source must be set to `BlobSource`. | Yes |
-| recursive | Indicates whether the data is read recursively from the subfolders or only from the specified folder. Note that when `recursive` is set to `true` and the sink is a file-based store, an empty folder or subfolder isn't copied or created at the sink.<br/>Allowed values are `true` (default) and `false`. | No |
+| recursive | Indicates whether the data is read recursively from the subfolders or only from the specified folder. When `recursive` is set to `true` and the sink is a file-based store, an empty folder or subfolder isn't copied or created at the sink.<br/>Allowed values are `true` (default) and `false`. | No |
| maxConcurrentConnections |The upper limit of concurrent connections established to the data store during the activity run. Specify a value only when you want to limit concurrent connections.| No | **Example:**
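A legacy Copy activity source using these properties is compact; a sketch:

```json
"source": {
    "type": "BlobSource",
    "recursive": true,
    "maxConcurrentConnections": 4
}
```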
To learn details about the properties, check [Delete activity](delete-activity.m
## Change data capture
-Azure Data Factory can get new or changed files only from Azure Blob Storage by enabling **Enable change data capture ** in the mapping data flow source transformation. With this connector option, you can read new or updated files only and apply transformations before loading transformed data into destination datasets of your choice. Pleaser refer to [Change Data Capture](concepts-change-data-capture.md) for details.
+Azure Data Factory can get new or changed files only from Azure Blob Storage by enabling **Enable change data capture** in the mapping data flow source transformation. With this connector option, you can read only new or updated files and apply transformations before loading transformed data into destination datasets of your choice. For details, see [Change Data Capture](concepts-change-data-capture.md).
## Next steps
-For a list of data stores that the Copy activity supports as sources and sinks, see [Supported data stores](copy-activity-overview.md#supported-data-stores-and-formats).
+For a list of data stores that the Copy activity supports as sources and sinks, see [Supported data stores](copy-activity-overview.md#supported-data-stores-and-formats).
databox-online Azure Stack Edge Deploy Aks On Azure Stack Edge https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-deploy-aks-on-azure-stack-edge.md
Previously updated : 09/26/2023 Last updated : 09/28/2023 # Customer intent: As an IT admin, I need to understand how to deploy and configure Azure Kubernetes service on Azure Stack Edge.
Depending on the workloads you intend to deploy, you may need to ensure the foll
For more information, see [Create and manage custom locations in Arc-enabled Kubernetes](../azure-arc/kubernetes/custom-locations.md). -- If deploying Kubernetes or PMEC workloads, you may need virtual networks that you've added using the instructions in [Create virtual networks](azure-stack-edge-gpu-deploy-configure-network-compute-web-proxy.md?pivots=single-node#configure-virtual-network).
+- If deploying Kubernetes or PMEC workloads:
+ - You may have selected a specific workload profile by using the local UI or PowerShell. Detailed steps are documented for the local UI in [Configure compute IPs](azure-stack-edge-gpu-deploy-configure-network-compute-web-proxy.md?pivots=two-node#configure-compute-ips-1) and for PowerShell in [Change Kubernetes workload profiles](azure-stack-edge-gpu-connect-powershell-interface.md#change-kubernetes-workload-profiles).
+ - You may need virtual networks that you've added using the instructions in [Create virtual networks](azure-stack-edge-gpu-deploy-configure-network-compute-web-proxy.md?pivots=single-node#configure-virtual-network).
- If you're using HPN VMs as your infrastructure VMs, the vCPUs should be automatically reserved. Run the following command to verify the reservation:
databox-online Azure Stack Edge Gpu Connect Powershell Interface https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-connect-powershell-interface.md
Previously updated : 04/14/2022 Last updated : 09/28/2023 # Manage an Azure Stack Edge Pro GPU device via Windows PowerShell
If the compute role is configured on your device, you can also get the compute l
- `Credential`: Provide the username for the network share. When you run this cmdlet, you will need to provide the share password. - `FullLogCollection`: This parameter ensures that the log package will contain all the compute logs. By default, the log package contains only a subset of logs.
+## Change Kubernetes workload profiles
+
+After you form and configure a cluster and create new virtual switches, you can add or delete virtual networks associated with your virtual switches. For detailed steps, see [Configure virtual switches](azure-stack-edge-gpu-deploy-configure-network-compute-web-proxy.md?pivots=two-node#configure-virtual-switches-1).
+
+After virtual switches are created, you can enable the switches for Kubernetes compute traffic to specify a Kubernetes workload profile. To do so by using the local UI, use the steps in [Configure compute IPs](azure-stack-edge-gpu-deploy-configure-network-compute-web-proxy.md?pivots=two-node#configure-compute-ips-1). To do so by using PowerShell, use the following steps:
+
+1. [Connect to the PowerShell interface](#connect-to-the-powershell-interface).
+2. Use the `Get-HcsApplianceInfo` cmdlet to get current `KubernetesPlatform` and `KubernetesWorkloadProfile` settings for your device.
+
+ The following example shows the usage of this cmdlet:
+
+ ```powershell
+ Get-HcsApplianceInfo
+ ```
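+
+    For reference, here's a truncated sketch of the kind of output to check; the exact fields and values vary by device, and these values are only illustrative:
+
+    ```powershell
+    KubernetesPlatform        : AKS
+    KubernetesWorkloadProfile : <current workload profile>
+    ```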
+
+3. Use the `Set-HcsKubernetesWorkloadProfile` cmdlet to set the workload profile for AP5GC, an Azure Private MEC solution.
+
+ The following example shows the usage of this cmdlet:
+
+ ```powershell
+ Set-HcsKubernetesWorkloadProfile -Type "AP5GC"
+ ```
+
+ Here is sample output for this cmdlet:
+
+ ```powershell
+ [10.100.10.10]: PS>KubernetesPlatform : AKS
+ [10.100.10.10]: PS>KubernetesWorkloadProfile : AP5GC
+ [10.100.10.10]: PS>
+ ```
## Change Kubernetes pod and service subnets
databox-online Azure Stack Edge Gpu Deploy Configure Network Compute Web Proxy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-deploy-configure-network-compute-web-proxy.md
Previously updated : 09/22/2023 Last updated : 09/28/2023 zone_pivot_groups: azure-stack-edge-device-deployment # Customer intent: As an IT admin, I need to understand how to connect and activate Azure Stack Edge Pro so I can use it to transfer data to Azure.
Select **Next: Kubernetes >** to next configure your compute IPs for Kubernetes.
After the virtual switches are created, you can enable the switches for Kubernetes compute traffic. 1. In the local UI, go to the **Kubernetes** page.
-1. Specify a workload from the options provided. If prompted, confirm the option you selected and then select **Apply**.
+1. Specify a workload from the options provided.
+ - If you are working with an Azure Private MEC solution, select the option for **an Azure Private MEC solution in your environment**.
+ - If you are working with an SAP Digital Manufacturing solution or another Microsoft partner solution, select the option for **a SAP Digital Manufacturing for Edge Computing or another Microsoft partner solution in your environment**.
+ - For other workloads, select the option for **other workloads in your environment**.
+
+ If prompted, confirm the option you specified and then select **Apply**.
+
+ To use PowerShell to specify the workload, see detailed steps in [Change Kubernetes workload profiles](azure-stack-edge-gpu-connect-powershell-interface.md#change-kubernetes-workload-profiles).
![Screenshot of the Workload selection options on the Kubernetes page of the local UI for two node.](./media/azure-stack-edge-gpu-deploy-configure-network-compute-web-proxy/azure-stack-edge-kubernetes-workload-selection.png)
databox Data Box Deploy Export Ordered https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox/data-box-deploy-export-ordered.md
Previously updated : 06/16/2022 Last updated : 09/29/2023 #Customer intent: As an IT admin, I need to be able to export data from Azure to another location, such as, another cloud provider or my location.
To use an XML file to export your data:
![Select Export option, Containers](media/data-box-deploy-export-ordered/azure-data-box-export-sms-use-xml-file-containers-option.png)
-3. In **New Container** tab that pops out from the right side of the Azure portal, add a name for the container. The name must be lower-case and you may include numbers and dashes '-'. Then select the **Public access level** from the drop-down list box. We recommend that you choose **Private (non anonymous access)** to prevent others from accessing your data. For more information regarding container access levels, see [Container access permissions](../storage/blobs/anonymous-read-access-configure.md#set-the-public-access-level-for-a-container).
+3. On the **New Container** tab that opens on the right side of the Azure portal, enter a name for the container. The name must be lowercase, and it can include numbers and hyphens ('-'). Then select the **Public access level** from the drop-down list. We recommend that you choose **Private (no anonymous access)** to prevent others from accessing your data. For more information regarding container access levels, see [Container access permissions](../storage/blobs/anonymous-read-access-configure.md#set-the-anonymous-access-level-for-a-container).
![Select Export option, New container settings](media/data-box-deploy-export-ordered/azure-data-box-export-sms-use-xml-file-container-settings.png)
defender-for-cloud Defender For Apis Deploy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-apis-deploy.md
Title: Protect your APIs with Defender for APIs (Preview)
+ Title: Protect your APIs with Defender for APIs
description: Learn about deploying the Defender for APIs plan in Defender for Cloud
defender-for-cloud Defender For Apis Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-apis-manage.md
There are three types of APIs you can query:
- **API Endpoints** - A group of all types of API endpoints. -- **API Management** services - API management services are platforms that provide tools and infrastructure for managing APIs, typically through a web-based interface. They often include features such as: API gateway, API portal, API analytics and API security.
+- **API Management services** - API management services are platforms that provide tools and infrastructure for managing APIs, typically through a web-based interface. They often include features such as an API gateway, an API portal, API analytics, and API security.
**To query APIs in the cloud security graph**:
defender-for-cloud Defender For Apis Posture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-apis-posture.md
Last updated 05/08/2023
# Investigate API findings, recommendations, and alerts
-This article describes how to investigate API security findings, alerts, and security posture recommendations for APIs protected by [Microsoft Defender for APIs](defender-for-apis-introduction.md). Defender for APIs is currently in preview.
+This article describes how to investigate API security findings, alerts, and security posture recommendations for APIs protected by [Microsoft Defender for APIs](defender-for-apis-introduction.md).
## Before you start
When the Defender CSPM plan is enabled together with Defender for APIs, you can
1. In the Defender for Cloud portal, select **Cloud Security Explorer**. 1. In **What would you like to search?** select the **APIs** category. 1. Review the search results so that you can review, prioritize, and fix any API issues.
+1. Alternatively, you can select one of the templated API queries to see high-risk issues like **Internet exposed API endpoints with sensitive data** or **APIs communicating over unencrypted protocols with unauthenticated API endpoints**.
## Next steps
defender-for-cloud Defender For Apis Prepare https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-apis-prepare.md
Review the latest cloud support information for Defender for Cloud plans and fea
Availability | This feature is available in the Premium, Standard, Basic, and Developer tiers of Azure API Management. API gateways | Azure API Management<br/><br/> Defender for APIs currently doesn't onboard APIs that are exposed using the API Management [self-hosted gateway](../api-management/self-hosted-gateway-overview.md), or managed using API Management [workspaces](../api-management/workspaces-overview.md). API types | Currently, Defender for APIs discovers and analyzes REST APIs.
-Multi-region support | In multi-region Azure API Management instances, some ML-based detections and security insights (data classification, authentication check, unused and external APIs) aren't supported in secondary regions. In such cases, data residency requirements are still met.ΓÇ»
+Multi-region support | In multi-regional managed and self-hosted Azure API Management deployments, security insights (data classification, authentication check, unused and external APIs) aren't supported in secondary regions. In such cases, data residency requirements are still met.
## Defender CSPM integration
defender-for-cloud Defender For Apis Validation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-apis-validation.md
This page will walk you through the steps to trigger an alert for one of your AP
1. In the key field enter **User-Agent**.
-1. In the value field enter **jvascript:**.
+1. In the value field enter **javascript:**.
:::image type="content" source="media/defender-for-apis-validation/postman-keys.png" alt-text="Screenshot that shows where to enter the keys and their values in Postman.":::
deployment-environments How To Create Access Environments https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/deployment-environments/how-to-create-access-environments.md
Code: EnvironmentNotFound
Message: The environment resource was not found. ```
-To resolve the issue, assign the correct permissions: [Give project access to the development team](quickstart-create-and-configure-projects.md#give-project-access-to-the-development-team).
+To resolve the issue, assign the correct permissions: [Give access to the development team](quickstart-create-and-configure-projects.md#give-access-to-the-development-team).
## Access an environment
deployment-environments How To Request Quota Increase https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/deployment-environments/how-to-request-quota-increase.md
+
+ Title: Request a quota limit increase for Azure Deployment Environments resources
+description: Learn how to request a quota increase to extend the number of Deployment Environments resources you can use in your subscription.
+++++ Last updated : 09/27/2023++
+# Request a quota limit increase for Azure Deployment Environments resources
+
+This article describes how to submit a support request for increasing the number of resources available to Azure Deployment Environments in your Azure subscription.
+
+If your organization uses Deployment Environments extensively, you may encounter a quota limit during deployment. When you reach the limit for a resource in your subscription, you can request a limit increase (sometimes called a capacity increase or a quota increase) to extend the number of resources available. The request process allows the Azure Deployment Environments team to ensure that your subscription isn't involved in any cases of fraud or unintentional, sudden large-scale deployments.
+
+Learn more about the general [process for creating Azure support requests](/azure/azure-portal/supportability/how-to-create-azure-support-request).
+
+## Prerequisites
+
+- To create a support request, your Azure account needs the [Owner](/azure/role-based-access-control/built-in-roles#owner), [Contributor](/azure/role-based-access-control/built-in-roles#contributor), or [Support Request Contributor](/azure/role-based-access-control/built-in-roles#support-request-contributor) role at the subscription level.
+- Before you create a support request for a limit increase, you need to gather additional information.
+
+## Gather information for your request
+
+Submitting a support request for additional quota is quicker if you gather the required information before you begin the request process.
+
+- **Identify the quota type**
+
+ If you reach the quota limit for a Deployment Environments resource, you see a notification during deployment that indicates which quota type is affected. Take note of it, and submit a request for that quota type.
+
+ The following resources are limited by subscription.
+
+ - Runtime limit per month (mins)
+ - Runtime limit per deployment (mins)
+ - Storage limit per environment (GBs)
++
+- **Determine the region for the additional quota**
+
+ Deployment Environments resources can exist in many regions. For best performance, choose the region where your Deployment Environments project exists.
+
+ For more information about Azure regions, how they relate to global geographies, and which services are available in each region, see [Azure global infrastructure](https://azure.microsoft.com/explore/global-infrastructure/products-by-region/).
+
+## Submit a new support request
+
+Follow these steps to request a limit increase:
+
+1. On the Azure portal home page, select **Support & troubleshooting**, and then select **Help + support**.
+
+ :::image type="content" source="./media/how-to-request-capacity-increase/submit-new-request.png" alt-text="Screenshot of the Azure portal home page, highlighting the Request core limit increase button." lightbox="./media/how-to-request-capacity-increase/submit-new-request.png":::
+
+1. On the **Help + support** page, select **Create a support request**.
+
+ :::image type="content" source="./media/how-to-request-capacity-increase/create-support-request.png" alt-text="Screenshot of the Help + support page, highlighting Create a support request." lightbox="./media/how-to-request-capacity-increase/create-support-request.png":::
+
+1. On the **New support request** page, enter the following information, and then select **Next**.
+
+ | Name | Value |
+ | -- | - |
+ | **Issue type** | *Service and subscription limits (quotas)* |
+ | **Subscription** | Select the subscription to which the request applies. |
+ | **Quota type** | *Azure Deployment Environments* |
+
+1. On the **Additional details** tab, in the **Problem details** section, select **Enter details**.
+
+ :::image type="content" source="media/how-to-request-capacity-increase/enter-details.png" alt-text="Screenshot of the New support request page, highlighting Enter details." lightbox="media/how-to-request-capacity-increase/enter-details.png":::
+
+1. In **Quota details**, enter the following information, and then select **Next**.
+
+ | Name | Value |
+ | -- | - |
+ | **Quota type** | Select the **Quota type** that you want to increase. |
+ | **Region** | Select the **Region** in which you want to increase your quota. |
+ | **Additional quota** | Enter the additional number of minutes that you need or, for storage limit increases, the number of GBs per environment. |
+ | **Additional info** | Enter any extra information about your request. |
+
+ :::image type="content" source="media/how-to-request-capacity-increase/quota-details.png" alt-text="Screenshot of the Quota details pane." lightbox="media/how-to-request-capacity-increase/quota-details.png":::
+
+1. Select **Save and continue**.
+
+## Complete the support request
+
+To complete the support request, enter the following information:
+
+1. Complete the remainder of the support request's **Additional details** tab by using the following information:
+
+ ### Advanced diagnostic information
+
+ |Name |Value |
+ |||
+ |**Allow collection of advanced diagnostic information**|Select yes or no.|
+
+ ### Support method
+
+ |Name |Value |
+ |||
+ |**Support plan**|Select your support plan.|
+ |**Severity**|Select the severity of the issue.|
+ |**Preferred contact method**|Select email or phone.|
+ |**Your availability**|Enter your availability.|
+ |**Support language**|Select your language preference.|
+
+ ### Contact information
+
+ |Name |Value |
+ |||
+ |**First name**|Enter your first name.|
+ |**Last name**|Enter your last name.|
+ |**Email**|Enter your contact email.|
+ |**Additional email for notification**|Enter an email for notifications.|
+ |**Phone**|Enter your contact phone number.|
+ |**Country/region**|Enter your location.|
+ |**Save contact changes for future support requests.**|Select the check box to save changes.|
+
+1. Select **Next**.
+
+1. On the **Review + create** tab, review the information, and then select **Create**.
+
+## Related content
+
+- Check the default quota for each resource type by subscription type: [Azure Deployment Environments limits](/azure/azure-resource-manager/management/azure-subscription-service-limits#azure-deployment-environments-limits)
deployment-environments Quickstart Create And Configure Devcenter https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/deployment-environments/quickstart-create-and-configure-devcenter.md
Previously updated : 04/25/2023 Last updated : 09/06/2023 # Quickstart: Create and configure a dev center for Azure Deployment Environments
A platform engineering team typically sets up a dev center, attaches external ca
The following diagram shows the steps you perform in this quickstart to configure a dev center for Azure Deployment Environments in the Azure portal. -
-First, you create a dev center to organize your deployment environments resources. Next, you create a key vault to store the GitHub personal access token (PAT) that is used to grant Azure access to your GitHub repository. Then, you attach an identity to the dev center and assign that identity access to the key vault. Then, you add a catalog that stores your IaC templates to the dev center. Finally, you create environment types to define the types of environments that development teams can create.
--
-The following diagram shows the steps you perform in the [Create and configure a project quickstart](quickstart-create-and-configure-projects.md) to configure a project associated with a dev center for Deployment Environments.
- You need to perform the steps in both quickstarts before you can create a deployment environment.
To create and configure a Dev center in Azure Deployment Environments by using t
:::image type="content" source="media/quickstart-create-and-configure-devcenter/deployment-environments-devcenter-created.png" alt-text="Screenshot that shows the Dev centers overview, to confirm that the dev center is created.":::
-## Create a Key Vault
-You need an Azure Key Vault to store the GitHub personal access token (PAT) that is used to grant Azure access to your GitHub repository. Key Vaults can control access with either access policies or role-based access control (RBAC). If you have an existing key vault, you can use it, but you should check whether it uses access policies or RBAC assignments to control access. In this quickstart, you create an RBAC Key Vault. For help with configuring an access policy for a key vault, see [Assign a Key Vault access policy](/azure/key-vault/general/assign-access-policy?branch=main&tabs=azure-portal).
-
-If you don't have an existing key vault, use the following steps to create one:
+### Create a Key Vault
+When you use a GitHub repository or an Azure DevOps repository to store your [catalog](./concept-environments-key-concepts.md#catalogs), you need an Azure Key Vault to store a personal access token (PAT) that is used to grant Azure access to your repository. Key Vaults can control access with either access policies or role-based access control (RBAC). If you have an existing key vault, you can use it, but you should check whether it uses access policies or RBAC assignments to control access. This quickstart assumes you're using an RBAC Key Vault and a GitHub repository.
-1. Sign in to the [Azure portal](https://portal.azure.com).
-1. In the Search box, enter *Key Vault*.
-1. From the results list, select **Key Vault**.
-1. On the Key Vault page, select **Create**.
-1. On the Create key vault tab, provide the following information:
+If you don't have an existing key vault, follow the steps in [Quickstart: Create a key vault using the Azure portal](/azure/key-vault/general/quick-create-portal) to create one.
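+
+If you script your setup instead, here's a minimal sketch of creating an RBAC-enabled key vault with the Azure CLI; all names are placeholders:
+
+```powershell
+# Create a key vault that uses Azure RBAC, rather than access policies, for authorization.
+az keyvault create `
+  --name <keyVaultName> `
+  --resource-group <resourceGroupName> `
+  --location <region> `
+  --enable-rbac-authorization true
+```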
- |Name |Value |
- |-|--|
- |**Name**|Enter a name for the key vault.|
- |**Subscription**|Select the subscription in which you want to create the key vault.|
- |**Resource group**|Either use an existing resource group or select **Create new** and enter a name for the resource group.|
- |**Location**|Select the location or region where you want to create the key vault.|
-
- Leave the other options at their defaults.
-
-1. On the Access configuration tab, select **Azure role-based access control**, and then select **Review + create**.
-
-1. On the Review + create tab, select **Create**.
-
-## Create a personal access token
+### Configure a personal access token
Using an authentication token like a GitHub PAT enables you to share your repository securely. GitHub offers classic PATs and fine-grained PATs. Fine-grained and classic PATs work with Azure Deployment Environments, but fine-grained tokens give you more granular control over the repositories to which you're allowing access. > [!TIP]
Using an authentication token like a GitHub PAT enables you to share your reposi
- Select **Create**. 1. Leave this tab open; you need to come back to the Key Vault later.
-## Attach an identity to the dev center
+## Configure a managed identity for the dev center
After you create a dev center, attach an [identity](concept-environments-key-concepts.md#identities) to the dev center. You can attach either a system-assigned managed identity or a user-assigned managed identity. Learn about the two [types of identities](how-to-configure-managed-identity.md#add-a-managed-identity).
-In this quickstart, you configure a system-assigned managed identity for your dev center.
+In this quickstart, you configure a system-assigned managed identity for your dev center. You then assign roles to the managed identity to allow the dev center to create environment types in your subscription and read the key vault secret that contains the GitHub PAT.
### Attach a system-assigned managed identity
To attach a system-assigned managed identity to your dev center:
1. In the **Enable system assigned managed identity** dialog, select **Yes**.
-### Assign the system-assigned managed identity access to the key vault secret
-Make sure that the identity has access to the key vault secret that contains the personal access token to access your repository. Key Vaults support two methods of access; Azure role-based access control (RBAC) or Vault access policy. In this quickstart, you use an RBAC key vault.
+### Assign roles for the dev center managed identity
-Configure vault access:
-1. In the Azure portal, go to the key vault that contains the secret with the personal access token.
+The managed identity that represents your dev center requires access to the subscriptions where you configure the [project environment types](concept-environments-key-concepts.md#project-environment-types), and to the key vault secret that stores your GitHub PAT.
-1. In the left menu, select **Access control (IAM)**.
+1. Navigate to your dev center.
+1. On the left menu under Settings, select **Identity**.
+1. Under System assigned > Permissions, select **Azure role assignments**.
-1. Select **Add** > **Add role assignment**.
+ :::image type="content" source="media/quickstart-create-configure-projects/system-assigned-managed-identity.png" alt-text="Screenshot that shows a system-assigned managed identity with Role assignments highlighted.":::
-1. Assign the following role. For detailed steps, see [Assign Azure roles using the Azure portal](../role-based-access-control/role-assignments-portal.md).
+1. To give access to the subscription, select **Add role assignment (Preview)**, enter or select the following information, and then select **Save**:
- | Setting | Value |
- | | |
- | **Role** | Select **Key Vault Secrets User**. |
- | **Assign access to** | Select **Managed identity**. |
- | **Members** | Select the dev center managed identity that you created in [Attach a system-assigned managed identity](#attach-a-system-assigned-managed-identity). |
+ |Name |Value |
+ ||-|
+ |**Scope**|Subscription|
+ |**Subscription**|Select the subscription in which to use the managed identity.|
+ |**Role**|Owner|
+
+1. To give access to the key vault, select **Add role assignment (Preview)**, enter or select the following information, and then select **Save**:
+
+ |Name |Value |
+ ||-|
+ |**Scope**|Key Vault|
+ |**Subscription**|Select the subscription in which to use the managed identity.|
+ |**Resource**|Select the key vault that you created earlier.|
+ |**Role**|Key Vault Secrets User|
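+
+If you prefer to script these role assignments, here's a hedged Azure CLI sketch; the principal ID and scope values are placeholders you'd look up for your own dev center:
+
+```powershell
+# Grant the dev center's managed identity Owner on the deployment subscription.
+az role assignment create `
+  --assignee-object-id <devCenterPrincipalId> `
+  --role "Owner" `
+  --scope "/subscriptions/<subscriptionId>"
+
+# Let the identity read the key vault secret that stores the GitHub PAT.
+az role assignment create `
+  --assignee-object-id <devCenterPrincipalId> `
+  --role "Key Vault Secrets User" `
+  --scope "/subscriptions/<subscriptionId>/resourceGroups/<resourceGroupName>/providers/Microsoft.KeyVault/vaults/<keyVaultName>"
+```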
## Add a catalog to the dev center Azure Deployment Environments supports attaching Azure DevOps repositories and GitHub repositories. You can store a set of curated IaC templates in a repository. Attaching the repository to a dev center as a catalog gives your development teams access to the templates and enables them to quickly create consistent environments.
You also need the path to the secret you created in the key vault.
| **Git clone URI** | Enter or paste the clone URL for either your GitHub repository or your Azure DevOps repository.<br />*Sample catalog example:* `https://github.com/Azure/deployment-environments.git` | | **Branch** | Enter the repository branch to connect to.<br />*Sample catalog example:* `main`| | **Folder path** | Enter the folder path relative to the clone URI that contains subfolders that hold your environment definitions. <br /> The folder path is for the folder with subfolders containing environment definition manifests, not for the folder with the environment definition manifest itself. The following image shows the sample catalog folder structure.<br />*Sample catalog example:* `/Environments`<br /> :::image type="content" source="media/how-to-configure-catalog/github-folders.png" alt-text="Screenshot showing Environments sample folder in GitHub."::: The folder path can begin with or without a forward slash (`/`).|
- | **Secret identifier**| Enter the [secret identifier](#create-a-personal-access-token) that contains your personal access token for the repository.<br /> When you copy a secret identifier, the connection string includes a version identifier at the end, like in this example: `https://contoso-kv.vault.azure.net/secrets/GitHub-repo-pat/9376b432b72441a1b9e795695708ea5a`.<br />Removing the version identifier ensures that Deployment Environments fetches the latest version of the secret from the key vault. If your personal access token expires, only the key vault needs to be updated. <br />*Example secret identifier:* `https://contoso-kv.vault.azure.net/secrets/GitHub-repo-pat`|
+ | **Secret identifier**| Enter the [secret identifier](#configure-a-personal-access-token) that contains your personal access token for the repository.<br /> When you copy a secret identifier, the connection string includes a version identifier at the end, like in this example: `https://contoso-kv.vault.azure.net/secrets/GitHub-repo-pat/9376b432b72441a1b9e795695708ea5a`.<br />Removing the version identifier ensures that Deployment Environments fetches the latest version of the secret from the key vault. If your personal access token expires, only the key vault needs to be updated. <br />*Example secret identifier:* `https://contoso-kv.vault.azure.net/secrets/GitHub-repo-pat`|
:::image type="content" source="media/how-to-configure-catalog/add-catalog-form-inline.png" alt-text="Screenshot that shows how to add a catalog to a dev center." lightbox="media/how-to-configure-catalog/add-catalog-form-expanded.png":::
deployment-environments Quickstart Create And Configure Projects https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/deployment-environments/quickstart-create-and-configure-projects.md
Previously updated : 04/25/2023 Last updated : 09/06/2023 # Quickstart: Create and configure a project
-This quickstart shows you how to create a project in Azure Deployment Environments. Then, you associate the project with the dev center you created in [Quickstart: Create and configure a dev center](./quickstart-create-and-configure-devcenter.md).
+This quickstart shows you how to create a project in Azure Deployment Environments, and associate the project with the dev center you created in [Quickstart: Create and configure a dev center](./quickstart-create-and-configure-devcenter.md).
-A platform engineering team typically creates projects and provides project access to development teams. Development teams then create [environments](concept-environments-key-concepts.md#environments) by using [environment definitions](concept-environments-key-concepts.md#environment-definitions), connect to individual resources, and deploy applications.
-
-The following diagram shows the steps you perform in the [Create and configure a dev center for Azure Deployment Environments](quickstart-create-and-configure-devcenter.md) quickstart to configure a dev center for Azure Deployment Environments in the Azure portal. You must perform these steps before you can create a project.
-
-
The following diagram shows the steps you perform in this quickstart to configure a project associated with a dev center for Deployment Environments in the Azure portal. First, you create a project. Then, you assign the dev center managed identity the Owner role on the subscription. Next, you configure the project by creating a project environment type. Finally, you give the development team access to the project by assigning the [Deployment Environments User](how-to-configure-deployment-environments-user.md) role to the project. You need to perform the steps in both quickstarts before you can create a deployment environment.
-For more information on how to create an environment, see [Quickstart: Create and access Azure Deployment Environments by using the developer portal](quickstart-create-access-environments.md).
- ## Prerequisites - An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). - Azure role-based access control role with permissions to create and manage resources in the subscription, such as [Contributor](../role-based-access-control/built-in-roles.md#contributor) or [Owner](../role-based-access-control/built-in-roles.md#owner).
+- An Azure Deployment Environments dev center with a catalog attached. If you don't have a dev center with a catalog, see [Quickstart: Create and configure a dev center](./quickstart-create-and-configure-devcenter.md).
## Create a project
To create a project in your dev center:
1. On the **Review + Create** tab, wait for deployment validation, and then select **Create**.
- :::image type="content" source="media/quickstart-create-configure-projects/create-project-page-review-create.png" alt-text="Screenshot that shows selecting the Review + Create button to validate and create a project.":::
+ :::image type="content" source="media/quickstart-create-configure-projects/create-project.png" alt-text="Screenshot that shows selecting the create project basics tab.":::
1. Confirm that the project was successfully created by checking your Azure portal notifications. Then, select **Go to resource**.
To create a project in your dev center:
:::image type="content" source="media/quickstart-create-configure-projects/created-project.png" alt-text="Screenshot that shows the project overview pane.":::
-### Assign a managed identity the owner role to the subscription
-Before you can create environment types, you must give the managed identity that represents your dev center access to the subscriptions where you configure the [project environment types](concept-environments-key-concepts.md#project-environment-types).
-
-In this quickstart you assign the Owner role to the system-assigned managed identity that you configured previously: [Attach a system-assigned managed identity](quickstart-create-and-configure-devcenter.md#attach-a-system-assigned-managed-identity).
-
-1. Navigate to your dev center.
-1. On the left menu under Settings, select **Identity**.
-1. Under System assigned > Permissions, select **Azure role assignments**.
-
- :::image type="content" source="media/quickstart-create-configure-projects/system-assigned-managed-identity.png" alt-text="Screenshot that shows a system-assigned managed identity with Role assignments highlighted.":::
-
-1. In Azure role assignments, select **Add role assignment (Preview)**, enter or select the following information, and then select **Save**:
-
- |Name |Value |
- ||-|
- |**Scope**|Subscription|
- |**Subscription**|Select the subscription in which to use the managed identity.|
- |**Role**|Owner|
-
-## Configure a project
+## Create a project environment type
To configure a project, add a [project environment type](how-to-configure-project-environment-types.md):
To configure a project, add a [project environment type](how-to-configure-projec
> [!NOTE] > At least one identity (system-assigned or user-assigned) must be enabled for deployment identity. The identity is used to perform the environment deployment on behalf of the developer. Additionally, the identity attached to the dev center should be [assigned the Owner role](how-to-configure-managed-identity.md) for access to the deployment subscription for each environment type.
-## Give project access to the development team
+## Give access to the development team
1. In the Azure portal, go to your project.
dev-box How To Determine Your Quota Usage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/how-to-determine-your-quota-usage.md
Last updated 08/21/2023
-# Determine resource usage and quota
+# Determine resource usage and quota for Microsoft Dev Box
To ensure that resources are available for customers, Microsoft Dev Box has a limit on the number of each type of resource that can be used in a subscription. This limit is called a quota. Keeping track of how your quota of VM cores is being used across your subscriptions can be difficult. You may want to know what your current usage is, how much you have left, and in what regions you have capacity. To help you understand where and how you're using your quota, Azure provides the Usage + Quotas page.
-## Determine your usage and quota
+## Determine your Dev Box usage and quota by subscription
1. In the [Azure portal](https://portal.azure.com), go to the subscription you want to examine.
dev-box How To Request Quota Increase https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/how-to-request-quota-increase.md
Last updated 08/22/2023
-# Request a quota limit increase
+# Request a quota limit increase for Microsoft Dev Box resources
This article describes how to submit a support request for increasing the number of resources for Microsoft Dev Box in your Azure subscription. When you reach the limit for a resource in your subscription, you can request a limit increase (sometimes called a capacity increase, or a quota increase) to extend the number of resources available. The request process allows the Microsoft Dev Box team to ensure that your subscription isn't involved in any cases of fraud or unintentional, sudden large-scale deployments.
-The time it takes to increase your quota varies depending on the VM size, region, and number of resources requested. You won't have to go through the process of requesting extra capacity often, but to ensure you have the resources you require when you need them, you should:
+The time it takes to increase your quota varies depending on the VM size, region, and number of resources requested. You won't have to go through the process of requesting extra capacity often. To ensure you have the resources you require when you need them, you should:
- Request capacity as far in advance as possible. - If possible, be flexible on the region where you're requesting capacity.
event-hubs Event Hubs Dedicated Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/event-hubs-dedicated-overview.md
With self-serve scalable clusters, you can purchase up to 10 CUs for a cluster i
If you need a cluster larger than 10 CU, you can [submit a support request](event-hubs-dedicated-cluster-create-portal.md#submit-a-support-request) to scale up your cluster after its creation. > [!IMPORTANT]
-> Self-serve scalable Dedicated can be deployed with [availability zones](../availability-zones/az-overview.md#azure-regions-with-availability-zones) enabled with 3 CUs but you won't be able to use the self-serve scaling capability to scale the cluster. You must instead [submit a support request](event-hubs-dedicated-cluster-create-portal.md#submit-a-support-request) to scale the AZ enabled cluster.
+> Self-serve scalable Dedicated clusters can be deployed with [availability zones](../availability-zones/az-overview.md#azure-regions-with-availability-zones) enabled with 3 CUs, but you can't use the self-serve scaling capability to scale the cluster. To create or scale an AZ-enabled self-serve cluster, you must [submit a support request](event-hubs-dedicated-cluster-create-portal.md#submit-a-support-request).
### Legacy clusters Event Hubs Dedicated clusters created prior to the availability of self-serve scalable clusters are referred to as legacy clusters.
event-hubs Event Hubs Dotnet Standard Getstarted Send https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/event-hubs-dotnet-standard-getstarted-send.md
This section shows you how to create a .NET Core console application to send eve
{ // if it is too large for the batch throw new Exception($"Event {i} is too large for the batch and cannot be sent.");
- Console.ReadLine();
} }
hdinsight Hbase Troubleshoot Hbase Hbck Inconsistencies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hbase/hbase-troubleshoot-hbase-hbck-inconsistencies.md
Title: hbase hbck returns inconsistencies in Azure HDInsight
description: hbase hbck returns inconsistencies in Azure HDInsight Previously updated : 08/28/2022 Last updated : 09/19/2023 # Scenario: `hbase hbck` command returns inconsistencies in Azure HDInsight
Varies.
## Issue: Region is offline
-Region xxx not deployed on any RegionServer. This means the region is in `hbase:meta`, but offline.
+Region xxx not deployed on any RegionServer. This message means the region is in `hbase:meta`, but offline.
### Cause
Bring regions online by running:
hbase hbck -ignorePreCheckPermission -fixAssignment ```
-Alternatively, run `assign <region-hash>` on hbase-shell to force to assign this region
+Alternatively, run `assign <region-hash>` in the HBase shell to force assign this region.
Varies.
### Resolution
-Manually merge those overlapped regions. Go to HBase HMaster Web UI table section, select the table link, which has the issue. You will see start key/end key of each region belonging to that table. Then merge those overlapped regions. In HBase shell, do `merge_region 'xxxxxxxx','yyyyyyy', true`. For example:
+Manually merge the overlapping regions. Go to the table section of the HBase HMaster web UI and select the link for the table that has the issue. You see the start key/end key of each region belonging to that table. Then merge the overlapping regions. In the HBase shell, run `merge_region 'xxxxxxxx','yyyyyyy', true`. For example:
``` RegionA, startkey:001, endkey:010,
Can't load `.regioninfo` for region `/hbase/data/default/tablex/regiony`.
### Cause
-This is most likely due to region partial deletion when RegionServer crashes or VM reboots. Currently, the Azure Storage is a flat blob file system and some file operations are not atomic.
+This error is most likely due to partial deletion of the region when a RegionServer crashes or a VM reboots. Currently, Azure Storage is a flat blob file system, and some file operations aren't atomic.
### Resolution
hdinsight Hdinsight Custom Ambari Db https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-custom-ambari-db.md
description: Learn how to create HDInsight clusters with your own custom Apache
Previously updated : 08/16/2022 Last updated : 09/29/2023 # Set up HDInsight clusters with a custom Ambari DB
The custom Ambari DB feature allows you to deploy a new cluster and setup Ambari
The remainder of this article discusses the following points: - requirements to use the custom Ambari DB feature-- the steps necessary to provision HDInsight clusters using your own external database for Apache Ambari
+- the steps necessary to provision an HDInsight cluster using your own external database for Apache Ambari
## Custom Ambari DB requirements
The custom Ambari DB has the following other requirements:
- You must have an existing Azure SQL DB server and database. - The database that you provide for Ambari setup must be empty. There should be no tables in the default dbo schema. - The user used to connect to the database should have SELECT, CREATE TABLE, and INSERT permissions on the database.-- Turn on the option to [Allow access to Azure services](/azure/azure-sql/database/vnet-service-endpoint-rule-overview#azure-portal-steps) on the server where you will host Ambari.
+- Turn on the option to [Allow access to Azure services](/azure/azure-sql/database/vnet-service-endpoint-rule-overview#azure-portal-steps) on the server where you host Ambari.
- Management IP addresses from HDInsight service need to be allowed in the firewall rule. See [HDInsight management IP addresses](hdinsight-management-ip-addresses.md) for a list of the IP addresses that must be added to the server-level firewall rule. When you host your Apache Ambari DB in an external database, remember the following points: -- You're responsible for the additional costs of the Azure SQL DB that holds Ambari.
+- You're responsible for the extra costs of the Azure SQL DB that holds Ambari.
- Back up your custom Ambari DB periodically. Azure SQL Database generates backups automatically, but the backup retention time-frame varies. For more information, see [Learn about automatic SQL Database backups](/azure/azure-sql/database/automated-backups-overview). - Don't change the custom Ambari DB password after the HDInsight cluster reaches the **Running** state. It is not supported.
When you host your Apache Ambari DB in an external database, remember the follow
To create an HDInsight cluster that uses your own external Ambari database, use the [custom Ambari DB Quickstart template](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.hdinsight/hdinsight-custom-ambari-db).
-Edit the parameters in the `azuredeploy.parameters.json` to specify information about your new cluster and the database that will hold Ambari.
+Edit the parameters in the `azuredeploy.parameters.json` file to specify information about your new cluster and the database that holds Ambari.
You can begin the deployment using the Azure CLI. Replace `<RESOURCEGROUPNAME>` with the resource group where you want to deploy your cluster.
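For illustration, here's a minimal sketch of that deployment command, assuming you've downloaded `azuredeploy.json` and `azuredeploy.parameters.json` from the quickstart template locally:

```powershell
# Deploy the custom Ambari DB template into your resource group.
az deployment group create `
  --resource-group <RESOURCEGROUPNAME> `
  --template-file azuredeploy.json `
  --parameters '@azuredeploy.parameters.json'
```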
hdinsight Hdinsight Hadoop Collect Debug Heap Dump Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-hadoop-collect-debug-heap-dump-linux.md
description: Enable heap dumps for Apache Hadoop services from Linux-based HDIns
Previously updated : 07/19/2022 Last updated : 09/19/2023 # Enable heap dumps for Apache Hadoop services on Linux-based HDInsight
hdinsight Hdinsight Migrate Granular Access Cluster Configurations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-migrate-granular-access-cluster-configurations.md
Title: Granular role-based access Azure HDInsight cluster configurations
description: Learn about the changes required as part of the migration to granular role-based access for HDInsight cluster configurations. Previously updated : 06/29/2022 Last updated : 09/19/2023 # Migrate to granular role-based access for cluster configurations
Previously, secrets could be obtained via the HDInsight API by cluster users
possessing the Owner, Contributor, or Reader [Azure roles](../role-based-access-control/rbac-and-directory-admin-roles.md), as they were available to anyone with the `*/read` permission. Secrets are defined as values that could be used to obtain more elevated access than a user's role should allow. These include values such as cluster gateway HTTP credentials, storage account keys, and database credentials.
-Beginning on September 3, 2019, accessing these secrets will require the `Microsoft.HDInsight/clusters/configurations/action` permission, meaning they can no longer be accessed by users with the Reader role. The roles that have this permission are Contributor, Owner, and the new HDInsight Cluster Operator role (more on that below).
+Beginning on September 3, 2019, accessing these secrets requires the `Microsoft.HDInsight/clusters/configurations/action` permission, which means users with the Reader role can no longer access them. The roles that have this permission are Contributor, Owner, and the new HDInsight Cluster Operator role.
-We are also introducing a new [HDInsight Cluster Operator](../role-based-access-control/built-in-roles.md#hdinsight-cluster-operator) role
-that will be able to retrieve secrets without being granted the administrative
-permissions of Contributor or Owner. To summarize:
+We are also introducing a new [HDInsight Cluster Operator](../role-based-access-control/built-in-roles.md#hdinsight-cluster-operator) role that can retrieve secrets without being granted the administrative permissions of Contributor or Owner. To summarize:
| Role | Previously | Going Forward | ||--|--|
The following entities and scenarios are affected:
- [API](#api): Users using the `/configurations` or `/configurations/{configurationName}` endpoints. - [Azure HDInsight Tools for Visual Studio Code](#azure-hdinsight-tools-for-visual-studio-code) version 1.1.1 or below. - [Azure Toolkit for IntelliJ](#azure-toolkit-for-intellij) version 3.20.0 or below.-- [Azure Data Lake and Stream Analytics Tools for Visual Studio](#azure-data-lake-and-stream-analytics-tools-for-visual-studio) below version 2.3.9000.1.
+- [Azure Data Lake and Stream Analytics Tools for Visual Studio](#azure-data-lake-and-stream-analytics-tools-for-visual-studio) below version 2.3.9000.1.
- [Azure Toolkit for Eclipse](#azure-toolkit-for-eclipse) version 3.15.0 or below. - [SDK for .NET](#sdk-for-net) - [versions 1.x or 2.x](#versions-1x-and-2x): Users using the `GetClusterConfigurations`, `GetConnectivitySettings`, `ConfigureHttpSettings`, `EnableHttp` or `DisableHttp` methods from the ConfigurationsOperationsExtensions class.
The following entities and scenarios are affected:
- [SDK for Python](#sdk-for-python): Users using the `get` or `update` methods from the `ConfigurationsOperations` class. - [SDK for Java](#sdk-for-java): Users using the `update` or `get` methods from the `ConfigurationsInner` class. - [SDK for Go](#sdk-for-go): Users using the `Get` or `Update` methods from the `ConfigurationsClient` struct.-- [Az.HDInsight PowerShell](#azhdinsight-powershell) below version 2.0.0.
+- [Az.HDInsight PowerShell](#azhdinsight-powershell) below version 2.0.0.
See the below sections (or use the above links) to see the migration steps for your scenario. ### API
-The following APIs will be changed or deprecated:
+The following APIs are changed or deprecated:
- [**GET /configurations/{configurationName}**](/rest/api/hdinsight/hdinsight-cluster#get-configuration) (sensitive information removed) - Previously used to obtain individual configuration types (including secrets).
If you are using version 3.15.0 or below, update to the [latest version of the A
Update to [version 2.1.0](https://www.nuget.org/packages/Microsoft.Azure.Management.HDInsight/2.1.0) of the HDInsight SDK for .NET. Minimal code modifications may be required if you are using a method affected by these changes: - `ClusterOperationsExtensions.GetClusterConfigurations` will **no longer return sensitive parameters** like storage keys (core-site) or HTTP credentials (gateway).
- - To retrieve all configurations, including sensitive parameters, use `ClusterOperationsExtensions.ListConfigurations` going forward. Note that users with the 'Reader' role will not be able to use this method. This allows for granular control over which users can access sensitive information for a cluster.
+ - To retrieve all configurations, including sensitive parameters, use `ClusterOperationsExtensions.ListConfigurations` going forward. Users with the 'Reader' role can't use this method. This restriction allows for granular control over which users can access sensitive information for a cluster.
- To retrieve just HTTP gateway credentials, use `ClusterOperationsExtensions.GetGatewaySettings`. - `ClusterOperationsExtensions.GetConnectivitySettings` is now deprecated and has been replaced by `ClusterOperationsExtensions.GetGatewaySettings`.
Update to [version 2.1.0](https://www.nuget.org/packages/Microsoft.Azure.Managem
Update to [version 5.0.0](https://www.nuget.org/packages/Microsoft.Azure.Management.HDInsight/5.0.0) or later of the HDInsight SDK for .NET. Minimal code modifications may be required if you are using a method affected by these changes: - [`ConfigurationOperationsExtensions.Get`](/dotnet/api/microsoft.azure.management.hdinsight.configurationsoperationsextensions.get) will **no longer return sensitive parameters** like storage keys (core-site) or HTTP credentials (gateway).
- - To retrieve all configurations, including sensitive parameters, use [`ConfigurationOperationsExtensions.List`](/dotnet/api/microsoft.azure.management.hdinsight.configurationsoperationsextensions.list) going forward.ΓÇ» Note that users with the 'Reader' role will not be able to use this method. This allows for granular control over which users can access sensitive information for a cluster.
+ - To retrieve all configurations, including sensitive parameters, use [`ConfigurationOperationsExtensions.List`](/dotnet/api/microsoft.azure.management.hdinsight.configurationsoperationsextensions.list) going forward. Users with the 'Reader' role can't use this method. This restriction allows for granular control over which users can access sensitive information for a cluster.
- To retrieve just HTTP gateway credentials, use [`ClusterOperationsExtensions.GetGatewaySettings`](/dotnet/api/microsoft.azure.management.hdinsight.clustersoperationsextensions.getgatewaysettings). - [`ConfigurationsOperationsExtensions.Update`](/dotnet/api/microsoft.azure.management.hdinsight.configurationsoperationsextensions.update) is now deprecated and has been replaced by [`ClusterOperationsExtensions.UpdateGatewaySettings`](/dotnet/api/microsoft.azure.management.hdinsight.clustersoperationsextensions.updategatewaysettings). - [`ConfigurationsOperationsExtensions.EnableHttp`](/dotnet/api/microsoft.azure.management.hdinsight.configurationsoperationsextensions.enablehttp) and [`DisableHttp`](/dotnet/api/microsoft.azure.management.hdinsight.configurationsoperationsextensions.disablehttp) are now deprecated. HTTP is now always enabled, so these methods are no longer needed.
Update to [version 5.0.0](https://www.nuget.org/packages/Microsoft.Azure.Managem
Update to [version 1.0.0](https://pypi.org/project/azure-mgmt-hdinsight/1.0.0/) or later of the HDInsight SDK for Python. Minimal code modifications may be required if you are using a method affected by these changes: - [`ConfigurationsOperations.get`](/python/api/azure-mgmt-hdinsight/azure.mgmt.hdinsight.operations.configurationsoperations#get-resource-group-name--cluster-name--configuration-name--custom-headers-none--raw-false-operation-config-) will **no longer return sensitive parameters** like storage keys (core-site) or HTTP credentials (gateway).
- - To retrieve all configurations, including sensitive parameters, use [`ConfigurationsOperations.list`](/python/api/azure-mgmt-hdinsight/azure.mgmt.hdinsight.operations.configurationsoperations#list-resource-group-name--cluster-name--custom-headers-none--raw-false-operation-config-) going forward.ΓÇ» Note that users with the 'Reader' role will not be able to use this method. This allows for granular control over which users can access sensitive information for a cluster.
+ - To retrieve all configurations, including sensitive parameters, use [`ConfigurationsOperations.list`](/python/api/azure-mgmt-hdinsight/azure.mgmt.hdinsight.operations.configurationsoperations#list-resource-group-name--cluster-name--custom-headers-none--raw-false-operation-config-) going forward. Users with the 'Reader' role can't use this method. This restriction allows for granular control over which users can access sensitive information for a cluster.
- To retrieve just HTTP gateway credentials, use [`ClusterOperations.get_gateway_settings`](/python/api/azure-mgmt-hdinsight/azure.mgmt.hdinsight.operations.clustersoperations#get-gateway-settings-resource-group-name--cluster-name--custom-headers-none--raw-false-operation-config-). - [`ConfigurationsOperations.update`](/python/api/azure-mgmt-hdinsight/azure.mgmt.hdinsight.operations.configurationsoperations#update-resource-group-name--cluster-name--configuration-name--parameters--custom-headers-none--raw-false--polling-true-operation-config-) is now deprecated and has been replaced by [`ClusterOperations.update_gateway_settings`](/python/api/azure-mgmt-hdinsight/azure.mgmt.hdinsight.operations.clustersoperations#update-gateway-settings-resource-group-name--cluster-name--parameters--custom-headers-none--raw-false--polling-true-operation-config-).
Update to [version 1.0.0](https://search.maven.org/artifact/com.microsoft.azure.
Update to [version 27.1.0](https://github.com/Azure/azure-sdk-for-go/tree/main/sdk/resourcemanager/hdinsight) or later of the HDInsight SDK for Go. Minimal code modifications may be required if you are using a method affected by these changes: - [`ConfigurationsClient.get`](https://godoc.org/github.com/Azure/azure-sdk-for-go/services/preview/hdinsight/mgmt/2015-03-01-preview/hdinsight#ConfigurationsClient.Get) will **no longer return sensitive parameters** like storage keys (core-site) or HTTP credentials (gateway).
- - To retrieve all configurations, including sensitive parameters, use [`ConfigurationsClient.list`](https://godoc.org/github.com/Azure/azure-sdk-for-go/services/preview/hdinsight/mgmt/2015-03-01-preview/hdinsight#ConfigurationsClient.List) going forward.ΓÇ» Note that users with the 'Reader' role will not be able to use this method. This allows for granular control over which users can access sensitive information for a cluster.
+ - To retrieve all configurations, including sensitive parameters, use [`ConfigurationsClient.list`](https://godoc.org/github.com/Azure/azure-sdk-for-go/services/preview/hdinsight/mgmt/2015-03-01-preview/hdinsight#ConfigurationsClient.List) going forward. Users with the 'Reader' role can't use this method. This restriction allows for granular control over which users can access sensitive information for a cluster.
- To retrieve just HTTP gateway credentials, use [`ClustersClient.get_gateway_settings`](https://godoc.org/github.com/Azure/azure-sdk-for-go/services/preview/hdinsight/mgmt/2015-03-01-preview/hdinsight#ClustersClient.GetGatewaySettings). - [`ConfigurationsClient.update`](https://godoc.org/github.com/Azure/azure-sdk-for-go/services/preview/hdinsight/mgmt/2015-03-01-preview/hdinsight#ConfigurationsClient.Update) is now deprecated and has been replaced by [`ClustersClient.update_gateway_settings`](https://godoc.org/github.com/Azure/azure-sdk-for-go/services/preview/hdinsight/mgmt/2015-03-01-preview/hdinsight#ClustersClient.UpdateGatewaySettings).
Update to [version 27.1.0](https://github.com/Azure/azure-sdk-for-go/tree/main/s
Update to [Az PowerShell version 2.0.0](https://www.powershellgallery.com/packages/Az) or later to avoid interruptions. Minimal code modifications may be required if you are using a method affected by these changes. - `Grant-AzHDInsightHttpServicesAccess` is now deprecated and has been replaced by the new `Set-AzHDInsightGatewayCredential` cmdlet. - `Get-AzHDInsightJobOutput` has been updated to support granular role-based access to the storage key.
- - Users with HDInsight Cluster Operator, Contributor, or Owner roles will not be affected.
- - Users with only the Reader role will need to specify the `DefaultStorageAccountKey` parameter explicitly.
+ - Users with HDInsight Cluster Operator, Contributor, or Owner roles are not affected.
+ - Users with only the Reader role need to specify the `DefaultStorageAccountKey` parameter explicitly, as shown in the sketch after this list.
- `Revoke-AzHDInsightHttpServicesAccess` is now deprecated. HTTP is now always enabled, so this cmdlet is no longer needed. See the [az.HDInsight migration guide](https://github.com/Azure/azure-powershell/blob/master/documentation/migration-guides/Az.2.0.0-migration-guide.md#azhdinsight) for more details.
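For Reader-role users, here's a hedged sketch of passing the key explicitly; the cluster name, job ID, and key are placeholders:

```powershell
# Reader-role users must supply the default storage account key themselves.
$storageKey = "<defaultStorageAccountKey>"
Get-AzHDInsightJobOutput `
    -ClusterName "<clusterName>" `
    -JobId "<jobId>" `
    -DefaultStorageAccountKey $storageKey
```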
A user with the [Owner](../role-based-access-control/built-in-roles.md#owner) ro
The simplest way to add this role assignment is by using the `az role assignment create` command in Azure CLI. > [!NOTE]
-> This command must be run by a user with the Owner role, as only they can grant these permissions. The `--assignee` is the name of the service principal or email address of the user to whom you want to assign the HDInsight Cluster Operator role. If you receive an insufficient permissions error, see the FAQ below.
+> This command must be run by a user with the Owner role, as only they can grant these permissions. The `--assignee` is the name of the service principal or email address of the user to whom you want to assign the HDInsight Cluster Operator role. If you receive an insufficient permissions error, see the FAQ.
#### Grant role at the resource (cluster) level
Cluster configurations are now behind granular role-based access control and req
In addition to having the Owner role, the user or service principal executing the command needs to have sufficient Azure AD permissions to look up the object IDs of the assignee. This message indicates insufficient Azure AD permissions. Try replacing the `--assignee` argument with `--assignee-object-id` and provide the object ID of the assignee as the parameter instead of the name (or the principal ID in the case of a managed identity). See the optional parameters section of the [az role assignment create documentation](/cli/azure/role/assignment#az-role-assignment-create) for more info.
-If this still doesn't work, contact your Azure AD admin to acquire the correct permissions.
+If it still does not work, contact your Azure AD admin to acquire the correct permissions.
### What will happen if I take no action? Beginning on September 3, 2019, `GET /configurations` and `POST /configurations/gateway` calls will no longer return any information and the `GET /configurations/{configurationName}` call will no longer return sensitive parameters, such as storage account keys or the cluster password. The same is true of corresponding SDK methods and PowerShell cmdlets.
-If you are using an older version of one of the tools for Visual Studio, VSCode, IntelliJ or Eclipse mentioned above, they will no longer function until you update.
+If you are using an older version of one of the tools for Visual Studio, VSCode, IntelliJ, or Eclipse mentioned, it no longer functions until you update.
For more detailed information, see the corresponding section of this document for your scenario.
hdinsight Network Virtual Appliance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/network-virtual-appliance.md
Title: Configure network virtual appliance in Azure HDInsight
-description: Learn how to configure a number of additional features for your network virtual appliance in Azure HDInsight.
+description: Learn how to configure extra features for your network virtual appliance in Azure HDInsight.
Previously updated : 08/30/2022 Last updated : 09/20/2023 # Configure network virtual appliance in Azure HDInsight
Last updated 08/30/2022
> [!Important] > The following information is **only** required if you wish to configure a network virtual appliance (NVA) other than [Azure Firewall](./hdinsight-restrict-outbound-traffic.md).
-Azure Firewall FQDN tag is automatically configured to allow traffic for many of the common important FQDNs. Using another network virtual appliance will require you to configure a number of additional features. Keep the following factors in mind as you configure your network virtual appliance:
+Azure Firewall FQDN tag is automatically configured to allow traffic for many of the common important FQDNs. Using another network virtual appliance requires you to configure extra features. Keep the following factors in mind as you configure your network virtual appliance:
* Service endpoint-capable services can be configured with service endpoints, which result in bypassing the NVA, usually for cost or performance considerations. * If ResourceProviderConnection is set to *outbound*, you can use private endpoints for the storage and SQL servers for metastores, and there is no need to add them to the NVA.
Azure Firewall FQDN tag is automatically configured to allow traffic for many of
## Service endpoint capable dependencies
-You can optionally enable one or more of the following service endpoints which will result in bypassing the NVA. This option can be useful for large amounts of data transfers to save on cost and also for performance optimizations.
+You can optionally enable one or more of the following service endpoints, which result in bypassing the NVA. This option can be useful for large amounts of data transfers to save on cost and also for performance optimizations.
| **Endpoint** | ||
You can optionally enable one or more of the following service endpoints which w
| **Endpoint** | **Details** | |||
-| IPs published [here](hdinsight-management-ip-addresses.md) | These IPs are for HDInsight resource provider and should be included in the UDR to avoid asymmetric routing. This rule is only needed if the ResourceProviderConnection is set to *Inbound*. If the ResourceProviderConnection is set to *Outbound* then these IPs are not needed in the UDR. |
-| AAD-DS private IPs | Only needed for ESP clusters, if the VNETs are not peered.|
+| IPs published [here](hdinsight-management-ip-addresses.md) | These IPs are for HDInsight resource provider and should be included in the UDR to avoid asymmetric routing. This rule is only needed if the ResourceProviderConnection is set to *Inbound*. If the ResourceProviderConnection is set to *Outbound*, then these IPs are not needed in the UDR. |
+| AAD-DS private IPs | Only needed for ESP clusters, if the VNETs are not peered.|
### FQDN HTTP/HTTPS dependencies
-You can get the list of dependent FQDNs (mostly Azure Storage and Azure Service Bus) for configuring your network virtual appliance [in this repo](https://github.com/Azure-Samples/hdinsight-fqdn-lists/). For the regional list see [here](https://github.com/Azure-Samples/hdinsight-fqdn-lists/tree/main/Public). These dependencies are used by HDInsight resource provider(RP) to create and monitor/manage clusters successfully. These include telemetry/diagnostic logs, provisioning metadata, cluster-related configurations, scripts, etc. This FQDN dependency list might change with releasing future HDInsight updates.
+You can get the list of dependent FQDNs (mostly Azure Storage and Azure Service Bus) for configuring your network virtual appliance [in this repo](https://github.com/Azure-Samples/hdinsight-fqdn-lists/). For the regional list, see [here](https://github.com/Azure-Samples/hdinsight-fqdn-lists/tree/main/Public). These dependencies are used by the HDInsight resource provider (RP) to create and monitor/manage clusters successfully. They include telemetry/diagnostic logs, provisioning metadata, cluster-related configurations, scripts, and so on. This FQDN dependency list might change with future HDInsight updates.
-The list below only gives a few FQDNs that may be needed for OS and security patching or certificate validations during the cluster create process and during the lifetime of cluster operations:
+The following list gives a few FQDNs that may be needed for OS and security patching or certificate validations during the cluster create process and during the lifetime of cluster operations:
| **Runtime Dependencies FQDNs** | ||
hdinsight Troubleshoot Debug Wasb https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/spark/troubleshoot-debug-wasb.md
Title: Debug WASB file operations in Azure HDInsight
description: Describes troubleshooting steps and possible resolutions for issues when interacting with Azure HDInsight clusters. Previously updated : 07/19/2022 Last updated : 09/19/2023 # Debug WASB file operations in Azure HDInsight There are times when you may want to understand what operations the WASB driver started with Azure Storage. For the client side, the WASB driver produces logs for each file system operation at **DEBUG** level. WASB driver uses log4j to control logging level and the default is **INFO** level. For Azure Storage server-side analytics logs, see [Azure Storage analytics logging](../../storage/common/storage-analytics-logging.md).
-A produced log will look similar to:
+A produced log looks similar to:
```log 18/05/13 04:15:55 DEBUG NativeAzureFileSystem: Moving wasb://xxx@yyy.blob.core.windows.net/user/livy/ulysses.txt/_temporary/0/_temporary/attempt_20180513041552_0000_m_000000_0/part-00000 to wasb://xxx@yyy.blob.core.windows.net/user/livy/ulysses.txt/part-00000
A produced log will look similar to:
## Additional logging
-The above logs should provide high-level understanding of the file system operations. If the above logs are still not providing useful information, or if you want to investigate blob storage api calls, add `fs.azure.storage.client.logging=true` to the `core-site`. This setting will enable the Java sdk logs for wasb storage driver and will print each call to blob storage server. Remove the setting after investigations because it could fill up the disk quickly and could slow down the process.
+The above logs should provide a high-level understanding of the file system operations. If the above logs still don't provide useful information, or if you want to investigate Blob storage API calls, add `fs.azure.storage.client.logging=true` to the `core-site`. This setting enables the Java SDK logs for the WASB storage driver and prints each call to the Blob storage server. Remove the setting after investigations because it could fill up the disk quickly and could slow down the process.
If the backend is Azure Data Lake based, then use the following log4j setting for the component (for example, spark/tez/hdfs):
healthcare-apis Purge History https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/purge-history.md
Title: Purge history operation for Azure API for FHIR
+ Title: History Management in Azure API for FHIR
description: This article describes the $purge-history operation for Azure API for FHIR.
Last updated 09/27/2023
-# Purge history operation for Azure API for FHIR
+# History management for Azure API for FHIR
[!INCLUDE [retirement banner](../includes/healthcare-apis-azure-api-fhir-retirement.md)]
-`$purge-history` is an operation that allows you to delete the history of a single FHIR resource. This operation isn't defined in the FHIR specification.
+History in FHIR gives you the ability to see all previous versions of a resource. It can be queried at the resource level, type level, or system level. The HL7 FHIR documentation has more information about the [history interaction](https://www.hl7.org/fhir/http.html#history). History is useful in scenarios where you want to see the evolution of a resource in FHIR, or if you want to see the information of a resource at a specific point in time.
+
+All past versions of a resource are considered obsolete and the current version of a resource should be used for normal business workflow operations. However, it can be useful to see the state of a resource as a point in time when a past decision was made.
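+
+As an illustrative sketch (separate from the management options below), the following Python code reads a resource's version history through the standard FHIR history interaction; the service URL and bearer token are placeholders:
+
+```python
+# Minimal sketch, assuming the requests package and a valid bearer token.
+import requests
+
+base = "https://workspace-fhir.fhir.azurehealthcareapis.com"  # placeholder
+resp = requests.get(
+    f"{base}/Observation/123/_history",
+    headers={"Authorization": "Bearer <access-token>"})
+bundle = resp.json()  # a FHIR Bundle of type "history"
+print(bundle.get("total"))  # number of stored versions, when reported
+```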
+
+Azure API for FHIR allows you to manage history with:
+1. Disabling history
+ To disable history, a one-time support ticket needs to be created. After the disable history configuration is set, history isn't created for resources on the FHIR server. The resource version is still incremented.
+ Disabling history won't remove the existing history for any resources in your FHIR service. If you're looking to delete the existing history data in your FHIR service, you must use the $purge-history operation.
+
+1. Purge History: `$purge-history` is an operation that allows you to delete the history of a single FHIR resource. This operation isn't defined in the FHIR specification.
## Overview of purge history
For example:
```http DELETE https://workspace-fhir.fhir.azurehealthcareapis.com/Observation/123/$purge-history ```- ## Next steps In this article, you learned how to purge the history for resources in Azure API for FHIR. For more information about Azure API for FHIR, see
In this article, you learned how to purge the history for resources in Azure API
>[!div class="nextstepaction"] >[FHIR REST API capabilities for Azure API for FHIR](fhir-rest-api-capabilities.md)
-FHIR&#174; is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7.
+FHIR&#174; is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7.
healthcare-apis Troubleshoot Errors Deployment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/troubleshoot-errors-deployment.md
Here's a list of errors that can be found in the Azure Resource Manager (ARM) AP
|--| |[The maximum number of resource type iotconnectors/fhirdestinations has been reached.](#the-maximum-number-of-resource-type-iotconnectorsdestinations-has-been-reached)| |[The fhirServiceResourceId provided is invalid.](#the-fhirserviceresourceid-provided-is-invalid)|
-|[Ancestor resources must be fully provisioned before a child resource can be provisioned.](#ancestor-resources-must-be-fully-provisioned-before-a-child-resource-can-be-provisioned-1)
-|[The location property of child resources must match the location property of parent resources.](#the-location-property-of-child-resources-must-match-the-location-property-of-parent-resources-1)
+|[Ancestor resources must be fully provisioned before a child resource can be provisioned.](#ancestor-resources-must-be-fully-provisioned-before-a-child-resource-can-be-provisioned-1)|
+|[The location property of child resources must match the location property of parent resources.](#the-location-property-of-child-resources-must-match-the-location-property-of-parent-resources-1)|
### The maximum number of resource type iotconnectors/destinations has been reached
healthcare-apis Troubleshoot Errors Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/troubleshoot-errors-logs.md
The errors' names are listed in the following table, and the fixes for them are
|[InvalidFhirServiceException](#invalidfhirserviceexception)| |[InvalidQuantityFhirValueException](#invalidquantityfhirvalueexception)| |[InvalidTemplateException](#invalidtemplateexception)|
-|[ManagedIdentityCredentialNotFound](#managedidentitycredentialnotfound)
+|[ManagedIdentityCredentialNotFound](#managedidentitycredentialnotfound)|
|[MultipleResourceFoundException](#multipleresourcefoundexception)| |[NormalizationDataMappingException](#normalizationdatamappingexception)| |[PatientDeviceMismatchException](#patientdevicemismatchexception)|
To learn about the MedTech service frequently asked questions (FAQs), see
> [!div class="nextstepaction"] > [Frequently asked questions about the MedTech service](frequently-asked-questions.md)
-FHIR&#174; is a registered trademark of Health Level Seven International, registered in the U.S. Trademark Office and is used with their permission.
+FHIR&#174; is a registered trademark of Health Level Seven International, registered in the U.S. Trademark Office and is used with their permission.
iot-develop Iot Device Selection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-develop/iot-device-selection.md
Title: Azure IOT prototyping device selection list description: This document provides guidance on choosing a hardware device for prototyping IoT Azure solutions.--++ Previously updated : 08/03/2022 Last updated : 09/29/2023 # IoT device selection list
All boards listed support users of all experience levels.
[^1]: *If you're new to hardware programming, for MCU dev work we recommend using VS Code Arduino Extension or VS Code Platform IO Extension. For SBC dev work, you program the device like you would a laptop, that is, on the device itself. The Raspberry Pi supports VS Code development.*
-[^2]: *Devices were chosen based on availability of support resources, common boards used for prototyping and PoCs, and boards that support beginner-friendly IDEs like Arduino IDE and VS Code extensions; for example, Arduino Extension and Platform IO extension. For simplicity, we aimed to keep the total device list <6. Some of these metrics are "squishy," which means that other teams and individuals may have chosen to feature different boards based on their interpretation of the criteria.*
+[^2]: *Devices were chosen based on the availability of support resources, common boards used for prototyping and PoCs, and boards that support beginner-friendly IDEs like Arduino IDE and VS Code extensions; for example, Arduino Extension and Platform IO extension. For simplicity, we aimed to keep the total device list <6. Other teams and individuals may have chosen to feature different boards based on their interpretation of the criteria.*
-[^3]: *For bringing devices to production, you'll likely want to test a PoC with a specific chipset, ST's STM32 or Microchip's Pic-IoT breakout board series, design a custom board that can be manufactured for lower cost than the MCUs and SBCs listed here, or even explore FPGA-based dev kits. You may also want to use a development environment for professional electrical engineering like STM32CubeMX or ARM mBed browser-based programmer.*
+[^3]: *For bringing devices to production, you likely want to test a PoC with a specific chipset, ST's STM32 or Microchip's Pic-IoT breakout board series, design a custom board that can be manufactured for lower cost than the MCUs and SBCs listed here, or even explore FPGA-based dev kits. You may also want to use a development environment for professional electrical engineering like STM32CubeMX or ARM mBed browser-based programmer.*
## Contents
All boards listed support users of all experience levels.
Use this document to better understand IoT terminology, device selection considerations, and to choose an IoT device for prototyping or building a proof-of-concept. We recommend the following procedure:
-1. Read through the 'what to consider when choosing a board' section below to identify needs and constraints.
+1. Read through the 'what to consider when choosing a board' section to identify needs and constraints.
2. Use the Application Selection Visual to identify possible options for your IoT scenario.
Use this document to better understand IoT terminology, device selection conside
### What to consider when choosing a board
-Below are some suggestions for criteria to consider when choosing a device for your IoT prototype.
+To choose a device for your IoT prototype, consider the following criteria:
- **Microcontroller unit (MCU) or single board computer (SBC)** - An MCU is preferred for single tasks, like gathering and uploading sensor data or machine learning at the edge. MCUs also tend to be lower cost.
- - An SBC is preferred when you need multiple different tasks, like gathering sensor data and controlling another device. It may also be preferred in the early stages when there are many options for possible solutions - an SBC will enable you to try lots of different approaches.
+ - An SBC is preferred when you need multiple different tasks, like gathering sensor data and controlling another device. It may also be preferred in the early stages when there are many options for possible solutions - an SBC enables you to try lots of different approaches.
- **Processing power**
Below are some suggestions for criteria to consider when choosing a device for y
- **Power consumption**
- - **Power**: Consider how much voltage and current the board consumes. Determine if wall power is readily available or if you'll need a battery for your application.
+ - **Power**: Consider how much voltage and current the board consumes. Determine if wall power is readily available or if you need a battery for your application.
- **Connection**: Consider the physical connection to the power source. If you need battery power, check if there's a battery connection port available on the board. If there's no battery connector, seek another comparable board, or consider other ways to add battery power to your device. - **Inputs and outputs** - **Ports and pins**: Consider how many and of what types of ports and I/O pins your project may require.
- * Additional considerations include if your device will be communicating with other sensors or devices. If so, identify how many ports those signals require.
+ * Other considerations include whether your device will be communicating with other sensors or devices. If so, identify how many ports those signals require.
- **Protocols**: If you're working with other sensors or devices, consider what hardware communication protocols are required. * For example, you may need CAN, UART, SPI, I2C, or other communication protocols.
Below are some suggestions for criteria to consider when choosing a device for y
- **Networking**: Consider if your device is connected to an external network or if it can be kept behind a router and/or firewall. If your prototype needs to be connected to an externally facing network, we recommend using the Azure Sphere as it is the only reliably secure device.
- - **Peripherals**: Consider if any of the peripherals your device connects to will have wireless protocols (for example, WiFi, BLE).
+ - **Peripherals**: Consider if any of the peripherals your device connects to have wireless protocols (for example, WiFi, BLE).
- **Physical location**: Consider if your device or any of the peripherals it's connected to will be accessible to the public. If so, we recommend making the device physically inaccessible. For example, in a closed, locked box.
Terminology and acronyms are listed in alphabetical order.
## MCU device list
-Following is a comparison table of MCUs in alphabetical order. Please note this is an intentionally brief list, it isn't intended to be exhaustive.
+Following is a comparison table of MCUs in alphabetical order. The list isn't intended to be exhaustive.
>[!NOTE] >This list is for educational purposes only, it is not intended to endorse any products. Prices shown represent the average across multiple distributors and are for illustrative purposes only.
Following is a comparison table of MCUs in alphabetical order. Please note this
## SBC device list
-Following is a comparison table of SBCs in alphabetical order. Note this is an intentionally brief list, it isn't intended to be exhaustive.
+Following is a comparison table of SBCs in alphabetical order. This list isn't intended to be exhaustive.
>[!NOTE] >This list is for educational purposes only, it is not intended to endorse any products. Prices shown represent the average across multiple distributors and are for illustrative purposes only.
iot-dps Quick Create Simulated Device X509 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-dps/quick-create-simulated-device-x509.md
In this section, you prepare a development environment that's used to build the
When specifying the path used with `-Dhsm_custom_lib` in the following command, make sure to use the absolute path to the library in the `cmake` directory you previously created. The path shown assumes that you cloned the C SDK in the root directory of the C drive. If you used another directory, adjust the path accordingly.
- **Windows:**
+ # [Windows](#tab/windows)
```cmd cmake -Duse_prov_client:BOOL=ON -Dhsm_custom_lib=c:/azure-iot-sdk-c/cmake/provisioning_client/samples/custom_hsm_example/Debug/custom_hsm_example.lib .. ```
- **Linux:**
+ # [Linux](#tab/linux)
```bash cmake -Duse_prov_client:BOOL=ON -Dhsm_custom_lib=/home/<USER>/azure-iot-sdk-c/cmake/provisioning_client/samples/custom_hsm_example/custom_hsm_example.a .. ```
+
+ >[!TIP] >If `cmake` doesn't find your C++ compiler, you may get build errors while running the above command. If that happens, try running the command in the [Visual Studio command prompt](/dotnet/framework/tools/developer-command-prompt-for-vs).
iot-hub Authenticate Authorize Azure Ad https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/authenticate-authorize-azure-ad.md
+
+ Title: Control access with Azure Active Directory
+
+description: Understand how Azure IoT Hub uses Azure Active Directory to authenticate identities and authorize access to IoT hubs and devices.
+++++ Last updated : 09/01/2023+++
+# Control access to IoT Hub by using Azure Active Directory
+
+You can use Azure Active Directory (Azure AD) to authenticate requests to Azure IoT Hub service APIs, like **create device identity** and **invoke direct method**. You can also use Azure role-based access control (Azure RBAC) to authorize those same service APIs. By using these technologies together, you can grant permissions to access IoT Hub service APIs to an Azure AD security principal. This security principal could be a user, group, or application service principal.
+
+Authenticating access by using Azure AD and controlling permissions by using Azure RBAC provides improved security and ease of use over security tokens. To minimize potential security issues inherent in security tokens, we recommend that you [enforce Azure AD authentication whenever possible](#enforce-azure-ad-authentication).
+
+> [!NOTE]
+> Authentication with Azure AD isn't supported for the IoT Hub *device APIs* (like device-to-cloud messages and update reported properties). Use [symmetric keys](authenticate-authorize-sas.md) or [X.509](authenticate-authorize-x509.md) to authenticate devices to IoT Hub.
+
+## Authentication and authorization
+
+*Authentication* is the process of proving that you are who you say you are. Authentication verifies the identity of a user or device to IoT Hub. It's sometimes shortened to *AuthN*. *Authorization* is the process of confirming permissions for an authenticated user or device on IoT Hub. It specifies what resources and commands you're allowed to access, and what you can do with those resources and commands. Authorization is sometimes shortened to *AuthZ*.
+
+When an Azure AD security principal requests access to an IoT Hub service API, the principal's identity is first *authenticated*. For authentication, the request needs to contain an OAuth 2.0 access token at runtime. The resource name for requesting the token is `https://iothubs.azure.net`. If the application runs in an Azure resource like an Azure VM, Azure Functions app, or Azure App Service app, it can be represented as a [managed identity](../active-directory/managed-identities-azure-resources/how-managed-identities-work-vm.md).
+
+After the Azure AD principal is authenticated, the next step is *authorization*. In this step, IoT Hub uses the Azure AD role assignment service to determine what permissions the principal has. If the principal's permissions match the requested resource or API, IoT Hub authorizes the request. So this step requires one or more Azure roles to be assigned to the security principal. IoT Hub provides some built-in roles that have common groups of permissions.
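+
+As a minimal sketch of the authentication step described above, the following Python code requests such a token with the `azure-identity` package; this is an illustrative example, and `DefaultAzureCredential` resolves to a managed identity, environment credentials, or a developer sign-in:
+
+```python
+# Minimal sketch, assuming the azure-identity package is installed.
+from azure.identity import DefaultAzureCredential
+
+credential = DefaultAzureCredential()
+# ".default" requests the resource-wide scope for the IoT Hub service APIs.
+token = credential.get_token("https://iothubs.azure.net/.default")
+print(token.expires_on)  # epoch seconds at which the token expires
+```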
+
+## Manage access to IoT Hub by using Azure RBAC role assignment
+
+With Azure AD and RBAC, IoT Hub requires the principal requesting the API to have the appropriate level of permission for authorization. To give the principal the permission, give it a role assignment.
+
+- If the principal is a user, group, or application service principal, follow the guidance in [Assign Azure roles by using the Azure portal](../role-based-access-control/role-assignments-portal.md).
+- If the principal is a managed identity, follow the guidance in [Assign a managed identity access to a resource by using the Azure portal](../active-directory/managed-identities-azure-resources/howto-assign-access-portal.md).
+
+To ensure least privilege, always assign the appropriate role at the lowest possible [resource scope](#resource-scope), which is probably the IoT Hub scope.
+
+IoT Hub provides the following Azure built-in roles for authorizing access to IoT Hub service APIs by using Azure AD and RBAC:
+
+| Role | Description |
+| - | -- |
+| [IoT Hub Data Contributor](../role-based-access-control/built-in-roles.md#iot-hub-data-contributor) | Allows full access to IoT Hub data plane operations. |
+| [IoT Hub Data Reader](../role-based-access-control/built-in-roles.md#iot-hub-data-reader) | Allows full read access to IoT Hub data plane properties. |
+| [IoT Hub Registry Contributor](../role-based-access-control/built-in-roles.md#iot-hub-registry-contributor) | Allows full access to the IoT Hub device registry. |
+| [IoT Hub Twin Contributor](../role-based-access-control/built-in-roles.md#iot-hub-twin-contributor) | Allows read and write access to all IoT Hub device and module twins. |
+
+You can also define custom roles to use with IoT Hub by combining the [permissions](#permissions-for-iot-hub-service-apis) that you need. For more information, see [Create custom roles for Azure role-based access control](../role-based-access-control/custom-roles.md).
+
+### Resource scope
+
+Before you assign an Azure RBAC role to a security principal, determine the scope of access that the security principal should have. It's always best to grant only the narrowest possible scope. Azure RBAC roles defined at a broader scope are inherited by the resources beneath them.
+
+This list describes the levels at which you can scope access to IoT Hub, starting with the narrowest scope:
+
+- **The IoT hub.** At this scope, a role assignment applies to the IoT hub. There's no scope smaller than an individual IoT hub. Role assignment at smaller scopes, like individual device identity or twin section, isn't supported.
+- **The resource group.** At this scope, a role assignment applies to all IoT hubs in the resource group.
+- **The subscription.** At this scope, a role assignment applies to all IoT hubs in all resource groups in the subscription.
+- **A management group.** At this scope, a role assignment applies to all IoT hubs in all resource groups in all subscriptions in the management group.
+
+## Permissions for IoT Hub service APIs
+
+The following table describes the permissions available for IoT Hub service API operations. To enable a client to call a particular operation, ensure that the client's assigned RBAC role offers sufficient permissions for the operation.
+
+| RBAC action | Description |
+|-|-|
+| `Microsoft.Devices/IotHubs/devices/read` | Read any device or module identity. |
+| `Microsoft.Devices/IotHubs/devices/write` | Create or update any device or module identity. |
+| `Microsoft.Devices/IotHubs/devices/delete` | Delete any device or module identity. |
+| `Microsoft.Devices/IotHubs/twins/read` | Read any device or module twin. |
+| `Microsoft.Devices/IotHubs/twins/write` | Write any device or module twin. |
+| `Microsoft.Devices/IotHubs/jobs/read` | Return a list of jobs. |
+| `Microsoft.Devices/IotHubs/jobs/write` | Create or update any job. |
+| `Microsoft.Devices/IotHubs/jobs/delete` | Delete any job. |
+| `Microsoft.Devices/IotHubs/cloudToDeviceMessages/send/action` | Send a cloud-to-device message to any device. |
+| `Microsoft.Devices/IotHubs/cloudToDeviceMessages/feedback/action` | Receive, complete, or abandon a cloud-to-device message feedback notification. |
+| `Microsoft.Devices/IotHubs/cloudToDeviceMessages/queue/purge/action` | Delete all the pending commands for a device. |
+| `Microsoft.Devices/IotHubs/directMethods/invoke/action` | Invoke a direct method on any device or module. |
+| `Microsoft.Devices/IotHubs/fileUpload/notifications/action` | Receive, complete, or abandon file upload notifications. |
+| `Microsoft.Devices/IotHubs/statistics/read` | Read device and service statistics. |
+| `Microsoft.Devices/IotHubs/configurations/read` | Read device management configurations. |
+| `Microsoft.Devices/IotHubs/configurations/write` | Create or update device management configurations. |
+| `Microsoft.Devices/IotHubs/configurations/delete` | Delete any device management configuration. |
+| `Microsoft.Devices/IotHubs/configurations/applyToEdgeDevice/action` | Apply the configuration content to an edge device. |
+| `Microsoft.Devices/IotHubs/configurations/testQueries/action` | Validate the target condition and custom metric queries for a configuration. |
+
+> [!TIP]
+> - The [Bulk Registry Update](/rest/api/iothub/service/bulkregistry/updateregistry) operation requires both `Microsoft.Devices/IotHubs/devices/write` and `Microsoft.Devices/IotHubs/devices/delete`.
+> - The [Twin Query](/rest/api/iothub/service/query/gettwins) operation requires `Microsoft.Devices/IotHubs/twins/read`.
+> - [Get Digital Twin](/rest/api/iothub/service/digitaltwin/getdigitaltwin) requires `Microsoft.Devices/IotHubs/twins/read`. [Update Digital Twin](/rest/api/iothub/service/digitaltwin/updatedigitaltwin) requires `Microsoft.Devices/IotHubs/twins/write`.
+> - Both [Invoke Component Command](/rest/api/iothub/service/digitaltwin/invokecomponentcommand) and [Invoke Root Level Command](/rest/api/iothub/service/digitaltwin/invokerootlevelcommand) require `Microsoft.Devices/IotHubs/directMethods/invoke/action`.
+
+> [!NOTE]
+> To get data from IoT Hub by using Azure AD, [set up routing to a separate event hub](iot-hub-devguide-messages-d2c.md#event-hubs-as-a-routing-endpoint). To access the [built-in Event Hubs compatible endpoint](iot-hub-devguide-messages-read-builtin.md), use the connection string (shared access key) method as before.
+
+## Enforce Azure AD authentication
+
+By default, IoT Hub supports service API access through both Azure AD and [shared access policies and security tokens](authenticate-authorize-sas.md). To minimize potential security vulnerabilities inherent in security tokens, you can disable access with shared access policies.
+
+ > [!WARNING]
+ > By denying connections using shared access policies, all users and services that connect using this method lose access immediately. Notably, because the Device Provisioning Service (DPS) only supports linking IoT hubs using shared access policies, all device provisioning flows will fail with an "unauthorized" error. Proceed carefully and plan to replace access with Azure AD role-based access. **Do not proceed if you use DPS**.
+
+1. Ensure that your service clients and users have [sufficient access](#manage-access-to-iot-hub-by-using-azure-rbac-role-assignment) to your IoT hub. Follow the [principle of least privilege](../security/fundamentals/identity-management-best-practices.md).
+1. In the [Azure portal](https://portal.azure.com), go to your IoT hub.
+1. On the left pane, select **Shared access policies**.
+1. Under **Connect using shared access policies**, select **Deny**, and review the warning.
+ :::image type="content" source="media/iot-hub-dev-guide-azure-ad-rbac/disable-local-auth.png" alt-text="Screenshot that shows how to turn off IoT Hub shared access policies." border="true":::
+
+Your IoT Hub service APIs can now be accessed only through Azure AD and RBAC.
+
+## Azure AD access from the Azure portal
+
+You can provide access to IoT Hub from the Azure portal with either shared access policies or Azure AD permissions.
+
+When you try to access IoT Hub from the Azure portal, the Azure portal first checks whether you've been assigned an Azure role with `Microsoft.Devices/iotHubs/listkeys/action`. If you have, the Azure portal uses the keys from shared access policies to access IoT Hub. If not, the Azure portal tries to access data by using your Azure AD account.
+
+To access IoT Hub from the Azure portal by using your Azure AD account, you need permissions to access IoT Hub data resources (like devices and twins). You also need permissions to go to the IoT Hub resource in the Azure portal. The built-in roles provided by IoT Hub grant access to resources like devices and twins, but they don't grant access to the IoT Hub resource. So access to the portal also requires the assignment of an Azure Resource Manager role like [Reader](../role-based-access-control/built-in-roles.md#reader). The Reader role is a good choice because it's the most restricted role that lets you navigate the portal. It doesn't include the `Microsoft.Devices/iotHubs/listkeys/action` permission (which provides access to all IoT Hub data resources via shared access policies).
+
+To ensure an account doesn't have access outside of the assigned permissions, don't include the `Microsoft.Devices/iotHubs/listkeys/action` permission when you create a custom role. For example, to create a custom role that can read device identities but can't create or delete devices, create a custom role that:
+
+- Has the `Microsoft.Devices/IotHubs/devices/read` data action.
+- Doesn't have the `Microsoft.Devices/IotHubs/devices/write` data action.
+- Doesn't have the `Microsoft.Devices/IotHubs/devices/delete` data action.
+- Doesn't have the `Microsoft.Devices/iotHubs/listkeys/action` action.
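+
+As a hedged sketch, such a role could look like the following definition, shown here as a Python dict mirroring the role-definition JSON body; the role name and assignable scope are illustrative:
+
+```python
+# Illustrative custom role only; the name and scope are placeholders.
+custom_role = {
+    "Name": "IoT Hub Device Reader (custom)",
+    "IsCustom": True,
+    "Description": "Read device identities without create, delete, or listkeys.",
+    "Actions": [],  # deliberately omits Microsoft.Devices/iotHubs/listkeys/action
+    "NotActions": [],
+    "DataActions": ["Microsoft.Devices/IotHubs/devices/read"],
+    "NotDataActions": [],
+    "AssignableScopes": ["/subscriptions/<subscription-id>"],
+}
+```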
+
+Then, make sure the account doesn't have any other roles that have the `Microsoft.Devices/iotHubs/listkeys/action` permission, like [Owner](../role-based-access-control/built-in-roles.md#owner) or [Contributor](../role-based-access-control/built-in-roles.md#contributor). To allow the account to have resource access and navigate the portal, assign [Reader](../role-based-access-control/built-in-roles.md#reader).
+
+## Azure AD access from Azure CLI
+
+Most commands against IoT Hub support Azure AD authentication. You can control the type of authentication used to run commands by using the `--auth-type` parameter, which accepts `key` or `login` values. The `key` value is the default.
+
+- When `--auth-type` has the `key` value, as before, the CLI automatically discovers a suitable policy when it interacts with IoT Hub.
+
+- When `--auth-type` has the `login` value, an access token for the Azure CLI logged-in principal is used for the operation.
+
+For more information, see the [Azure IoT extension for Azure CLI release page](https://github.com/Azure/azure-iot-cli-extension/releases/tag/v0.10.12).
+
+## SDK samples
+
+- [.NET SDK sample](https://github.com/Azure/azure-iot-sdk-csharp/blob/main/iothub/service/samples/how%20to%20guides/RoleBasedAuthenticationSample/Program.cs)
+- [Java SDK sample](https://github.com/Azure/azure-iot-service-sdk-java/tree/main/service/iot-service-samples/role-based-authorization-sample)
+
+## Next steps
+
+- For more information on the advantages of using Azure AD in your application, see [Integrating with Azure Active Directory](../active-directory/develop/how-to-integrate.md).
+- For more information on requesting access tokens from Azure AD for users and service principals, see [Authentication scenarios for Azure AD](../active-directory/develop/authentication-vs-authorization.md).
+
+Use the Device Provisioning Service to [Provision multiple X.509 devices using enrollment groups](../iot-dps/tutorial-custom-hsm-enrollment-group-x509.md).
iot-hub Authenticate Authorize Sas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/authenticate-authorize-sas.md
+
+ Title: Control access with shared access signatures
+
+description: Understand how Azure IoT Hub uses shared access signatures (SAS) to authenticate identities and authorize access to IoT hubs and devices.
++++ Last updated : 09/01/2023+++
+# Control access to IoT Hub with shared access signatures
+
+IoT Hub uses shared access signature (SAS) tokens to authenticate devices and services to avoid sending keys on the wire. You use SAS tokens to grant time-bounded access to devices and services to specific functionality in IoT Hub. To get authorization to connect to IoT Hub, devices and services must send SAS tokens signed with either a shared access or symmetric key. Symmetric keys are stored with a device identity in the identity registry.
+
+This article introduces:
+
+* The different permissions that you can grant to a client to access your IoT hub.
+* The tokens IoT Hub uses to verify permissions.
+* How to scope credentials to limit access to specific resources.
+* Custom device authentication mechanisms that use existing device identity registries or authentication schemes.
++
+IoT Hub uses *permissions* to grant access to each IoT hub endpoint. Permissions limit access to an IoT hub based on functionality. You must have appropriate permissions to access any of the IoT Hub endpoints. For example, a device must include a token containing security credentials along with every message it sends to IoT Hub. However, the signing keys, like the device symmetric keys, are never sent over the wire.
+
+## Authentication and authorization
+
+*Authentication* is the process of proving that you are who you say you are. Authentication verifies the identity of a user or device to IoT Hub. It's sometimes shortened to *AuthN*. *Authorization* is the process of confirming permissions for an authenticated user or device on IoT Hub. It specifies what resources and commands you're allowed to access, and what you can do with those resources and commands. Authorization is sometimes shortened to *AuthZ*.
+
+This article describes authentication and authorization using **Shared access signatures**, which lets you group permissions and grant them to applications using access keys and signed security tokens. You can also use symmetric keys or shared access keys to authenticate a device with IoT Hub. SAS tokens provide authentication for each call made by the device to IoT Hub by associating the symmetric key to each call.
+
+## Access control and permissions
+
+Use shared access policies for IoT hub-level access, and use the individual device credentials to scope access to that device only.
+
+### IoT hub-level shared access policies
+
+Shared access policies can grant any combination of permissions. You can define policies in the [Azure portal](https://portal.azure.com), programmatically by using the [IoT Hub Resource REST APIs](/rest/api/iothub/iothubresource), or using the Azure CLI [az iot hub policy](/cli/azure/iot/hub/policy) command. A newly created IoT hub has the following default policies:
+
+| Shared Access Policy | Permissions |
+| -- | -- |
+| iothubowner | All permissions |
+| service | **ServiceConnect** permissions |
+| device | **DeviceConnect** permissions |
+| registryRead | **RegistryRead** permissions |
+| registryReadWrite | **RegistryRead** and **RegistryWrite** permissions |
+
+You can use the following permissions to control access to your IoT hub:
+
+* The **ServiceConnect** permission is used by back-end cloud services and grants the following access:
+ * Access to cloud service-facing communication and monitoring endpoints.
+ * Receive device-to-cloud messages, send cloud-to-device messages, and retrieve the corresponding delivery acknowledgments.
+ * Retrieve delivery acknowledgments for file uploads.
+ * Access twins to update tags and desired properties, retrieve reported properties, and run queries.
+
+* The **DeviceConnect** permission is used by devices and grants the following access:
+ * Access to device-facing endpoints.
+ * Send device-to-cloud messages and receive cloud-to-device messages.
+ * Perform file upload.
+ * Receive device twin desired property notifications and update device twin reported properties.
+
+* The **RegistryRead** permission is used by back-end cloud services and grants the following access:
+ * Read access to the identity registry. For more information, see [Identity registry](iot-hub-devguide-identity-registry.md).
+
+* The **RegistryReadWrite** permission is used by back-end cloud services and grants the following access:
+ * Read and write access to the identity registry. For more information, see [Identity registry](iot-hub-devguide-identity-registry.md).
+
+### Per-device security credentials
+
+Every IoT hub has an identity registry that stores information about the devices and modules permitted to connect to it. Before a device or module can connect, there must be an entry for that device or module in the IoT hub's identity registry. A device or module authenticates with the IoT hub based on credentials stored in the identity registry.
+
+When you register a device to use SAS token authentication, that device gets two *symmetric keys*. Symmetric keys grant the **DeviceConnect** permission for the associated device identity.
+
+## Use SAS tokens from services
+
+Services can generate SAS tokens by using a shared access policy that defines the appropriate permissions as explained previously in the [Access control and permissions](#access-control-and-permissions) section.
+
+As an example, a service using the precreated shared access policy called **registryRead** would create a token with the following parameters:
+
+* resource URI: `{IoT hub name}.azure-devices.net`,
+* signing key: one of the keys of the `registryRead` policy,
+* policy name: `registryRead`,
+* any expiration time.
+
+For example, the following code creates a SAS token in Node.js:
+
+```javascript
+var endpoint = "myhub.azure-devices.net";
+var policyName = 'registryRead';
+var policyKey = '...';
+
+var token = generateSasToken(endpoint, policyKey, policyName, 60);
+```
+
+The result, which grants access to read all device identities in the identity registry, would be:
+
+`SharedAccessSignature sr=myhub.azure-devices.net&sig=JdyscqTpXdEJs49elIUCcohw2DlFDR3zfH5KqGJo4r4%3D&se=1456973447&skn=registryRead`
+
+For more examples, see [Generate SAS tokens](#generate-sas-tokens).
+
+For services, SAS tokens only grant permissions at the IoT Hub level. That is, a service authenticating with a token based on the **service** policy will be able to perform all the operations granted by the **ServiceConnect** permission. These operations include receiving device-to-cloud messages, sending cloud-to-device messages, and so on. If you want to grant more granular access to your services, for example, limiting a service to only sending cloud-to-device messages, you can use Azure Active Directory. To learn more, see [Authenticate with Azure AD](authenticate-authorize-azure-ad.md).
+
+## Use SAS tokens from devices
+
+There are two ways to obtain **DeviceConnect** permissions with IoT Hub with SAS tokens: use a [symmetric device key from the identity registry](#use-a-symmetric-key-in-the-identity-registry), or use a [shared access key](#use-a-shared-access-policy-to-access-on-behalf-of-a-device).
+
+All functionality accessible from devices is exposed by design on endpoints with the prefix `/devices/{deviceId}`.
+
+The device-facing endpoints are (irrespective of the protocol):
+
+| Endpoint | Functionality |
+| | |
+| `{iot hub name}/devices/{deviceId}/messages/events` |Send device-to-cloud messages. |
+| `{iot hub name}/devices/{deviceId}/messages/devicebound` |Receive cloud-to-device messages. |
+
+### Use a symmetric key in the identity registry
+
+When using a device identity's symmetric key to generate a token, the policyName (`skn`) element of the token is omitted.
+
+For example, a token created to access all device functionality should have the following parameters:
+
+* resource URI: `{IoT hub name}.azure-devices.net/devices/{device id}`,
+* signing key: any symmetric key for the `{device id}` identity,
+* no policy name,
+* any expiration time.
+
+For example, the following code creates a SAS token in Node.js:
+
+```javascript
+var endpoint ="myhub.azure-devices.net/devices/device1";
+var deviceKey ="...";
+
+var token = generateSasToken(endpoint, deviceKey, null, 60);
+```
+
+The result, which grants access to all functionality for device1, would be:
+
+`SharedAccessSignature sr=myhub.azure-devices.net%2fdevices%2fdevice1&sig=13y8ejUk2z7PLmvtwR5RqlGBOVwiq7rQR3WZ5xZX3N4%3D&se=1456971697`
+
+For more examples, see [Generate SAS tokens](#generate-sas-tokens).
+
+### Use a shared access policy to access on behalf of a device
+
+When you create a token from a shared access policy, set the `skn` field to the name of the policy. This policy must grant the **DeviceConnect** permission.
+
+The two main scenarios for using shared access policies to access device functionality are:
+
+* [cloud protocol gateways](iot-hub-devguide-endpoints.md),
+* [token services](#create-a-token-service-to-integrate-existing-devices) used to implement custom authentication schemes.
+
+Since the shared access policy can potentially grant access to connect as any device, it is important to use the correct resource URI when creating SAS tokens. This setting is especially important for token services, which have to scope the token to a specific device using the resource URI. This point is less relevant for protocol gateways as they are already mediating traffic for all devices.
+
+As an example, a token service using the precreated shared access policy called **device** would create a token with the following parameters:
+
+* resource URI: `{IoT hub name}.azure-devices.net/devices/{device id}`,
+* signing key: one of the keys of the `device` policy,
+* policy name: `device`,
+* any expiration time.
+
+For example, the following code creates a SAS token in Node.js:
+
+```javascript
+var endpoint ="myhub.azure-devices.net/devices/device1";
+var policyName = 'device';
+var policyKey = '...';
+
+var token = generateSasToken(endpoint, policyKey, policyName, 60);
+```
+
+The result, which grants access to all functionality for device1, would be:
+
+`SharedAccessSignature sr=myhub.azure-devices.net%2fdevices%2fdevice1&sig=13y8ejUk2z7PLmvtwR5RqlGBOVwiq7rQR3WZ5xZX3N4%3D&se=1456971697&skn=device`
+
+A protocol gateway could use the same token for all devices by setting the resource URI to `myhub.azure-devices.net/devices`.
+
+For more examples, see [Generate SAS tokens](#generate-sas-tokens).
+
+## Create a token service to integrate existing devices
+
+You can use the IoT Hub [identity registry](iot-hub-devguide-identity-registry.md) to configure per-device or per-module security credentials and access control using tokens. If an IoT solution already has a custom identity registry and/or authentication scheme, consider creating a *token service* to integrate this infrastructure with IoT Hub. In this way, you can use other IoT features in your solution.
+
+A token service is a custom cloud service. It uses an IoT Hub *shared access policy* with the **DeviceConnect** permission to create *device-scoped* or *module-scoped* tokens. These tokens enable a device or module to connect to your IoT hub.
+
+![Diagram that shows the steps of the token service pattern.](./media/iot-hub-devguide-security/tokenservice.png)
+
+Here are the main steps of the token service pattern:
+
+1. Create an IoT Hub shared access policy with the **DeviceConnect** permission for your IoT hub. You can create this policy in the Azure portal or programmatically. The token service uses this policy to sign the tokens it creates.
+
+2. When a device or module needs to access your IoT hub, it requests a signed token from your token service. The device can authenticate with your custom identity registry/authentication scheme to determine the device/module identity that the token service uses to create the token.
+
+3. The token service returns a token. The token is created by using `/devices/{deviceId}` or `/devices/{deviceId}/modules/{moduleId}` as `resourceURI`, with `deviceId` as the device being authenticated and `moduleId` as the module being authenticated. The token service uses the shared access policy to construct the token.
+
+4. The device/module uses the token directly with the IoT hub.
+
+> [!NOTE]
+> You can use the .NET class [SharedAccessSignatureBuilder](/dotnet/api/microsoft.azure.devices.common.security.sharedaccesssignaturebuilder) or the Java class [IotHubServiceSasToken](/java/api/com.microsoft.azure.sdk.iot.service.auth.iothubservicesastoken) to create a token in your token service.
+
+The token service can set the token expiration as desired. When the token expires, the IoT hub severs the device/module connection. Then, the device/module must request a new token from the token service. A short expiry time increases the load on both the device/module and the token service.
+
+For a device/module to connect to your hub, you must still add it to the IoT Hub identity registry, even though it is using a token and not a key to connect. Therefore, you can continue to use per-device/per-module access control by enabling or disabling device/module identities in the identity registry. This approach mitigates the risks of using tokens with long expiry times.
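+
+Putting the pattern together, here's a minimal token-service sketch in Python that reuses the `generate_sas_token` helper shown later in this article; the policy name and key are placeholders and must belong to a shared access policy with the **DeviceConnect** permission:
+
+```python
+# Minimal sketch; generate_sas_token is the Python helper from the
+# "Generate SAS tokens" section. All values below are placeholders.
+def issue_device_token(hub_hostname, device_id, policy_name, policy_key,
+                       ttl_minutes=60):
+    # Scope the token to one device so it cannot connect as any other.
+    resource_uri = "{}/devices/{}".format(hub_hostname, device_id)
+    return generate_sas_token(resource_uri, policy_key, policy_name,
+                              ttl_minutes * 60)
+
+token = issue_device_token("myhub.azure-devices.net", "device1",
+                           "device", "<device-policy-key>")
+```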
+
+### Comparison with a custom gateway
+
+The token service pattern is the recommended way to implement a custom identity registry/authentication scheme with IoT Hub. This pattern is recommended because IoT Hub continues to handle most of the solution traffic. However, if the custom authentication scheme is so intertwined with the protocol, you may require a *custom gateway* to process all the traffic. An example of such a scenario is using [Transport Layer Security (TLS) and preshared keys (PSKs)](https://tools.ietf.org/html/rfc4279). For more information, see [How an IoT Edge device can be used as a gateway](../iot-edge/iot-edge-as-gateway.md).
+
+## Generate SAS tokens
+
+Azure IoT SDKs automatically generate tokens, but some scenarios do require you to generate and use SAS tokens directly, including:
+
+* The direct use of the MQTT, AMQP, or HTTPS surfaces.
+
+* The implementation of the token service pattern, as explained in the [Create a token service](#create-a-token-service-to-integrate-existing-devices) section.
+
+A token signed with a shared access key grants access to all the functionality associated with the shared access policy permissions. A token signed with a device identity's symmetric key only grants the **DeviceConnect** permission for the associated device identity.
+
+This section provides examples of generating SAS tokens in different code languages. You can also generate SAS tokens with the CLI extension command [az iot hub generate-sas-token](/cli/azure/iot/hub#az-iot-hub-generate-sas-token), or the [Azure IoT Hub extension for Visual Studio Code](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-toolkit).
+
+### SAS token structure
+
+A SAS token has the following format:
+
+`SharedAccessSignature sig={signature-string}&se={expiry}&skn={policyName}&sr={URL-encoded-resourceURI}`
+
+Here are the expected values:
+
+| Value | Description |
+| | |
+| {signature} |An HMAC-SHA256 signature string of the form: `{URL-encoded-resourceURI} + "\n" + expiry`. **Important**: The key is decoded from base64 and used as the key to perform the HMAC-SHA256 computation. |
+| {resourceURI} |URI prefix (by segment) of the endpoints that can be accessed with this token, starting with the host name of the IoT hub (no protocol). SAS tokens granted to backend services are scoped to the IoT hub level; for example, `myHub.azure-devices.net`. SAS tokens granted to devices must be scoped to an individual device; for example, `myHub.azure-devices.net/devices/device1`. |
+| {expiry} |UTF-8 string for the number of seconds since the epoch, 00:00:00 UTC on 1 January 1970. |
+| {URL-encoded-resourceURI} |Lowercase URL-encoding of the lowercase resource URI. |
+| {policyName} |The name of the shared access policy to which this token refers. Absent if the token refers to device-registry credentials. |
+
+The URI prefix is computed by segment and not by character. For example, `/a/b` is a prefix for `/a/b/c` but not for `/a/bc`.
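+
+An illustrative helper (not part of any SDK) that captures this segment-wise prefix rule:
+
+```python
+def is_segment_prefix(prefix, uri):
+    # "/a/b" prefixes "/a/b/c" but not "/a/bc": match whole segments only.
+    return uri == prefix or uri.startswith(prefix + "/")
+```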
+
+### [Node.js](#tab/node)
+
+The following code generates a SAS token using the resource URI, signing key, policy name, and expiration period. The next sections detail how to initialize the different inputs for the different token use cases.
+
+```javascript
+var generateSasToken = function(resourceUri, signingKey, policyName, expiresInMins) {
+ resourceUri = encodeURIComponent(resourceUri);
+
+ // Set expiration in seconds
+ var expires = (Date.now() / 1000) + expiresInMins * 60;
+ expires = Math.ceil(expires);
+ var toSign = resourceUri + '\n' + expires;
+
+ // Use crypto
+ var hmac = crypto.createHmac('sha256', Buffer.from(signingKey, 'base64'));
+ hmac.update(toSign);
+ var base64UriEncoded = encodeURIComponent(hmac.digest('base64'));
+
+ // Construct authorization string
+ var token = "SharedAccessSignature sr=" + resourceUri + "&sig="
+ + base64UriEncoded + "&se=" + expires;
+ if (policyName) token += "&skn="+policyName;
+ return token;
+};
+```
+
+### [Python](#tab/python)
+
+The following code generates a SAS token using the resource URI, signing key, policy name, and expiration period. The next sections detail how to initialize the different inputs for the different token use cases.
+
+```python
+from base64 import b64encode, b64decode
+from hashlib import sha256
+from time import time
+from urllib import parse
+from hmac import HMAC
+
+def generate_sas_token(uri, key, policy_name, expiry=3600):
+ ttl = time() + expiry
+ sign_key = "%s\n%d" % ((parse.quote_plus(uri)), int(ttl))
+ signature = b64encode(HMAC(b64decode(key), sign_key.encode('utf-8'), sha256).digest())
+
+ rawtoken = {
+ 'sr' : uri,
+ 'sig': signature,
+ 'se' : str(int(ttl))
+ }
+
+ if policy_name is not None:
+ rawtoken['skn'] = policy_name
+
+ return 'SharedAccessSignature ' + parse.urlencode(rawtoken)
+```
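+
+For example, you might call this function as follows. This is a minimal sketch: the host name, device ID, keys, and policy name are placeholders, and the policy name is omitted (`None`) for a device-scoped token, as described earlier.
+
+```python
+# Device-scoped token: signed with the device's symmetric key; grants only
+# DeviceConnect for that device, so no policy name is passed.
+device_token = generate_sas_token(
+    "myhub.azure-devices.net/devices/device1",
+    "<base64-device-symmetric-key>",
+    None)
+
+# Hub-scoped token: signed with a shared access policy key (the policy name
+# here is a placeholder) and scoped to the whole IoT hub.
+service_token = generate_sas_token(
+    "myhub.azure-devices.net",
+    "<base64-policy-key>",
+    "registryRead")
+```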
+
+### [C#](#tab/csharp)
+
+The following code generates a SAS token using the resource URI, signing key, policy name, and expiration period. The next sections detail how to initialize the different inputs for the different token use cases.
+
+```csharp
+using System;
+using System.Globalization;
+using System.Net;
+using System.Net.Http;
+using System.Security.Cryptography;
+using System.Text;
+
+public static string GenerateSasToken(string resourceUri, string key, string policyName, int expiryInSeconds = 3600)
+{
+ TimeSpan fromEpochStart = DateTime.UtcNow - new DateTime(1970, 1, 1);
+ string expiry = Convert.ToString((int)fromEpochStart.TotalSeconds + expiryInSeconds);
+
+ string stringToSign = WebUtility.UrlEncode(resourceUri) + "\n" + expiry;
+
+ HMACSHA256 hmac = new HMACSHA256(Convert.FromBase64String(key));
+ string signature = Convert.ToBase64String(hmac.ComputeHash(Encoding.UTF8.GetBytes(stringToSign)));
+
+ string token = String.Format(CultureInfo.InvariantCulture, "SharedAccessSignature sr={0}&sig={1}&se={2}", WebUtility.UrlEncode(resourceUri), WebUtility.UrlEncode(signature), expiry);
+
+ if (!String.IsNullOrEmpty(policyName))
+ {
+ token += "&skn=" + policyName;
+ }
+
+ return token;
+}
+```
+
+### [Java](#tab/java)
+
+The following code generates a SAS token using the resource URI and signing key. The expiration period is set to one hour from the current time. The next sections detail how to initialize the different inputs for the different token use cases.
+
+```java
+import java.net.URLEncoder;
+import java.nio.charset.StandardCharsets;
+import java.time.Instant;
+import java.util.Base64;
+import javax.crypto.Mac;
+import javax.crypto.spec.SecretKeySpec;
+
+public static String generateSasToken(String resourceUri, String key) throws Exception {
+ // Token will expire in one hour
+ var expiry = Instant.now().getEpochSecond() + 3600;
+
+ String stringToSign = URLEncoder.encode(resourceUri, StandardCharsets.UTF_8) + "\n" + expiry;
+ byte[] decodedKey = Base64.getDecoder().decode(key);
+
+ Mac sha256HMAC = Mac.getInstance("HmacSHA256");
+ SecretKeySpec secretKey = new SecretKeySpec(decodedKey, "HmacSHA256");
+ sha256HMAC.init(secretKey);
+ Base64.Encoder encoder = Base64.getEncoder();
+
+ String signature = new String(encoder.encode(
+ sha256HMAC.doFinal(stringToSign.getBytes(StandardCharsets.UTF_8))), StandardCharsets.UTF_8);
+
+ String token = "SharedAccessSignature sr=" + URLEncoder.encode(resourceUri, StandardCharsets.UTF_8)
+ + "&sig=" + URLEncoder.encode(signature, StandardCharsets.UTF_8.name()) + "&se=" + expiry;
+
+ return token;
+}
+```
++
+### Protocol specifics
+
+Each supported protocol, such as MQTT, AMQP, and HTTPS, transports tokens in different ways.
+
+When using MQTT, the CONNECT packet has the deviceId as the ClientId, `{iothubhostname}/{deviceId}` in the Username field, and a SAS token in the Password field. `{iothubhostname}` should be the full host name of the IoT hub (for example, myhub.azure-devices.net).
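+
+As an illustration, the following minimal sketch sets these CONNECT values using the `paho-mqtt` Python package (v1.x constructor shown). The host name, device ID, and token are placeholders, and details such as keep-alive or API version query parameters may vary by deployment; this isn't an Azure IoT SDK sample.
+
+```python
+import ssl
+import paho.mqtt.client as mqtt
+
+hub_name = "myhub.azure-devices.net"        # placeholder IoT hub host name
+device_id = "device1"                       # placeholder device ID
+sas_token = "SharedAccessSignature sr=..."  # device-scoped token, generated as shown earlier
+
+client = mqtt.Client(client_id=device_id, protocol=mqtt.MQTTv311)
+# Username: {iothubhostname}/{deviceId}; Password: the SAS token.
+client.username_pw_set(username=hub_name + "/" + device_id, password=sas_token)
+client.tls_set(cert_reqs=ssl.CERT_REQUIRED)  # IoT Hub requires TLS on port 8883
+client.connect(hub_name, port=8883)
+```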
+
+When using [AMQP](https://www.amqp.org/), IoT Hub supports [SASL PLAIN](https://tools.ietf.org/html/rfc4616) and [AMQP Claims-Based-Security](https://www.oasis-open.org/committees/download.php/50506/amqp-cbs-v1%200-wd02%202013-08-12.doc).
+
+If you use AMQP claims-based-security, the standard specifies how to transmit these tokens.
+
+For SASL PLAIN, the **username** can be:
+
+* `{policyName}@sas.root.{iothubName}` if using IoT hub-level tokens.
+* `{deviceId}@sas.{iothubname}` if using device-scoped tokens.
+
+In both cases, the password field contains the token, as described in [SAS token structure](#sas-token-structure).
+
+HTTPS implements authentication by including a valid token in the **Authorization** request header.
+
+For example, Username (DeviceId is case-sensitive):
+`iothubname.azure-devices.net/DeviceId`
+
+Password (You can generate a SAS token with the CLI extension command [az iot hub generate-sas-token](/cli/azure/iot/hub#az-iot-hub-generate-sas-token), or the [Azure IoT Hub extension for Visual Studio Code](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-toolkit)):
+
+`SharedAccessSignature sr=iothubname.azure-devices.net%2fdevices%2fDeviceId&sig=kPszxZZZZZZZZZZZZZZZZZAhLT%2bV7o%3d&se=1487709501`
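+
+As a rough illustration, the following sketch sends a device-to-cloud message over HTTPS with the Python `requests` package. The host name, device ID, and token are placeholders, and the `api-version` value is an assumption; check the IoT Hub REST API reference for the current version.
+
+```python
+import requests
+
+hub_name = "iothubname.azure-devices.net"   # placeholder
+device_id = "DeviceId"                      # case-sensitive, as noted above
+sas_token = "SharedAccessSignature sr=..."  # device-scoped token
+
+# The SAS token goes in the Authorization header as-is.
+url = f"https://{hub_name}/devices/{device_id}/messages/events?api-version=2021-04-12"
+response = requests.post(url, data=b"hello", headers={"Authorization": sas_token})
+print(response.status_code)  # 204 indicates the message was accepted
+```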
+
+> [!NOTE]
+> The [Azure IoT SDKs](iot-hub-devguide-sdks.md) automatically generate tokens when connecting to the service. However, not all SDKs support every protocol or authentication method.
+
+### Special considerations for SASL PLAIN
+
+When using SASL PLAIN with AMQP, a client connecting to an IoT hub can use a single token for each TCP connection. When the token expires, the TCP connection disconnects from the service and triggers a reconnection. This behavior, while not problematic for a back-end app, is damaging for a device app for the following reasons:
+
+* Gateways usually connect on behalf of many devices. When using SASL PLAIN, they have to create a distinct TCP connection for each device connecting to an IoT hub. This scenario considerably increases the consumption of power and networking resources, and increases the latency of each device connection.
+
+* Resource-constrained devices are adversely affected by the increased use of resources to reconnect after each token expiration.
+
+## Next steps
+
+Now that you have learned how to control access to IoT Hub, you may be interested in the following IoT Hub developer guide topics:
+
+* [Use device twins to synchronize state and configurations](iot-hub-devguide-device-twins.md)
+* [Invoke a direct method on a device](iot-hub-devguide-direct-methods.md)
+* [Schedule jobs on multiple devices](iot-hub-devguide-jobs.md)
iot-hub Authenticate Authorize X509 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/authenticate-authorize-x509.md
+
+ Title: Authenticate with X.509 certificates
+
+description: Understand how Azure IoT Hub uses X.509 certificates to authenticate IoT hubs and devices.
+++++ Last updated : 09/01/2023+++
+# Authenticate identities with X.509 certificates
+
+IoT Hub uses X.509 certificates to authenticate devices. X.509 authentication allows an IoT device to be authenticated as part of the Transport Layer Security (TLS) standard connection establishment.
+
+An X.509 CA certificate is a digital certificate that can sign other certificates. A digital certificate is considered an X.509 certificate if it conforms to the certificate formatting standard prescribed by the IETF's RFC 5280. Being a certificate authority (CA) means that the certificate's holder can sign other certificates.
+
+This article describes how to use X.509 certificate authority (CA) certificates to authenticate devices connecting to IoT Hub, which includes the following steps:
+
+* How to get an X.509 CA certificate
+* How to register the X.509 CA certificate to IoT Hub
+* How to sign devices using X.509 CA certificates
+* How devices signed with X.509 CA are authenticated
++
+The X.509 CA feature enables device authentication to IoT Hub using a certificate authority (CA). It simplifies the initial device enrollment process and supply chain logistics during device manufacturing.
+
+## Authentication and authorization
+
+*Authentication* is the process of proving that you are who you say you are. Authentication verifies the identity of a user or device to IoT Hub. It's sometimes shortened to *AuthN*. *Authorization* is the process of confirming permissions for an authenticated user or device on IoT Hub. It specifies what resources and commands you're allowed to access, and what you can do with those resources and commands. Authorization is sometimes shortened to *AuthZ*.
+
+This article describes authentication using **X.509 certificates**. You can use any X.509 certificate to authenticate a device with IoT Hub by uploading either a certificate thumbprint or a certificate authority (CA) to Azure IoT Hub.
+
+X.509 certificates are used for authentication in IoT Hub, not authorization. Unlike with Azure Active Directory and shared access signatures, you can't customize permissions with X.509 certificates.
+
+## Enforce X.509 authentication
+
+For additional security, an IoT hub can be configured to not allow SAS authentication for devices and modules, leaving X.509 as the only accepted authentication option. Currently, this feature isn't available in the Azure portal. To configure it, set `disableDeviceSAS` and `disableModuleSAS` to `true` on the IoT Hub resource properties:
+
+```azurecli
+az resource update -n <iothubName> -g <resourceGroupName> --resource-type Microsoft.Devices/IotHubs --set properties.disableDeviceSAS=true properties.disableModuleSAS=true
+```
+
+## Benefits of X.509 CA certificate authentication
+
+X.509 certificate authority (CA) authentication is an approach to authenticating devices to IoT Hub that dramatically simplifies device identity creation and life-cycle management in the supply chain.
+
+A distinguishing attribute of X.509 CA authentication is the one-to-many relationship that a CA certificate has with its downstream devices. This relationship enables registration of any number of devices into IoT Hub by registering an X.509 CA certificate once. Otherwise, unique certificates would have to be pre-registered for every device before a device could connect. This one-to-many relationship also simplifies device certificate lifecycle management operations.
+
+Another important attribute of X.509 CA authentication is simplification of supply chain logistics. Secure authentication of devices requires that each device holds a unique secret like a key as the basis for trust. In certificate-based authentication, this secret is a private key. A typical device manufacturing flow involves multiple steps and custodians. Securely managing device private keys across multiple custodians and maintaining trust is difficult and expensive. Using certificate authorities solves this problem by signing each custodian into a cryptographic chain of trust rather than entrusting them with device private keys. Each custodian signs devices at their respective step of the manufacturing flow. The overall result is an optimal supply chain with built-in accountability through use of the cryptographic chain of trust.
+
+This process yields the most security when devices protect their unique private keys. To this end, we recommend using hardware security modules (HSMs) capable of internally generating private keys.
+
+The Azure IoT Hub Device Provisioning Service (DPS) makes it easy to provision groups of devices to hubs. For more information, see [Tutorial: Provision multiple X.509 devices using enrollment groups](../iot-dps/tutorial-custom-hsm-enrollment-group-x509.md).
+
+## Get an X.509 CA certificate
+
+The X.509 CA certificate is the top of the chain of certificates for each of your devices. You may purchase or create one depending on how you intend to use it.
+
+For production environments, we recommend that you purchase an X.509 CA certificate from a professional certificate services provider. Purchasing a CA certificate has the benefit of the root CA acting as a trusted third party to vouch for the legitimacy of your devices. Consider this option if your devices are part of an open IoT network where they interact with third-party products or services.
+
+You may also create a self-signed X.509 CA certificate for testing purposes. For more information about creating certificates for testing, see [Create and upload certificates for testing](tutorial-x509-test-certs.md).
+
+>[!NOTE]
+>We do not recommend the use of self-signed certificates for production environments.
+
+Regardless of how you obtain your X.509 CA certificate, make sure to keep its corresponding private key secret and protected at all times. This precaution is necessary for building trust in X.509 CA authentication.
+
+## Sign devices into the certificate chain of trust
+
+The owner of an X.509 CA certificate can cryptographically sign an intermediate CA that can in turn sign another intermediate CA, and so on, until the last intermediate CA terminates this process by signing a device certificate. The result is a cascaded chain of certificates known as a *certificate chain of trust*. This delegation of trust is important because it establishes a cryptographically verifiable chain of custody and avoids sharing of signing keys.
+
+![Diagram that shows the certificates in a chain of trust.](./media/generic-cert-chain-of-trust.png)
+
+The device certificate (also called a leaf certificate) must have its common name (CN) set to the **device ID** (`CN=deviceId`) that was used when registering the IoT device in Azure IoT Hub. This setting is required for authentication.
+
+For modules using X.509 authentication, the module's certificate must have its common name (CN) formatted like `CN=deviceId/moduleId`.
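+
+To make the common name requirement concrete, here's a minimal sketch that issues a leaf certificate with the Python `cryptography` package. It assumes you already have an intermediate (or test) CA certificate and key on disk; the file names, device ID, and validity period are placeholders, not a production issuance process.
+
+```python
+import datetime
+from cryptography import x509
+from cryptography.x509.oid import NameOID
+from cryptography.hazmat.primitives import hashes, serialization
+from cryptography.hazmat.primitives.asymmetric import ec
+
+device_id = "device1"  # must match the device ID registered in IoT Hub
+
+# Load the signing CA certificate and private key (placeholder file names).
+ca_cert = x509.load_pem_x509_certificate(open("intermediate-ca.pem", "rb").read())
+ca_key = serialization.load_pem_private_key(
+    open("intermediate-ca.key", "rb").read(), password=None)
+
+device_key = ec.generate_private_key(ec.SECP256R1())
+
+# Build a leaf certificate with CN={device ID}, signed by the intermediate CA.
+device_cert = (
+    x509.CertificateBuilder()
+    .subject_name(x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, device_id)]))
+    .issuer_name(ca_cert.subject)
+    .public_key(device_key.public_key())
+    .serial_number(x509.random_serial_number())
+    .not_valid_before(datetime.datetime.utcnow())
+    .not_valid_after(datetime.datetime.utcnow() + datetime.timedelta(days=365))
+    .add_extension(x509.BasicConstraints(ca=False, path_length=None), critical=True)
+    .sign(ca_key, hashes.SHA256())
+)
+```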
+
+Learn how to [create a certificate chain](https://github.com/Azure/azure-iot-sdk-c/blob/master/tools/CACertificates/CACertificateOverview.md) as done when signing devices.
+
+## Register the X.509 CA certificate to IoT Hub
+
+Register your X.509 CA certificate to IoT Hub, which uses it to authenticate your devices during registration and connection. Registering the X.509 CA certificate is a two-step process that includes uploading the certificate file and then establishing proof of possession.
+
+The upload process entails uploading a file that contains your certificate. This file should never contain any private keys.
+
+The proof of possession step involves a cryptographic challenge and response process between you and IoT Hub. Given that digital certificate contents are public and therefore susceptible to eavesdropping, IoT Hub has to verify that you really own the CA certificate. You can choose to either automatically or manually verify ownership. For manual verification, Azure IoT Hub generates a random challenge that you sign with the CA certificate's corresponding private key. If you kept the private key secret and protected as recommended, then only you possess the knowledge to complete this step. Secrecy of private keys is the source of trust in this method. After signing the challenge, you complete this step and manually verify your certificate by uploading a file containing the results.
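+
+For manual verification, the signed challenge is typically packaged as a short-lived verification certificate whose common name is the code that IoT Hub generates. The following is a minimal sketch under that assumption, using the Python `cryptography` package; the file names and verification code are placeholders you'd replace with your own values from the portal.
+
+```python
+import datetime
+from cryptography import x509
+from cryptography.x509.oid import NameOID
+from cryptography.hazmat.primitives import hashes, serialization
+from cryptography.hazmat.primitives.asymmetric import ec
+
+# Load the CA certificate and key, as in the earlier leaf-certificate sketch.
+ca_cert = x509.load_pem_x509_certificate(open("intermediate-ca.pem", "rb").read())
+ca_key = serialization.load_pem_private_key(
+    open("intermediate-ca.key", "rb").read(), password=None)
+
+verification_code = "<verification-code-from-iot-hub>"  # copied from the portal
+
+# Sign the verification code into a short-lived certificate with the CA key.
+verification_cert = (
+    x509.CertificateBuilder()
+    .subject_name(x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, verification_code)]))
+    .issuer_name(ca_cert.subject)
+    .public_key(ec.generate_private_key(ec.SECP256R1()).public_key())
+    .serial_number(x509.random_serial_number())
+    .not_valid_before(datetime.datetime.utcnow())
+    .not_valid_after(datetime.datetime.utcnow() + datetime.timedelta(days=2))
+    .sign(ca_key, hashes.SHA256())
+)
+
+# Upload the resulting file to IoT Hub to complete proof of possession.
+with open("verification.cer", "wb") as f:
+    f.write(verification_cert.public_bytes(serialization.Encoding.PEM))
+```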
+
+Learn how to [register your CA certificate](tutorial-x509-test-certs.md#register-your-subordinate-ca-certificate-to-your-iot-hub).
+
+## Authenticate devices signed with X.509 CA certificates
+
+Every IoT hub has an identity registry that stores information about the devices and modules permitted to connect to it. Before a device or module can connect, there must be an entry for that device or module in the IoT hub's identity registry. A device or module authenticates with the IoT hub based on credentials stored in the identity registry.
+
+With your X.509 CA certificate registered and devices signed into a certificate chain of trust, the final step is device authentication when the device connects. When an X.509 CA-signed device connects, it uploads its certificate chain for validation. The chain includes all intermediate CA and device certificates. With this information, IoT Hub authenticates the device in a two-step process. IoT Hub cryptographically validates the certificate chain for internal consistency, and then issues a proof-of-possession challenge to the device. IoT Hub declares the device authentic on a successful proof-of-possession response from the device. This declaration assumes that the device's private key is protected and that only the device can successfully respond to this challenge. We recommend using secure chips like hardware security modules (HSMs) in devices to protect private keys.
+
+A successful device connection to IoT Hub completes the authentication process and is also indicative of a proper setup. Every time a device connects, IoT Hub renegotiates the TLS session and verifies the device's X.509 certificate.
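+
+From the device side, the connection that triggers this authentication flow can be as simple as the following sketch, which uses a recent version of the Python `azure-iot-device` package. The file paths, host name, and device ID are placeholders; the certificate file is assumed to contain the device (leaf) certificate followed by its intermediate CA chain.
+
+```python
+from azure.iot.device import IoTHubDeviceClient, X509
+
+# Placeholders: PEM file with the leaf certificate plus intermediate chain,
+# and the matching private key that never leaves the device.
+x509 = X509(cert_file="device1-chain.pem", key_file="device1.key")
+
+client = IoTHubDeviceClient.create_from_x509_certificate(
+    x509=x509,
+    hostname="myhub.azure-devices.net",
+    device_id="device1")
+
+client.connect()  # TLS handshake presents the chain; IoT Hub validates it
+client.send_message("hello from device1")
+client.shutdown()
+```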
+
+## Revoke a device certificate
+
+IoT Hub doesn't check certificate revocation lists from the certificate authority when authenticating devices with certificate-based authentication. If you have a device that needs to be blocked from connecting to IoT Hub because of a potentially compromised certificate, you should disable the device in the identity registry. For more information, see [Disable or delete a device in an IoT hub](./iot-hub-create-through-portal.md#disable-or-delete-a-device-in-an-iot-hub).
+
+## Example scenario
+
+Company-X makes Smart-X-Widgets that are designed for professional installation. Company-X outsources both manufacturing and installation. Factory-Y manufactures the Smart-X-Widgets and Technician-Z installs them. Company-X wants the Smart-X-Widget shipped directly from Factory-Y to Technician-Z for installation and then for it to connect directly to Company-X's instance of IoT Hub. To make this happen, Company-X needs to complete a few one-time setup operations to prime Smart-X-Widget for automatic connection. This end-to-end scenario includes the following steps:
+
+1. Acquire the X.509 CA certificate
+
+2. Register the X.509 CA certificate to IoT Hub
+
+3. Sign devices into a certificate chain of trust
+
+4. Connect the devices
+
+These steps are demonstrated in [Tutorial: Create and upload certificates for testing](./tutorial-x509-test-certs.md).
+
+### Acquire the certificate
+
+Company-X can either purchase an X.509 CA certificate from a public root certificate authority or create one through a self-signed process. Either option entails two basic steps: generating a public/private key pair and signing the public key into a certificate.
+
+Details on how to accomplish these steps differ with various service providers.
++
+#### Purchase a certificate
+
+Purchasing a CA certificate has the benefit of having a well-known root CA act as a trusted third party to vouch for the legitimacy of IoT devices when the devices connect. Choose this option if your devices interact with third-party products or services.
+
+To purchase an X.509 CA certificate, choose a root certificate service provider. The root CA provider will guide you on how to create the public/private key pair and how to generate a certificate signing request (CSR) for their services. A CSR is the formal process of applying for a certificate from a certificate authority. The outcome of this purchase is a certificate for use as an authority certificate. Given the ubiquity of X.509 certificates, the certificate is likely to have been properly formatted according to IETF's RFC 5280 standard.
+
+#### Create a self-signed certificate
+
+The process to create a self-signed X.509 CA certificate is similar to purchasing one, except that it doesn't involve a third-party signer like the root certificate authority. In our example, Company-X would sign its authority certificate instead of a root certificate authority.
+
+You might choose this option for testing until you're ready to purchase an authority certificate. You could also use a self-signed X.509 CA certificate in production if your devices don't connect to any third-party services outside of IoT Hub.
+
+### Register the certificate to IoT Hub
+
+Company-X needs to register the X.509 CA certificate to IoT Hub, where it serves to authenticate Smart-X-Widgets as they connect. This one-time process enables authentication and management of any number of Smart-X-Widget devices. The one-to-many relationship between CA certificate and device certificates is one of the main advantages of using the X.509 CA authentication method. The alternative would be to upload individual certificate thumbprints for every Smart-X-Widget device, thereby adding to operational costs.
+
+Registering the X.509 CA certificate is a two-step process: upload the certificate then provide proof-of-possession.
++
+#### Upload the certificate
+
+The X.509 CA certificate upload process is just that: uploading the CA certificate to IoT Hub. IoT Hub expects the certificate in a file.
+
+The certificate file must not under any circumstances contain any private keys. Best practices from standards governing Public Key Infrastructure (PKI) mandate that knowledge of Company-X's private key resides exclusively within Company-X.
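+
+As a quick sanity check before upload, you can verify that the file contains no key material. This is a minimal sketch; the file name is a placeholder.
+
+```python
+# Fail fast if the PEM file accidentally contains a private key block.
+pem = open("company-x-root-ca.pem", "rb").read()
+if b"PRIVATE KEY" in pem:
+    raise ValueError("This file contains a private key; remove it before uploading.")
+```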
+
+#### Prove possession
+
+The X.509 CA certificate, just like any digital certificate, is public information that is susceptible to eavesdropping. As such, an eavesdropper may intercept a certificate and try to upload it as their own. In our example, IoT Hub has to make sure that the CA certificate Company-X uploaded really belongs to Company-X. It does so by challenging Company-X to prove that they possess the certificate through a [proof-of-possession (PoP) flow](https://tools.ietf.org/html/rfc5280#section-3.1).
+
+For the proof-of-possession flow, IoT Hub generates a random number to be signed by Company-X using its private key. If Company-X followed PKI best practices and protected their private key, then only they would be able to correctly respond to the proof-of-possession challenge. IoT Hub proceeds to register the X.509 CA certificate upon a successful response to the proof-of-possession challenge.
+
+A successful response to the proof-of-possession challenge from IoT Hub completes the X.509 CA registration.
+
+### Sign devices into a certificate chain of trust
+
+IoT requires a unique identity for every device that connects. For certificate-based authentication, these identities are in the form of certificates. In our example, certificate-based authentication means that every Smart-X-Widget must possess a unique device certificate.
+
+A valid but inefficient way to provide unique certificates on each device is to pre-generate certificates for Smart-X-Widgets and to trust supply chain partners with the corresponding private keys. For Company-X, this means entrusting both Factory-Y and Technician-Z. This method comes with challenges that must be overcome to ensure trust, as follows:
+
+* Sharing device private keys with supply chain partners, besides violating the PKI best practice of never sharing private keys, makes building trust in the supply chain expensive. It requires systems like secure rooms to house device private keys and processes like periodic security audits. Both add cost to the supply chain.
+
+* Securely accounting for devices in the supply chain, and later managing them in deployment, becomes a one-to-one task for every key-to-device pair from the point of device unique certificate (and private key) generation to device retirement. This precludes group management of devices unless the concept of groups is explicitly built into the process somehow. Secure accounting and device life-cycle management, therefore, becomes a heavy operations burden.
+
+X.509 CA certificate authentication offers elegant solutions to these challenges by using certificate chains. A certificate chain results from a CA signing an intermediate CA that in turn signs another intermediate CA, and so on, until a final intermediate CA signs a device. In our example, Company-X signs Factory-Y, which in turn signs Technician-Z that finally signs Smart-X-Widget.
++
+This cascade of certificates in the chain represents the logical hand-off of authority. Many supply chains follow this logical hand-off, whereby each intermediate CA gets signed into the chain while receiving all upstream CA certificates, and the last intermediate CA finally signs each device and injects all the authority certificates from the chain into the device. This hand-off is common when a contracted manufacturing company with a hierarchy of factories commissions a particular factory to do the manufacturing. Although the hierarchy may be several levels deep (for example, by geography, product type, or manufacturing line), only the factory at the end interacts with the device; the chain, however, is maintained from the top of the hierarchy.
+
+Alternate chains may have a different intermediate CA interact with the device, in which case the CA that interacts with the device injects the certificate chain content at that point. Hybrid models are also possible, where only some of the CAs physically interact with the device.
+
+The following diagram shows how the certificate chain of trust comes together in our Smart-X-Widget example.
++
+1. Company-X never physically interacts with any of the Smart-X-Widgets. It initiates the certificate chain of trust by signing Factory-Y's intermediate CA certificate.
+1. Factory-Y now has its own intermediate CA certificate and a signature from Company-X. It passes copies of these items to the device. It also uses its intermediate CA certificate to sign Technician-Z's intermediate CA certificate.
+1. Technician-Z now has its own intermediate CA certificate and a signature from Factory-Y. It passes copies of these items to the device. It also uses its intermediate CA certificate to sign the Smart-X-Widget device certificate.
+1. Every Smart-X-Widget device now has its own unique device certificate and copies of the public keys and signatures from each intermediate CA certificate that it interacted with throughout the supply chain. These certificates and signatures can be traced back to the original Company-X root.
+
+The CA method of authentication infuses secure accountability into the device manufacturing supply chain. Because of the certificate chain process, the actions of every member in the chain are cryptographically recorded and verifiable.
+
+This process relies on the assumption that the unique device public/private key pair is created independently and that the private key is protected within the device always. Fortunately, secure silicon chips exist in the form of hardware security modules (HSMs) that are capable of internally generating keys and protecting private keys. Company-X only needs to add one such secure chip into Smart-X-Widget's component bill of materials.
+
+### Authenticate devices
+
+Once the top-level CA certificate is registered to IoT Hub and the devices have their unique certificates, how do potentially millions of devices connect and get authenticated the first time? Through the same certificate upload and proof-of-possession flow encountered earlier when registering the X.509 CA certificate.
+
+Devices manufactured for X.509 CA authentication are equipped with unique device certificates and a certificate chain from their respective manufacturing supply chain. Device connection, even for the first time, happens in a two-step process: certificate chain upload and proof-of-possession.
+
+During the certificate chain upload, the device uploads its unique certificate and its certificate chain to IoT Hub. Using the pre-registered X.509 CA certificate, IoT Hub validates that the uploaded certificate chain is internally consistent and that the chain was originated by the valid owner of the X.509 CA certificate. As with the X.509 CA registration process, IoT Hub uses a proof-of-possession challenge-response process to ascertain that the chain, and therefore the device certificate, belongs to the device uploading it. A successful response triggers IoT Hub to accept the device as authentic and grant it connection.
+
+In our example, each Smart-X-Widget would upload its device unique certificate together with Factory-Y and Technician-Z X.509 CA certificates and then respond to the proof-of-possession challenge from IoT Hub.
++
+The foundation of trust rests in protecting private keys, including device private keys. We therefore can't stress enough the importance of secure silicon chips in the form of hardware security modules (HSMs) for protecting device private keys, and the overall best practice of never sharing any private keys, such as one factory entrusting another with its private key.
+
+## Next steps
+
+Use the Device Provisioning Service to [Provision multiple X.509 devices using enrollment groups](../iot-dps/tutorial-custom-hsm-enrollment-group-x509.md).
+
+To learn more about the fields that make up an X.509 certificate, see [X.509 certificates](reference-x509-certificates.md).
+
+If you have a root CA certificate or subordinate CA certificate and you want to upload it to your IoT hub, you must verify that you own that certificate. For more information, see [Tutorial: Create and upload certificates for testing](tutorial-x509-test-certs.md).
iot-hub Iot Hub Devguide Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-devguide-security.md
- Title: Access control and security for IoT Hub
-description: Overview on how to control access to IoT Hub, includes links to in-depth articles on AAD integration and SAS options.
----- Previously updated : 04/15/2021---
-# Control access to IoT Hub
-
-This article describes the options for securing your IoT hub. IoT Hub uses *permissions* to grant access to each IoT hub endpoint. Permissions limit the access to an IoT hub based on functionality.
-
-There are three different ways for controlling access to IoT Hub:
-
-- **Azure Active Directory (Azure AD) integration** for service APIs. Azure provides identity-based authentication with AAD and fine-grained authorization with Azure role-based access control (Azure RBAC). Azure AD and RBAC integration is supported for IoT hub service APIs only. To learn more, see [Control access to IoT Hub using Azure Active Directory](iot-hub-dev-guide-azure-ad-rbac.md).
-- **Shared access signatures** let you group permissions and grant them to applications using access keys and signed security tokens. To learn more, see [Control access to IoT Hub using shared access signature](iot-hub-dev-guide-sas.md).
-- **Per-device security credentials**. Each IoT hub contains an [identity registry](iot-hub-devguide-identity-registry.md). For each device in this identity registry, you can configure security credentials that grant DeviceConnect permissions scoped to that device's endpoints. To learn more, see [Authenticating a device to IoT Hub](iot-hub-dev-guide-sas.md#authenticating-a-device-to-iot-hub).
-
-
-> [!Tip]
-> You can enable a lock on your IoT resources to prevent them being accidentally or maliciously deleted. To learn more about Azure Resource locks, please visit, [Lock your resources to protect your infrastructure](../azure-resource-manager/management/lock-resources.md?tabs=json)
-
-## Next steps
-- [Control access to IoT Hub using Azure Active Directory](iot-hub-dev-guide-azure-ad-rbac.md)
-- [Control access to IoT Hub using shared access signature](iot-hub-dev-guide-sas.md)
-- [Authenticating a device to IoT Hub](iot-hub-dev-guide-sas.md#authenticating-a-device-to-iot-hub)
iot-hub Iot Hub X509 Certificate Concepts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-x509-certificate-concepts.md
- Title: Understand cryptography and X.509 certificates for Azure IoT Hub | Microsoft Docs
-description: Understand cryptography and X.509 PKI for Azure IoT Hub
---- Previously updated : 01/09/2023--
-#Customer intent: As a developer, I want to understand X.509 Public Key Infrastructure (PKI) and public key cryptography so I can use X.509 certificates to authenticate devices to an IoT hub.
--
-# Understand public key cryptography and X.509 public key infrastructure
-
-You can use X.509 certificates to authenticate devices to an Azure IoT hub. A certificate is a digital document that contains the device's public key and can be used to verify that the device is what it claims to be. X.509 certificates and certificate revocation lists (CRLs) are documented by [RFC 5280](https://tools.ietf.org/html/rfc5280). Certificates are just one part of an X.509 public key infrastructure (PKI). To understand X.509 PKI, you need to understand cryptographic algorithms, cryptographic keys, certificates, and certificate authorities (CAs):
-
-* **Algorithms** define how original plaintext data is transformed into ciphertext and back to plaintext.
-* **Keys** are random or pseudorandom data strings used as input to an algorithm.
-* **Certificates** are digital documents that contain an entity's public key and enable you to determine whether the subject of the certificate is who or what it claims to be.
-* **Certificate Authorities** attest to the authenticity of certificate subjects.
-
-You can purchase a certificate from a certificate authority (CA). You can also, for testing and development or if you're working in a self-contained environment, create a self-signed root CA. For example, if you want to test IoT Hub authentication on devices that you own, you can self-sign your root CA and use that to issue device certificates. You can also issue self-signed device certificates.
-
-Before discussing X.509 certificates in more detail and using them to authenticate devices to an IoT hub, here are the fundamental cryptography concepts on which certificates are based.
-
-## Cryptography
-
-Cryptography protects information and communications through *encryption* and *decryption*. Encryption is the process of translating plain text data (*plaintext*) into something that appears to be random and meaningless (*ciphertext*). Decryption is the process of converting ciphertext back to plaintext. Cryptography is concerned with the following objectives:
-
-* **Confidentiality**: The information can be understood by only the intended audience.
-* **Integrity**: The information can't be altered in storage or in transit.
-* **Non-repudiation**: The creator of information can't later deny that creation.
-* **Authentication**: The sender and receiver can confirm each other's identity.
-
-## Encryption
-
-The encryption process requires an algorithm and a key. The algorithm defines how data is transformed from plaintext into ciphertext and back to plaintext. A key is a random string of data used as input to the algorithm. All of the security of the process is contained in the key. Therefore, the key must be stored securely. The details of the most popular algorithms, however, are publicly available.
-
-There are two types of encryption. Symmetric encryption uses the same key for both encryption and decryption. Asymmetric encryption uses different but mathematically related keys to perform encryption and decryption.
-
-### Symmetric encryption
-
-Symmetric encryption uses the same key to encrypt plaintext into ciphertext and decrypt ciphertext back into plaintext. The necessary length of the key, expressed in number of bits, is determined by the algorithm. After the key is used to encrypt plaintext, the encrypted message is sent to the recipient who then decrypts the ciphertext. The symmetric key must be securely transmitted to the recipient. Sending the key is the greatest security risk when using a symmetric algorithm.
--
-### Asymmetric encryption
-
-If only symmetric encryption is used, the problem is that all parties to the communication must possess the private key. However, it's possible that unauthorized third parties can capture the key during transmission to authorized users. To address this issue, you can use asymmetric or public key cryptography instead.
-
-In asymmetric cryptography, every user has two mathematically related keys called a key pair. One key is public and the other key is private. The key pair ensures that only the recipient has access to the private key needed to decrypt the data. The following illustration summarizes the asymmetric encryption process.
--
-1. The recipient creates a public-private key pair and sends the public key to a CA. The CA packages the public key in an X.509 certificate.
-
-1. The sending party obtains the recipient's public key from the CA.
-
-1. The sender encrypts plaintext data using an encryption algorithm. The recipient's public key is used to perform encryption.
-
-1. The sender transmits the ciphertext to the recipient. It isn't necessary to send the key because the recipient already has the private key needed to decrypt the ciphertext.
-
-1. The recipient decrypts the ciphertext by using the specified asymmetric algorithm and the private key.
-
-### Combining symmetric and asymmetric encryption
-
-Symmetric and asymmetric encryption can be combined to take advantage of their relative strengths. Symmetric encryption is much faster than asymmetric encryption, but, because of the necessity of sending private keys to other parties, it isn't as secure. To combine the two types together, symmetric encryption can be used to convert plaintext to ciphertext. Asymmetric encryption is used to exchange the symmetric key. This process is demonstrated by the following diagram.
--
-1. The sender retrieves the recipient's public key.
-
-1. The sender generates a symmetric key and uses it to encrypt the original data.
-
-1. The sender uses the recipient's public key to encrypt the symmetric key.
-
-1. The sender transmits the encrypted symmetric key and the ciphertext to the intended recipient.
-
-1. The recipient uses the private key that matches the recipient's public key to decrypt the sender's symmetric key.
-
-1. The recipient uses the symmetric key to decrypt the ciphertext.
-
-### Asymmetric signing
-
-Asymmetric algorithms can be used to protect data from modification and prove the identity of the data creator. The following illustration shows how asymmetric signing helps prove the sender's identity.
--
-1. The sender passes plaintext data through an asymmetric encryption algorithm, using the private key for encryption. Notice that this scenario reverses use of the private and public keys outlined in the preceding section, [Asymmetric encryption](#asymmetric-encryption).
-
-1. The resulting ciphertext is sent to the recipient.
-
-1. The recipient obtains the originator's public key from a directory.
-
-1. The recipient decrypts the ciphertext by using the originator's public key. The resulting plaintext proves the originator's identity because only the originator has access to the private key that initially encrypted the original text.
-
-## Signing
-
-Digital signing can be used to determine whether the data has been modified in transit or at rest. The data is passed through a hash algorithm, a one-way function that produces a mathematical result from the given message. The result is called a *hash value*, *message digest*, *digest*, *signature*, *fingerprint*, or *thumbprint*. A hash value can't be reversed to obtain the original message. Because a small change in the message results in a significant change in the *thumbprint*, the hash value can be used to determine whether a message has been altered. The following illustration shows how asymmetric encryption and hash algorithms can be used to verify that a message hasn't been modified.
--
-1. The sender creates a plaintext message.
-
-1. The sender hashes the plaintext message to create a message digest.
-
-1. The sender encrypts the digest using a private key.
-
-1. The sender transmits the plaintext message and the encrypted digest to the intended recipient.
-
-1. The recipient decrypts the digest by using the sender's public key.
-
-1. The recipient runs the same hash algorithm that the sender used over the message.
-
-1. The recipient compares the resulting signature to the decrypted signature. If the digests are the same, the message wasn't modified during transmission.
-
-## Next steps
-
-To learn more about the fields that make up an X.509 certificate, see [X.509 certificates](reference-x509-certificates.md).
-
-If you're already familiar with X.509 certificates, and you want to generate test versions that you can use to authenticate to your IoT hub, see the following articles:
-
-* [Tutorial: Create and upload certificates for testing](tutorial-x509-test-certs.md)
-* If you want to use self-signed certificates for testing, see the [Create a self-signed certificate](reference-x509-certificates.md#create-a-self-signed-certificate) section of [X.509 certificates](reference-x509-certificates.md).
-
- >[!IMPORTANT]
- >We recommend that you use certificates signed by an issuing Certificate Authority (CA), even for testing purposes. Never use self-signed certificates in production.
-
-If you have a root CA certificate or subordinate CA certificate and you want to upload it to your IoT hub, you must verify that you own that certificate. For more information, see [Tutorial: Create and upload certificates for testing](tutorial-x509-test-certs.md).
iot-hub Iot Hub X509ca Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-x509ca-overview.md
- Title: Overview of Azure IoT Hub X.509 CA security
-description: Overview - how to authenticate devices to IoT Hub using X.509 Certificate Authorities.
----- Previously updated : 07/14/2022----
-# Authenticate devices using X.509 CA certificates
-
-This article describes how to use X.509 certificate authority (CA) certificates to authenticate devices connecting to IoT Hub. In this article you will learn:
-
-* How to get an X.509 CA certificate
-* How to register the X.509 CA certificate to IoT Hub
-* How to sign devices using X.509 CA certificates
-* How devices signed with X.509 CA are authenticated
--
-The X.509 CA feature enables device authentication to IoT Hub using a certificate authority (CA). It simplifies the initial device enrollment process and supply chain logistics during device manufacturing. If you aren't familiar with X.509 CA certificates, see [Understand how X.509 CA certificates are used in the IoT industry](iot-hub-x509ca-concept.md) for more information.
-
-## Get an X.509 CA certificate
-
-The X.509 CA certificate is at the top of the chain of certificates for each of your devices. You may purchase or create one depending on how you intend to use it.
-
-For production environments, we recommend that you purchase an X.509 CA certificate from a professional certificate services provider. Purchasing a CA certificate has the benefit of the root CA acting as a trusted third party to vouch for the legitimacy of your devices. Consider this option if your devices are part of an open IoT network where they interact with third-party products or services.
-
-You may also create a self-signed X.509 CA certificate for testing purposes. For more information about creating certificates for testing, see [Create and upload certificates for testing](tutorial-x509-test-certs.md).
-
->[!NOTE]
->We do not recommend the use of self-signed certificates for production environments.
-
-Regardless of how you obtain your X.509 CA certificate, make sure to keep its corresponding private key secret and protected always. This precaution is necessary for building trust in the X.509 CA authentication.
-
-## Sign devices into the certificate chain of trust
-
-The owner of an X.509 CA certificate can cryptographically sign an intermediate CA that can in turn sign another intermediate CA, and so on, until the last intermediate CA terminates this process by signing a device certificate. The result is a cascaded chain of certificates known as a *certificate chain of trust*. In real life this plays out as delegation of trust towards signing devices. This delegation is important because it establishes a cryptographically verifiable chain of custody and avoids sharing of signing keys.
-
-![Diagram that shows the certificates in a chain of trust.](./media/generic-cert-chain-of-trust.png)
-
-The device certificate (also called a leaf certificate) must have the *subject name* set to the **device ID** (`CN=deviceId`) that was used when registering the IoT device in Azure IoT Hub. This setting is required for authentication.
-
-Learn how to [create a certificate chain](https://github.com/Azure/azure-iot-sdk-c/blob/master/tools/CACertificates/CACertificateOverview.md) as done when signing devices.
-
-## Register the X.509 CA certificate to IoT Hub
-
-Register your X.509 CA certificate to IoT Hub, which uses it to authenticate your devices during registration and connection. Registering the X.509 CA certificate is a two-step process that includes uploading the certificate file and then establishing proof of possession.
-
-The upload process entails uploading a file that contains your certificate. This file should never contain any private keys.
-
-The proof of possession step involves a cryptographic challenge and response process between you and IoT Hub. Given that digital certificate contents are public and therefore susceptible to eavesdropping, IoT Hub has to verify that you really own the CA certificate. You can choose to either automatically or manually verify ownership. For manual verification, Azure IoT Hub generates a random challenge that you sign with the CA certificate's corresponding private key. If you kept the private key secret and protected as recommended, then only you possess the knowledge to complete this step. Secrecy of private keys is the source of trust in this method. After signing the challenge, you complete this step and manually verify your certificate by uploading a file containing the results.
-
-Learn how to [register your CA certificate](tutorial-x509-test-certs.md#register-your-subordinate-ca-certificate-to-your-iot-hub).
-
-## Create a device on IoT Hub
-
-To prevent device impersonation, IoT Hub requires that you let it know what devices to expect. You do this by creating a device entry in the IoT hub's device registry. This process is automated when using [IoT Hub Device Provisioning Service](../iot-dps/about-iot-dps.md).
-
-Learn how to [manually create a device in IoT Hub](./iot-hub-create-through-portal.md#register-a-new-device-in-the-iot-hub).
-
-## Authenticate devices signed with X.509 CA certificates
-
-With your X.509 CA certificate registered and devices signed into a certificate chain of trust, the final step is device authentication when the device connects. When an X.509 CA-signed device connects, it uploads its certificate chain for validation. The chain includes all intermediate CA and device certificates. With this information, IoT Hub authenticates the device in a two-step process. IoT Hub cryptographically validates the certificate chain for internal consistency, and then issues a proof-of-possession challenge to the device. IoT Hub declares the device authentic on a successful proof-of-possession response from the device. This declaration assumes that the device's private key is protected and that only the device can successfully respond to this challenge. We recommend using secure chips like Hardware Secure Modules (HSM) in devices to protect private keys.
-
-A successful device connection to IoT Hub completes the authentication process and is also indicative of a proper setup. Every time a device connects, IoT Hub renegotiates the TLS session and verifies the device's X.509 certificate.
-
-## Revoke a device certificate
-
-IoT Hub doesn't check certificate revocation lists from the certificate authority when authenticating devices with certificate-based authentication. If you have a device that needs to be blocked from connecting to IoT Hub because of a potentially compromised certificate, you should disable the device in the identity registry. For more information, see [Disable or delete a device in an IoT hub](./iot-hub-create-through-portal.md#disable-or-delete-a-device-in-an-iot-hub).
-
-## Next Steps
-
-Learn about [the value of X.509 CA authentication](iot-hub-x509ca-concept.md) in IoT.
-
-Get started with [IoT Hub Device Provisioning Service](../iot-dps/index.yml).
iot Howto Use Iot Explorer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot/howto-use-iot-explorer.md
Title: Install and use Azure IoT explorer | Microsoft Docs
description: Install the Azure IoT explorer tool and use it to interact with IoT Plug and Play devices connected to IoT hub. Although this article focuses on working with IoT Plug and Play devices, you can use the tool with any device connected to your hub. Previously updated : 06/14/2022 Last updated : 09/29/2023
On the **Component** page, you can view the read-only properties, update writabl
You can view the read-only properties defined in an interface on the **Properties (read-only)** tab. You can update the writable properties defined in an interface on the **Properties (writable)** tab: 1. Go to the **Properties (writable)** tab.
-1. Click the property you'd like to update.
+1. Select the property you'd like to update.
1. Enter the new value for the property. 1. Preview the payload to be sent to the device. 1. Submit the change.
key-vault Tutorial Javascript Virtual Machine https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/general/tutorial-javascript-virtual-machine.md
To create a Linux VM using the Azure CLI, use the [az vm create](/cli/azure/vm)
az vm create \ --resource-group myResourceGroup \ --name myVM \
- --image UbuntuLTS \
+ --image Ubuntu2204 \
--admin-username azureuser \ --generate-ssh-keys ```
key-vault Tutorial Python Virtual Machine https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/general/tutorial-python-virtual-machine.md
To create a Linux VM using the Azure CLI, use the [az vm create](/cli/azure/vm)
az vm create \ --resource-group myResourceGroup \ --name myVM \
- --image UbuntuLTS \
+ --image Ubuntu2204 \
--admin-username azureuser \ --generate-ssh-keys ```
lab-services Class Type Arcgis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/class-type-arcgis.md
The steps in this section show how to set up the template VM:
3. Set up external backup storage for students. Students can save files directly to their assigned VM since all changes that they make are saved across sessions. However, we recommend that students back up their work to storage that is external from their VM for a few reasons: - To enable students to access their work after the class and lab ends.
- - In case the student gets their VM into a bad state and their image needs to be [reset](how-to-manage-vm-pool.md#reset-lab-vms).
+ - In case the student gets their VM into a bad state and their image needs to be [reimaged](how-to-manage-vm-pool.md#reimage-lab-vms).
With ArcGIS, each student should back up the following files at the end of each work session:
lab-services Classroom Labs Scenarios https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/classroom-labs-scenarios.md
The following table shows the corresponding mapping of organization roles to Azu
| Org. role | Azure AD role | Description | | | | | | Administrator | - Subscription Owner<br/>- Subscription Contributor | Create lab plan in Azure portal. |
-| | Lab Operator | Optionally, assign to other administrator to manage lab users & schedules, publish labs, and reset/start/stop/connect lab VMs. |
| Educator | Lab Creator | Create and manage the labs they created. | | | Lab Contributor | Optionally, assign to an educator to create and manage all labs (when assigned at the resource group level). |
-| | Lab Operator | Optionally, assign to other educators to manage lab users & schedules, publish labs, and reset/start/stop/connect lab VMs. |
-| | Lab Assistant | Optionally, assign to other educators to help support lab students by allowing reset/start/stop/connect lab VMs. |
+| | Lab Assistant | Optionally, assign to other educators to help support lab students by allowing reimage/start/stop/connect lab VMs. |
| Student | | Students don't need an Azure AD role. Educators [grant students access](./how-to-manage-lab-users.md) in the lab configuration or students are automatically granted access, for example when using [Teams](./how-to-manage-labs-within-teams.md#manage-lab-user-lists-in-teams) or [Canvas](./how-to-manage-labs-within-canvas.md#manage-lab-user-lists-in-canvas). | | Others | Lab Services Reader | Optionally, provide access to see all lab plans and labs without permission to modify. |
The following table shows the corresponding mapping of organization roles to Azu
| Org. role | Azure AD role | Description | | | | | | Administrator | - Subscription Owner<br/>- Subscription Contributor | Create lab plan in Azure portal. |
-| | Lab Operator | Optionally, assign to other administrator to manage lab users & schedules, publish labs, and reset/start/stop/connect lab VMs. |
-| Educator | Lab Operator | Manage lab users & schedules, publish labs, and reset/start/stop/connect lab VMs. |
-| | Lab Assistant | Optionally, assign to other educators to help support lab students by allowing reset/start/stop/connect lab VMs. |
+| Educator | Lab Assistant | Optionally, assign to other educators to help support lab students by allowing reimage/start/stop/connect lab VMs. |
| Student | | Students don't need an Azure AD role. Educators [grant students access](./how-to-manage-lab-users.md) in the lab configuration or students are automatically granted access, for example when using [Teams](./how-to-manage-labs-within-teams.md#manage-lab-user-lists-in-teams) or [Canvas](./how-to-manage-labs-within-canvas.md#manage-lab-user-lists-in-canvas). | | Others | Lab Services Reader | Optionally, provide access to see all lab plans and labs without permission to modify. |
The following table shows the corresponding mapping of organization roles to Azu
| Org. role | Azure AD role | Description | | | | | | Educator | - Subscription Owner<br/>- Subscription Contributor | Create lab plan in Azure portal. As an Owner, you can also fully manage all labs. |
-| | Lab Assistant | Optionally, assign to other educators to help support lab students by allowing reset/start/stop/connect lab VMs. |
+| | Lab Assistant | Optionally, assign to other educators to help support lab students by allowing reimage/start/stop/connect lab VMs. |
| Student | | Students don't need an Azure AD role. Educators [grant students access](./how-to-manage-lab-users.md) in the lab configuration or students are automatically granted access, for example when using [Teams](./how-to-manage-labs-within-teams.md#manage-lab-user-lists-in-teams) or [Canvas](./how-to-manage-labs-within-canvas.md#manage-lab-user-lists-in-canvas). | | Others | Lab Services Reader | Optionally, provide access to see all lab plans and labs without permission to modify. |
lab-services Concept Lab Accounts Versus Lab Plans https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/concept-lab-accounts-versus-lab-plans.md
+
+ Title: Lab accounts versus lab plans
+
+description: Learn about the differences between lab accounts and lab plans in Azure Lab Services. Lab plans replace lab accounts and have some fundamental differences.
+++++ Last updated : 08/07/2023++
+# Lab accounts versus lab plans in Azure Lab Services
+
+In Azure Lab Services, lab plans replace lab accounts, and there are some fundamental differences between the two concepts. In this article, you get an overview of the changes that come with lab plans and how lab plans are different from lab accounts. Lab plans bring improvements in performance, reliability, and scalability. Lab plans also give you more flexibility for managing labs, using capacity, and tracking costs.
++
+## Overview
+
+Lab plans replace lab accounts and although they come with key new features, they share many familiar concepts. Lab plans, similar to lab accounts, serve as the collection of configurations and settings for creating labs. For example, to configure image galleries, shutdown settings, management of lab users, or to specify advanced networking settings.
+
+Lab plans also have fundamental differences. For example, labs created with lab plans are now an Azure resource in their own right, which makes them a sibling resource to lab plans.
+
+By using lab plans, you can unlock several new capabilities:
+
+**[Canvas Integration](how-to-configure-canvas-for-lab-plans.md)**. If your organization is using Canvas, educators no longer have to leave Canvas to create labs with Azure Lab Services. Students can connect to their virtual machine from inside their course in Canvas.
+
+**[Per-customer assigned capacity](capacity-limits.md#per-customer-assigned-capacity)**. You don't have to share capacity with others anymore. If your organization has requested more quota, Azure Lab Services allocates it just for you.
+
+**[Advanced networking](how-to-connect-vnet-injection.md)**. Advanced networking with virtual network injection replaces virtual network peering. In your Azure subscription, you can create a virtual network in the same region as the lab plan, and delegate a subnet to Azure Lab Services.
+
+**[Improved auto-shutdown](how-to-configure-auto-shutdown-lab-plans.md)**. Auto-shutdown settings are now available for Windows and Linux operating systems. Learn more about the [supported Linux distributions](./how-to-enable-shutdown-disconnect.md#supported-linux-distributions-for-automatic-shutdown).
+
+**[More built-in roles](./concept-lab-services-role-based-access-control.md)**. In addition to the Lab Creator built-in role, there are now more lab management roles, such as Lab Assistant. Learn more about [role-based access control in Azure Lab Services](./concept-lab-services-role-based-access-control.md).
+
+**[Improved cost tracking in Microsoft Cost Management](cost-management-guide.md#separate-the-costs)**. Lab virtual machines are now the cost unit tracked in Microsoft Cost Management. Tags for lab plan ID and lab name are automatically added to each cost entry. If you want to track the cost of a single lab, group the lab VM cost entries together by the lab name tag. Custom tags on labs also propagate to Microsoft Cost Management entries to allow further cost analysis.
+
+**[Updates to lab owner experience](how-to-manage-labs.md)**. Choose to skip the template creation process when creating a new lab if you already have an image ready to use. In addition, you can add a non-admin user to lab VMs.
+
+**[Updates to lab user experience](how-to-manage-vm-pool.md#redeploy-lab-vms)**. In addition to reimaging their lab VM, lab users can now also redeploy their lab VM without losing the data inside the lab VM. In addition, the lab registration experience is simplified when you use labs in Teams, Canvas, or with Azure AD groups. In these cases, Azure Lab Services *automatically* assigns a lab VM to a lab user.
+
+**SDKs**. Azure Lab Services is now integrated with the [Az PowerShell module](/powershell/azure/release-notes-azureps) and supports Azure Resource Manager (ARM) templates. Also, you can use either the [.NET SDK](/dotnet/api/overview/azure/labservices) or [Python SDK](https://pypi.org/project/azure-mgmt-labservices/).
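For example, a minimal sketch with the Python SDK, assuming the azure-mgmt-labservices and azure-identity packages are installed and the signed-in identity can read labs; the subscription ID is a placeholder:

```python
# Minimal sketch: enumerate labs in a subscription with the Python SDK.
from azure.identity import DefaultAzureCredential
from azure.mgmt.labservices import LabServicesClient

client = LabServicesClient(DefaultAzureCredential(), "<subscription-id>")

# Print each lab with its provisioning state.
for lab in client.labs.list_by_subscription():
    print(lab.name, lab.provisioning_state)
```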
+
+## Difference between lab plans and lab accounts
+
+Lab plans replace lab accounts in Azure Lab Services. The following table lists the fundamental differences between lab plans and lab accounts:
+
+|Lab account|Lab plan|
+|-|-|
+|Lab account was the only resource that administrators could interact with inside the Azure portal.|Administrators can now manage two types of resources, lab plan and lab, in the Azure portal.|
+|Lab account served as the **parent** for the labs.|Lab plan is a **sibling** resource to the lab resource. Grouping of labs is now done by the resource group.|
+|Lab account served as a container for the labs. A change to the lab account often affected the labs under it.|The lab plan serves as a collection of configurations and settings that are applied when a lab is **created**. If you change a lab plan's settings, these changes won't affect any existing labs that were previously created from the lab plan. (The exception is the internal help information, which will affect all labs.)|
+
+Lab accounts and labs have a parental relationship. Moving to a sibling relationship between the lab plan and lab provides an upgraded experience. The following table compares the previous experience with a lab account and the new improved experience with a lab plan.
+
+|Feature/area|Lab account|Lab plan|
+|-|-|-|
+|Resource Management|Lab account was the only resource tracked in the Azure portal. All other resources were child resources of the lab account and tracked in Lab Services directly.|Lab plans and labs are now sibling resources in Azure. Administrators can use existing tools in the Azure portal to manage labs. Virtual machines will continue to be a child resource of labs.|
+|Cost tracking|In Microsoft Cost Management, admins could only track and analyze cost at the service level and at the lab account level.| Cost entries in Microsoft Cost Management are now for lab virtual machines. Automatic tags on each entry specify the lab plan ID and the lab name. You can analyze cost by lab plan, lab, or virtual machine from within the Azure portal. Custom tags on the lab will also show in the cost data.|
|Selecting regions|By default, labs were created in the same geography as the lab account. A geography typically aligns with a country/region and contains one or more Azure regions. Lab owners weren't able to manage exactly which Azure region the labs resided in.|In the lab plan, administrators now can manage the exact Azure regions allowed for lab creation. By default, labs will be created in the same Azure region as the lab plan. <br/>Note: when a lab plan has advanced networking enabled, labs are created in the same Azure region as the virtual network.|
+|Deletion experience|When a lab account is deleted, all labs within it are also deleted.|When deleting a lab plan, labs *aren't* deleted. After a lab plan is deleted, labs will keep references to their virtual network even if advanced networking is enabled. However, if a lab plan was connected to an Azure Compute Gallery, the labs can no longer export an image to that Azure Compute Gallery.|
+|Connecting to a virtual network|The lab account provided an option to peer to a virtual network. If you already had labs in the lab account before you peered to a virtual network, the virtual network connection didn't apply to existing labs. Admins couldn't tell which labs in the lab account were peered to the virtual network.|In a lab plan, admins set up the advanced networking only at the time of lab plan creation. Once a lab plan is created, you'll see a read-only connection to the virtual network. If you need to use another virtual network, create a new lab plan configured with the new virtual network.|
+|Labs portal experience|Labs are listed under lab accounts in [https://labs.azure.com](https://labs.azure.com).|Labs are listed under resource group name in [https://labs.azure.com](https://labs.azure.com). If there are multiple lab plans in the same resource group, educators can choose which lab plan to use when creating the lab. <br/>Learn more about [resource group and lab plan structure](./concept-lab-services-role-based-access-control.md#resource-group-and-lab-plan-structure).|
+|Permissions needed to manage labs|To create a lab:</br>- **Lab Contributor** role on the lab account.<br/></br>To modify an existing lab:</br>- **Reader** role on the lab account.</br>- **Owner** or **Contributor** role on the lab (Lab creators are assigned the **Owner** role to any labs they create). | To create a lab:</br>- **Owner** or **Contributor** role on the resource group that contains the lab plan.</br>- **Lab Creator** role on the lab plan.</br><br/>To modify an existing lab:</br>- **Owner** or **Contributor** role on the lab (Lab creators are assigned the **Owner** role to any labs they create).<br/><br/>Learn more about [Azure Lab Services role-based access control](./concept-lab-services-role-based-access-control.md). |
+
+## Known issues
+
+- When using virtual network injection, use caution in making changes to the virtual network, subnet, and resources created by Lab Services attached to the subnet. Also, labs using advanced networking must be deleted before deleting the virtual network.
+
+- Moving lab plan and lab resources from one Azure region to another isn't supported.
+
+- You have to register the [Azure Compute resource provider](../azure-resource-manager/management/resource-providers-and-types.md) before Azure Lab Services can [create and attach an Azure Compute Gallery resource](how-to-attach-detach-shared-image-gallery.md#attach-an-existing-compute-gallery-to-a-lab-plan).
+
+- If you're attaching an Azure compute gallery, the compute gallery and the lab plan must be in the same Azure region. Also, it's recommended that the [enabled regions](./create-and-configure-labs-admin.md#enable-regions) have only this Azure region selected.
+
+## Next steps
+
+If you're using lab accounts, follow these steps to [migrate your lab accounts to lab plans](./migrate-to-2022-update.md).
+
+If you're new to Azure Lab Services, get started by [creating a new lab plan](./quick-create-resources.md).
lab-services Concept Lab Services Role Based Access Control https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/concept-lab-services-role-based-access-control.md
The following are the built-in roles supported by Azure Lab Services:
| Administrator | Lab Services Contributor | Grant the same permissions as the Owner role, except for assigning roles. Learn more about the [Lab Services Contributor role](#lab-services-contributor-role). |
| Lab management | Lab Creator | Grant permission to create labs and have full control over the labs that they create. Learn more about the [Lab Creator role](#lab-creator-role). |
| Lab management | Lab Contributor | Grant permission to help manage an existing lab, but not create new labs. Learn more about the [Lab Contributor role](#lab-contributor-role). |
-| Lab management | Lab Assistant | Grant permission to view an existing lab. Can also start, stop, or reset any VM in the lab. Learn more about the [Lab Assistant role](#lab-assistant-role). |
+| Lab management | Lab Assistant | Grant permission to view an existing lab. Can also start, stop, or reimage any VM in the lab. Learn more about the [Lab Assistant role](#lab-assistant-role). |
| Lab management | Lab Services Reader | Grant permission to view existing labs. Learn more about the [Lab Services Reader role](#lab-services-reader-role). |

## Role assignment scope
The following table shows common lab activities and the role that's needed for a user to perform the activity:
| Grant permission to create or manage your own labs for *all* lab plans within a resource group. | Lab management | [Lab Creator](#lab-creator-role) | Resource group |
| Grant permission to create or manage your own labs for a specific lab plan. | Lab management | [Lab Creator](#lab-creator-role) | Lab plan |
| Grant permission to co-manage a lab, but *not* the ability to create labs. | Lab management | [Lab Contributor](#lab-contributor-role) | Lab |
-| Grant permission to only start/stop/reset VMs for *all* labs within a resource group. | Lab management | [Lab Assistant](#lab-assistant-role) | Resource group |
-| Grant permission to only start/stop/reset VMs for a specific lab. | Lab management | [Lab Assistant](#lab-assistant-role) | Lab |
+| Grant permission to only start/stop/reimage VMs for *all* labs within a resource group. | Lab management | [Lab Assistant](#lab-assistant-role) | Resource group |
+| Grant permission to only start/stop/reimage VMs for a specific lab. | Lab management | [Lab Assistant](#lab-assistant-role) | Lab |
> [!IMPORTANT]
> An organization's subscription is used to manage billing and security for all Azure resources and services. You can assign the Owner or Contributor role on the [subscription](./administrator-guide.md#subscription). Typically, only administrators have subscription-level access because this includes full access to all resources in the subscription.
When you assign the Lab Contributor role on the lab, the user can manage the assigned lab.
### Lab Assistant role
-Assign the Lab Assistant role to grant a user permission to view a lab, and start, stop, and reset lab virtual machines for the lab.
+Assign the Lab Assistant role to grant a user permission to view a lab, and start, stop, and reimage lab virtual machines for the lab.
Assign the Lab Assistant role on the *resource group or lab*.
When you assign the Lab Assistant role on the resource group, the user:

-- Can view all labs within the resource group and start, stop, or reset lab virtual machines for each lab.
+- Can view all labs within the resource group and start, stop, or reimage lab virtual machines for each lab.
- Can't delete or make any other changes to the labs.

When you assign the Lab Assistant role on the lab, the user:

-- Can view the assigned lab and start, stop, or reset lab virtual machines.
+- Can view the assigned lab and start, stop, or reimage lab virtual machines.
- Can't delete or make any other changes to the lab.
- Can't create new labs.
lab-services Concept Lab Services Supported Networking Scenarios https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/concept-lab-services-supported-networking-scenarios.md
The following table lists common networking scenarios and topologies and their support in Azure Lab Services:
| Enable distant license server, such as on-premises, cross-region | Yes | Add a [user defined route (UDR)](/azure/virtual-network/virtual-networks-udr-overview) that points to the license server.<br/><br/>If the lab software requires connecting to the license server by its name instead of the IP address, you need to [configure a customer-provided DNS server](/azure/virtual-network/virtual-networks-name-resolution-for-vms-and-role-instances?tabs=redhat#name-resolution-that-uses-your-own-dns-server) or add an entry to the `hosts` file in the lab template.<br/><br/>If multiple services need access to the license server, using them from multiple regions, or if the license server is part of other infrastructure, you can use the [hub-and-spoke Azure networking best practice](/azure/cloud-adoption-framework/ready/azure-best-practices/hub-spoke-network-topology). |
| Access to on-premises resources, such as a license server | Yes | You can access on-premises resources with these options: <br/>- Configure [Azure ExpressRoute](/azure/expressroute/expressroute-introduction) or create a [site-to-site VPN connection](/azure/vpn-gateway/tutorial-site-to-site-portal) (bridge the networks).<br/>- Add a public IP to your on-premises server with a firewall that only allows incoming connections from Azure Lab Services.<br/><br/>In addition, to reach the on-premises resources from the lab VMs, add a [user defined route (UDR)](/azure/virtual-network/virtual-networks-udr-overview). |
| Use a [hub-and-spoke networking model](/azure/cloud-adoption-framework/ready/azure-best-practices/hub-spoke-network-topology) | Yes | This scenario works as expected with lab plans and advanced networking. <br/><br/>A number of configuration changes aren't supported with Azure Lab Services, such as adding a default route on a route table. Learn about the [unsupported virtual network configuration changes](./how-to-connect-vnet-injection.md#4-optional-update-the-networking-configuration-settings). |
-| Access lab VMs by private IP address (private-only labs) | Not recommended | This scenario is functional, but makes it difficult for lab users to connect to their lab VM. In the Azure Lab Services website, lab users can't identify the private IP address of their lab VM. In addition, the connect button points to the public endpoint of the lab VM. The lab creator needs to provide lab users with the private IP address of their lab VMs. After a VM reset, this private IP address might change.<br/><br/>If you implement this scenario, don't delete the public IP address or load balancer associated with the lab. If those resources are deleted, the lab fails to scale or publish. |
+| Access lab VMs by private IP address (private-only labs) | Not recommended | This scenario is functional, but makes it difficult for lab users to connect to their lab VM. In the Azure Lab Services website, lab users can't identify the private IP address of their lab VM. In addition, the connect button points to the public endpoint of the lab VM. The lab creator needs to provide lab users with the private IP address of their lab VMs. After a VM reimage, this private IP address might change.<br/><br/>If you implement this scenario, don't delete the public IP address or load balancer associated with the lab. If those resources are deleted, the lab fails to scale or publish. |
| Protect on-premises resources with a firewall | Yes | Putting a firewall between the lab VMs and a specific resource is supported. |
| Put lab VMs behind a firewall. For example, for content filtering, security, and more. | No | The typical firewall setup doesn't work with Azure Lab Services, unless you connect to lab VMs by private IP address (see the previous scenario).<br/><br/>When you set up the firewall, a default route is added on the route table for the subnet. This default route introduces an asymmetric routing problem, which breaks the RDP/SSH connections to the lab. |
| Use third party over-the-shoulder monitoring software | Yes | This scenario is supported with advanced networking for lab plans. |
lab-services How To Access Lab Virtual Machine https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/how-to-access-lab-virtual-machine.md
In addition, you can also perform specific actions on the lab VM:
- Start or stop the lab VM: learn more about [starting and stopping a lab VM](#start-or-stop-the-lab-vm).
- Connect to the lab VM: select the computer icon to connect to the lab VM with remote desktop or SSH. Learn more about [connecting to the lab VM](./connect-virtual-machine.md).
-- Reset or troubleshoot the lab VM: learn more how you [reset or troubleshoot the lab VM](./how-to-reset-and-redeploy-vm.md) when you experience problems.
+- Redeploy or reimage the lab VM: learn how you can [redeploy or reimage the lab VM](./how-to-reset-and-redeploy-vm.md) when you experience problems.
## View quota hours
Learn more about how to [connect to a lab VM](connect-virtual-machine.md).
## Next steps

- Learn how to [change your lab VM password](./how-to-set-virtual-machine-passwords-student.md)
-- Learn how to [reset or troubleshoot your lab VM](./how-to-reset-and-redeploy-vm.md)
+- Learn how to [redeploy or reimage your lab VM](./how-to-reset-and-redeploy-vm.md)
- Learn about [key concepts in Azure Lab Services](./classroom-labs-concepts.md), such as quota hours or lab schedules.
lab-services How To Manage Lab Users https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/how-to-manage-lab-users.md
To view the list of lab users that have already registered for the lab by using
The list shows the lab users and their registration status. The user status should show **Registered**, and their name should also be available after registration.

> [!NOTE]
- > If you [republish a lab](how-to-create-manage-template.md#publish-the-template-vm) or [Reset VMs](how-to-manage-vm-pool.md#reset-lab-vms), the users remain registered for the labs' VMs. However, the contents of the VMs will be deleted and the VMs will be recreated with the template VM's image.
+ > If you [republish a lab](how-to-create-manage-template.md#publish-the-template-vm) or [Reimage VMs](how-to-manage-vm-pool.md#reimage-lab-vms), the users remain registered for the labs' VMs. However, the contents of the VMs will be deleted and the VMs will be recreated with the template VM's image.
# [Azure AD group](#tab/aad)
lab-services How To Manage Vm Pool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/how-to-manage-vm-pool.md
Last updated 07/04/2023
# Manage a lab virtual machine pool in Azure Lab Services
-The lab virtual machine pool represents the set of lab virtual machines (VMs) that are available for lab users to connect to. The lab VM creation starts when you publish a lab template, or when you update the lab capacity. Learn how to change the capacity of the lab and modify the number of lab virtual machines, or manage the state of individual lab VMs.
+Learn how you can manage the pool of lab virtual machines (VMs) in Azure Lab Services. Change the capacity of the lab to add or remove lab VMs, connect to a lab, or manage the state of individual lab VMs.
+
+The lab virtual machine pool represents the set of lab VMs that are available for lab users to connect to. The lab VM creation starts when you publish a lab template, or when you update the lab capacity.
When you synchronize the lab user list with an Azure AD group, or create a lab in Teams or Canvas, Azure Lab Services manages the lab VM pool automatically based on membership.
-When you manage a lab VM pool, you can:
+## Prerequisites
+ -- Start and stop all or selected lab VMs.-- Reset a VM-- Connect to a lab user's VM.-- Change the lab capacity.
+## Lab VM states
-Lab VMs can be in one of a few states.
+A lab VM can be in one of the following states:
- **Unassigned**. The lab VM is not assigned to a lab user yet. The lab VM doesn't automatically start with the lab schedule.
- **Stopped**. The lab VM is turned off and not available for use.
Lab VMs can be in one of a few states.
- **Running**. The lab VM is running and is available for use.
- **Stopping**. The lab VM is stopping and not available for use.
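To illustrate, you can also read these states programmatically; a minimal sketch assuming the azure-mgmt-labservices Python package and placeholder resource names:

```python
# Sketch: list each lab VM and its current state. All names are placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.labservices import LabServicesClient

client = LabServicesClient(DefaultAzureCredential(), "<subscription-id>")

for vm in client.virtual_machines.list_by_lab("<resource-group>", "<lab-name>"):
    print(f"{vm.name}: {vm.state}")  # for example: Stopped, Starting, Running
```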
-> [!WARNING]
-> When you start a lab VM, it doesn't affect the available [quota hours](./classroom-labs-concepts.md#quota) for the lab user. Make sure to stop all lab VMs manually or use a [schedule](how-to-create-schedules.md) to avoid unexpected costs.
-
-## Prerequisites
--
## Change lab capacity

When you synchronize the lab user list with an Azure AD group, or create a lab in Teams or Canvas, Azure Lab Services manages the lab VM pool automatically based on membership. When you add or remove a user, the lab capacity increases or decreases accordingly. Lab users are also automatically registered and assigned to their lab VM.
To manually start all lab VMs:
1. Select the **Start all** button at the top of the page.
- :::image type="content" source="./media/how-to-set-virtual-machine-passwords/start-all-vms-button.png" alt-text="Screenshot that shows the Virtual machine pool page and the Start all button is highlighted.":::
+ :::image type="content" source="./media/how-to-manage-vm-pool/start-all-vms-button.png" alt-text="Screenshot that shows the Virtual machine pool page and the Start all button is highlighted.":::
To start individual lab VMs:
1. Alternatively, select multiple VMs using the checkboxes to the left of the **Name** column, and then select the **Start** button at the top of the page.
+> [!NOTE]
+> When you start a lab VM *from the virtual machine pool page*, it doesn't affect the available [quota hours](./classroom-labs-concepts.md#quota) for the lab user. Make sure to stop all lab VMs manually or use a [schedule](how-to-create-schedules.md) to avoid unexpected costs.
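If you prefer to script this, a hedged sketch that starts every stopped VM in a lab with the Python SDK (azure-mgmt-labservices); the resource names are placeholders, and the same caution about costs applies:

```python
# Sketch: start all stopped VMs in a lab. Remember to stop them again,
# manually or with a schedule, to avoid unexpected costs.
from azure.identity import DefaultAzureCredential
from azure.mgmt.labservices import LabServicesClient

client = LabServicesClient(DefaultAzureCredential(), "<subscription-id>")

for vm in client.virtual_machines.list_by_lab("<resource-group>", "<lab-name>"):
    if vm.state == "Stopped":
        # begin_start returns a poller for the long-running start operation
        client.virtual_machines.begin_start(
            "<resource-group>", "<lab-name>", vm.name).result()
```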
+ ## Manually stop lab VMs

To manually stop all lab VMs:
1. Select the **Stop all** button to stop all of the lab VMs.
- :::image type="content" source="./media/how-to-set-virtual-machine-passwords/stop-all-vms-button.png" alt-text="Screenshot that shows the Virtual machine pool page and the Stop all button is highlighted.":::
+ :::image type="content" source="./media/how-to-manage-vm-pool/stop-all-vms-button.png" alt-text="Screenshot that shows the Virtual machine pool page and the Stop all button is highlighted.":::
To stop individual lab VMs:

1. Alternatively, select multiple VMs using the checkboxes to the left of the **Name** column, and then select the **Stop** button at the top of the page.
-## Reset lab VMs
+## Reimage lab VMs
-When you reset a lab VM, Azure Lab Services shuts down the lab VM, deletes it, and recreates a new lab VM from the original template VM. You can think of a reset as a refresh of the entire lab VM.
+When you reimage a lab VM, Azure Lab Services shuts down the lab VM, deletes it, and recreates a new lab VM from the original lab template. You can think of a reimage operation as a refresh of the entire VM.
> [!CAUTION]
-> After you reset a lab VM, all the data that's saved on the OS disk (usually the C: drive on Windows), and the temporary disk (usually the D: drive on Windows), is lost. Learn how to [store the user data outside the lab VM](/azure/lab-services/troubleshoot-access-lab-vm#store-user-data-outside-the-lab-vm).
+> After you reimage a lab VM, all the data that you saved on the OS disk (usually the C: drive on Windows), and the temporary disk (usually the D: drive on Windows), is lost. Learn how you can [store the user data outside the lab VM](./troubleshoot-access-lab-vm.md#store-user-data-outside-the-lab-vm).
-To reset one or more lab VMs:
+To reimage one or more lab VMs:
1. Go to the **Virtual machine pool** page for the lab.
-1. Select **Reset** in the toolbar.
+1. Select one or multiple VMs from the list, and then select **Reimage** in the toolbar.
+
+ :::image type="content" source="./media/how-to-manage-vm-pool/reset-vm-button.png" alt-text="Screenshot of virtual machine pool. Reimage button is highlighted.":::
- :::image type="content" source="./media/how-to-set-virtual-machine-passwords/reset-vm-button.png" alt-text="Screenshot of virtual machine pool. Reset button is highlighted.":::
+1. On the **Reimage virtual machine** dialog box, select **Reimage** to start the operation.
-1. On the **Reset virtual machine(s)** dialog box, select **Reset**.
+ After the reimage operation finishes, the lab VMs are recreated from the lab template, and assigned to the lab users.
- :::image type="content" source="./media/how-to-set-virtual-machine-passwords/reset-vms-dialog.png" alt-text="Screenshot of reset virtual machine confirmation dialog.":::
+## Redeploy lab VMs
+
+When you redeploy a lab VM, Azure Lab Services shuts down the lab VM, moves the lab VM to a new node in the Azure infrastructure, and then powers it back on. You can think of a redeploy operation as a refresh of the underlying VM for your lab.
+
+All data that you saved in the [OS disk](/azure/virtual-machines/managed-disks-overview#os-disk) (usually the C: drive on Windows) of the VM is still available after the redeploy operation. Any data on the [temporary disk](/azure/virtual-machines/managed-disks-overview#temporary-disk) (usually the D: drive on Windows) is lost after a redeploy operation.
+
+To redeploy one or more lab VMs:
+
+1. Go to the **Virtual machine pool** page for the lab.
-### Redeploy lab VMs
+1. Select one or multiple VMs from the list, and then select **Redeploy** in the toolbar.
-When you use [lab plans](./lab-services-whats-new.md), lab users can now redeploy their lab VM. This operation is labeled **Troubleshoot** in Azure Lab Services. When you redeploy a lab VM, Azure Lab Services will shut down the VM, move the VM to a new node within the Azure infrastructure, and then power it back on.
+ :::image type="content" source="./media/how-to-manage-vm-pool/redeploy-vm-button.png" alt-text="Screenshot that shows the virtual machine pool in the Lab Services web portal, highlighting the Redeploy button.":::
-Learn how [lab users can redeploy their lab VM](./how-to-reset-and-redeploy-vm.md#redeploy-vms).
+1. On the **Redeploy virtual machine** dialog box, select **Redeploy** to start the redeployment.
## Connect to lab VMs
lab-services How To Prepare Windows Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/how-to-prepare-windows-template.md
This article describes best practices and tips for preparing a Windows-based lab template.
## Install and configure OneDrive
-When a lab user resets a lab virtual machine, all data on the machine is removed. To protect user data from being lost, we recommend that lab users back up their data in the cloud, for example by using Microsoft OneDrive.
+When a lab user reimages a lab virtual machine, all data on the machine is removed. To protect user data from being lost, we recommend that lab users back up their data in the cloud, for example by using Microsoft OneDrive.
### Install OneDrive
lab-services How To Reset And Redeploy Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/how-to-reset-and-redeploy-vm.md
Title: Troubleshoot a VM in Azure Lab Services
-description: Learn how to troubleshoot a VM in Azure Lab Services by redeploying or resetting the VM.
+ Title: Troubleshoot a lab VM
+description: Learn how you can troubleshoot your lab VM in Azure Lab Services by redeploying the VM to another hardware node, or by reimaging the lab VM to its initial state.
Previously updated : 12/06/2022 Last updated : 09/28/2023 <!-- As a student, I want to be able to troubleshoot connectivity problems with my VM so that I can get back up and running quickly, without having to escalate an issue -->
-# Troubleshoot a lab by redeploying or resetting the VM
+# Troubleshoot a lab VM with redeploy or reimage
-On rare occasions, you may have problems connecting to a VM in one of your labs. In this article, you learn how to redeploy or reset a lab VM in Azure Lab Services. You can use these troubleshooting steps to resolve connectivity issues for your assigned labs, without support from an educator or admin.
+In this article, you learn how to troubleshoot problems with connecting to your lab virtual machine (VM) in Azure Lab Services. As a lab user, you can perform troubleshooting operations on the lab VM, without support from the lab creator or an administrator.
-Learn more about [strategies for troubleshooting lab VMs](./troubleshoot-access-lab-vm.md).
+You can perform the following troubleshooting operations on the lab VM:
-## Reset VMs
+- **Redeploy a lab VM**: Azure Lab Services moves the VM to a new node in the Azure infrastructure, and then powers it back on. All data on the OS disk is still available after a redeploy operation.
-When you reset a lab VM, Azure Lab Services will shut down the VM, delete it, and recreate a new lab VM from the original template VM. You can think of a reset as a refresh of the entire VM.
+- **Reimage a lab VM**: Azure Lab Services recreates a new lab VM from the original template. All data in the lab VM is lost.
-You can reset a lab VM that is assigned to you. If you have the Lab Assistant, Lab Contributor, or Lab Operator role, you can reset any lab VM for which you have permissions.
+## Redeploy a lab VM
-You can also reset a lab VM by using the [REST api](/rest/api/labservices/virtual-machines/reimage), [PowerShell](/powershell/module/az.labservices/update-azlabservicesvmreimage), or the [.NET SDK](/dotnet/api/azure.resourcemanager.labservices.labvirtualmachineresource.reimage).
+When you redeploy a lab VM, Azure Lab Services shuts down the lab VM, moves the lab VM to a new node in the Azure infrastructure, and then powers it back on. You can think of a redeploy operation as a refresh of the underlying VM for your lab.
-> [!WARNING]
-> After you reset a lab VM, all the data that you saved on the OS disk (usually the C: drive on Windows), and the temporary disk (usually the D: drive on Windows), is lost. Learn how you can [store the user data outside the lab VM](./troubleshoot-access-lab-vm.md#store-user-data-outside-the-lab-vm).
+All data that you saved in the [OS disk](/azure/virtual-machines/managed-disks-overview#os-disk) (usually the C: drive on Windows) of the VM is still available after the redeploy operation. Any data on the [temporary disk](/azure/virtual-machines/managed-disks-overview#temporary-disk) (usually the D: drive on Windows) is lost after a redeploy operation.
+
+You can redeploy a lab VM that is assigned to you. If you have the Lab Assistant or Lab Contributor role, you can redeploy any lab VM for which you have permissions.
-To reset a lab VM in the Azure Lab Services website that's assigned to you:
+To redeploy a lab VM that's assigned to you:
1. Go to the [Azure Lab Services website](https://labs.azure.com/virtualmachines) to view your virtual machines.
-1. For a specific lab VM, select **...** > **Reset**.
+1. For a specific lab VM, select **...** > **Redeploy**.
- :::image type="content" source="./media/how-to-reset-and-redeploy-vm/reset-single-vm.png" alt-text="Screenshot that shows how to reset a lab VM in the Lab Services web portal, highlighting the Reset button.":::
+ :::image type="content" source="./media/how-to-reset-and-redeploy-vm/redeploy-single-vm.png" alt-text="Screenshot that shows the Redeploy virtual machine menu option in the Lab Services web portal.":::
-1. On the **Reset virtual machine** dialog box, select **Reset**.
+1. On the **Redeploy virtual machine** dialog box, select **Redeploy** to start the redeployment.
- :::image type="content" source="./media/how-to-reset-and-redeploy-vm/reset-single-vm-confirmation.png" alt-text="Screenshot that shows the confirmation dialog for resetting a single VM in the Lab Services web portal.":::
+ :::image type="content" source="./media/how-to-reset-and-redeploy-vm/redeploy-single-vm-confirmation.png" alt-text="Screenshot that shows the confirmation dialog for redeploying a single VM in the Lab Services web portal.":::
-Alternatively, if you have permissions across multiple labs, you can reset multiple VMs for a lab:
+Alternatively, if you have permissions across multiple labs, you can redeploy multiple VMs for a lab:
1. Go to the [Azure Lab Services website](https://labs.azure.com).
1. Select a lab, and then go to the **Virtual machine pool** tab.
-1. Select one or multiple VMs from the list, and then select **Reset** in the toolbar.
-
- :::image type="content" source="./media/how-to-reset-and-redeploy-vm/reset-vm-button.png" alt-text="Screenshot that shows the virtual machine pool in the Lab Services web portal, highlighting the Reset button.":::
+1. Select one or multiple VMs from the list, and then select **Redeploy** in the toolbar.
-1. On the **Reset virtual machine** dialog box, select **Reset**.
+ :::image type="content" source="./media/how-to-reset-and-redeploy-vm/redeploy-vm-button.png" alt-text="Screenshot that shows the virtual machine pool in the Lab Services web portal, highlighting the Redeploy button.":::
- :::image type="content" source="./media/how-to-reset-and-redeploy-vm/reset-vms-dialog.png" alt-text="Screenshot that shows the reset virtual machine confirmation dialog in the Lab Services web portal.":::
+1. On the **Redeploy virtual machine** dialog box, select **Redeploy** to start the redeployment.
-## Redeploy VMs
-When you use lab plans, introduced in the [April 2022 Update](lab-services-whats-new.md), you can now also redeploy a lab VM. This operation is labeled **Troubleshoot** in the Azure Lab Services website and is available in the student's view of their VMs.
+You can also redeploy a lab VM by using the [REST API](/rest/api/labservices/virtual-machines/redeploy), [PowerShell](/powershell/module/az.labservices/start-azlabservicesvmredeployment), or the [.NET SDK](/dotnet/api/azure.resourcemanager.labservices.labvirtualmachineresource.redeploy).
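For example, a minimal sketch of the redeploy operation with the Python SDK (azure-mgmt-labservices); the resource names are placeholders:

```python
# Sketch: redeploy a lab VM to a new infrastructure node. OS disk data
# is preserved; temporary disk data is lost.
from azure.identity import DefaultAzureCredential
from azure.mgmt.labservices import LabServicesClient

client = LabServicesClient(DefaultAzureCredential(), "<subscription-id>")

poller = client.virtual_machines.begin_redeploy(
    "<resource-group>", "<lab-name>", "<vm-name>")
poller.result()  # wait for the long-running operation to complete
```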
-When you redeploy a lab VM, Azure Lab Services will shut down the VM, move the VM to a new node in within the Azure infrastructure, and then power it back on. You can think of a redeploy operation as a refresh of the underlying VM for your lab. All data that you saved in the [OS disk](/azure/virtual-machines/managed-disks-overview#os-disk) (usually the C: drive on Windows) of the VM will still be available after the redeploy operation. Any data on the [temporary disk](/azure/virtual-machines/managed-disks-overview#temporary-disk) (usually the D: drive on Windows) is lost after a redeploy operation.
+## Reimage a lab VM
-You can only redeploy a lab VM in the Azure Lab Services website that is assigned to you.
+When you reimage a lab VM, Azure Lab Services shuts down the lab VM, deletes it, and recreates a new lab VM from the original lab template. You can think of a reimage operation as a refresh of the entire VM.
-You can also redeploy a lab VM by using the [REST api](/rest/api/labservices/virtual-machines/redeploy), [PowerShell](/powershell/module/az.labservices/start-azlabservicesvmredeployment), or the [.NET SDK](/dotnet/api/azure.resourcemanager.labservices.labvirtualmachineresource.redeploy).
+You can reimage a lab VM that is assigned to you. If you have the Lab Assistant or Lab Contributor role, you can reimage any lab VM for which you have permissions.
> [!WARNING]
-> After you redeploy a VM, all the data that you saved on the [temporary disk](/azure/virtual-machines/managed-disks-overview#temporary-disk) (D: drive by default on Windows) is lost.
+> After you reimage a lab VM, all the data that you saved on the OS disk (usually the C: drive on Windows), and the temporary disk (usually the D: drive on Windows), is lost. Learn how you can [store the user data outside the lab VM](./troubleshoot-access-lab-vm.md#store-user-data-outside-the-lab-vm).
-To redeploy a lab VM in the Azure Lab Services website:
+To reimage a lab VM that's assigned to you:
1. Go to the [Azure Lab Services website](https://labs.azure.com/virtualmachines) to view your virtual machines.
-1. For a specific lab VM, select **...** > **Troubleshoot**.
+1. For a specific lab VM, select **...** > **Reimage**.
- :::image type="content" source="./media/how-to-reset-and-redeploy-vm/redeploy-vms.png" alt-text="Screenshot that shows the Redeploy virtual machine menu option in the Lab Services web portal.":::
+ :::image type="content" source="./media/how-to-reset-and-redeploy-vm/reset-single-vm.png" alt-text="Screenshot that shows how to reimage a lab VM in the Lab Services web portal, highlighting the Reimage button.":::
-1. On the **Troubleshoot virtual machine** dialog box, select **Redeploy**.
+1. On the **Reimage virtual machine** dialog box, select **Reimage**.
- :::image type="content" source="./media/how-to-reset-and-redeploy-vm/redeploy-single-vm-confirmation.png" alt-text="Screenshot that shows the confirmation dialog for redeploying a single VM in the Lab Services web portal.":::
+ :::image type="content" source="./media/how-to-reset-and-redeploy-vm/reset-single-vm-confirmation.png" alt-text="Screenshot that shows the confirmation dialog for reimaging a single VM in the Lab Services web portal.":::
+
+Alternatively, if you have permissions across multiple labs, you can reimage multiple VMs for a lab:
+
+1. Go to the [Azure Lab Services website](https://labs.azure.com).
+
+1. Select a lab, and then go to the **Virtual machine pool** tab.
+
+1. Select one or multiple VMs from the list, and then select **Reimage** in the toolbar.
+
+ :::image type="content" source="./media/how-to-reset-and-redeploy-vm/reset-vm-button.png" alt-text="Screenshot that shows the virtual machine pool in the Lab Services web portal, highlighting the Reimage button.":::
+
+1. On the **Reimage virtual machine** dialog box, select **Reimage**.
+
+You can also reimage a lab VM by using the [REST API](/rest/api/labservices/virtual-machines/reimage), [PowerShell](/powershell/module/az.labservices/update-azlabservicesvmreimage), or the [.NET SDK](/dotnet/api/azure.resourcemanager.labservices.labvirtualmachineresource.reimage).
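Similarly, a minimal sketch of the reimage operation with the Python SDK; the resource names are placeholders:

```python
# Sketch: reimage a lab VM, recreating it from the original lab template.
# All data saved on the VM is lost after a reimage.
from azure.identity import DefaultAzureCredential
from azure.mgmt.labservices import LabServicesClient

client = LabServicesClient(DefaultAzureCredential(), "<subscription-id>")

poller = client.virtual_machines.begin_reimage(
    "<resource-group>", "<lab-name>", "<vm-name>")
poller.result()  # wait for the long-running operation to complete
```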
## Next steps
+- Learn more about [strategies for troubleshooting lab VMs](./troubleshoot-access-lab-vm.md).
- As a student, learn to [access labs](how-to-use-lab.md).
- As a student, [connect to a VM](connect-virtual-machine.md).
lab-services Reliability In Azure Lab Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/reliability-in-azure-lab-services.md
Azure Lab Services is not currently zone aligned. So, VMs in a region may be dis
As a result, the following operations are not guaranteed in the event of a zone outage:

- Manage or access labs/VMs
-- Start/stop/reset VMs
+- Start/stop/reimage VMs
- Create/publish/delete labs
- Scale up/down labs
- Connect to VMs
lab-services Troubleshoot Access Lab Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/troubleshoot-access-lab-vm.md
In this article, you learn about the different approaches for troubleshooting la
## Prerequisites -- To change settings for the lab plan, your Azure account needs the Owner or Contributor Azure Active Directory role on the lab plan. Learn more about the [Azure Lab Services built-in roles](./administrator-guide.md#rbac-roles).
+- To change settings for the lab plan, your Azure account needs the Owner or Contributor role on the lab plan. Learn more about the [Azure Lab Services built-in roles](./concept-lab-services-role-based-access-control.md).
-- To redeploy or reset a lab VM, you need to be either the lab user that is assigned to the VM, or your Azure account has the Owner, Contributor, Lab Creator, Lab Contributor, or Lab Operator role. Learn more about the [Azure Lab Services built-in roles](./administrator-guide.md#rbac-roles).
+- To redeploy or reimage a lab VM, you need to be either the lab user that is assigned to the VM, or your Azure account has the Owner, Contributor, Lab Creator, or Lab Contributor role. Learn more about the [Azure Lab Services built-in roles](./concept-lab-services-role-based-access-control.md).
## Symptoms
To use and access a lab VM, you connect to it by using Remote Desktop (RDP) or Secure Shell (SSH).
### Unable to connect to the lab VM with Remote Desktop (RDP) or Secure Shell (SSH)
-1. [Redeploy your lab VM](./how-to-reset-and-redeploy-vm.md#redeploy-vms) to another infrastructure node, while maintaining the user data.
+1. [Redeploy your lab VM](./how-to-reset-and-redeploy-vm.md#redeploy-a-lab-vm) to another infrastructure node, while maintaining the user data.
- This approach might help resolve issues with the underlying virtual machine. Learn more about [redeploying versus resetting a lab VM](#redeploy-versus-reset-a-lab-vm) and how they affect your user data.
+ This approach might help resolve issues with the underlying virtual machine. Learn more about [redeploying versus reimaging a lab VM](#redeploy-versus-reimage-a-lab-vm) and how they affect your user data.
1. [Verify your organization's firewall settings for your lab](./how-to-configure-firewall-settings.md) with the educator and IT admin. A change in the organization's firewall or network settings might prevent your computer from connecting to the lab VM.
-1. If you still can't connect to the lab VM, [reset the lab VM](./how-to-reset-and-redeploy-vm.md#reset-vms).
+1. If you still can't connect to the lab VM, [reimage the lab VM](./how-to-reset-and-redeploy-vm.md#reimage-a-lab-vm).
> [!IMPORTANT]
- > Resetting a lab VM deletes the user data in the VM. Make sure to [store the user data outside the lab VM](#store-user-data-outside-the-lab-vm).
+ > Reimaging a lab VM deletes the user data in the VM. Make sure to [store the user data outside the lab VM](#store-user-data-outside-the-lab-vm).
### Unable to login with the credentials you used for creating the lab
The lab VM might be malfunctioning as a result of installing a software componen
1. If the lab VM uses Windows, you might use the Windows System Restore built-in functionality to undo a previous change to the operating system. Verify with an educator or IT admin how to use [System Restore](https://support.microsoft.com/windows/use-system-restore-a5ae3ed9-07c4-fd56-45ee-096777ecd14e).
-1. If the lab VM is still in an incorrect state, [reset the lab VM](./how-to-reset-and-redeploy-vm.md#reset-vms).
+1. If the lab VM is still in an incorrect state, [reimage the lab VM](./how-to-reset-and-redeploy-vm.md#reimage-a-lab-vm).
> [!IMPORTANT]
- > Resetting a lab VM deletes the user data in the VM. Make sure to [store the user data outside the lab VM](#store-user-data-outside-the-lab-vm).
+ > Reimaging a lab VM deletes the user data in the VM. Make sure to [store the user data outside the lab VM](#store-user-data-outside-the-lab-vm).
-## Redeploy versus reset a lab VM
+## Redeploy versus reimage a lab VM
-Azure Lab Services lets you redeploy, labeled *troubleshooting* in the Azure portal, or reset a lab VM. Both operations are similar, and result in the creation of a new virtual machine instance. However, there are fundamental differences that affect the user data on the lab VM.
+Azure Lab Services lets you redeploy or reimage a lab VM. Both operations are similar, and result in the creation of a new virtual machine instance. However, there are fundamental differences that affect the user data on the lab VM.
When you redeploy a lab VM, Azure Lab Services will shut down the VM, move the VM to a new node within the Azure infrastructure, and then power it back on. You can think of a redeploy operation as a refresh of the underlying VM for your lab. All data that you saved in the [OS disk](/azure/virtual-machines/managed-disks-overview#os-disk) (usually the C: drive on Windows) of the VM will still be available after the redeploy operation. Any data on the [temporary disk](/azure/virtual-machines/managed-disks-overview#temporary-disk) (usually the D: drive on Windows) is lost after a redeploy operation and after a VM shutdown.
-Learn more about how to [redeploy a lab VM in the Azure Lab Services website](./how-to-reset-and-redeploy-vm.md#redeploy-vms).
+Learn more about how to [redeploy a lab VM in the Azure Lab Services website](./how-to-reset-and-redeploy-vm.md#redeploy-a-lab-vm).
-When you reset a lab VM, Azure Lab Services will shut down the VM, delete it, and recreate a new lab VM from the original template VM. You can think of a reset as a refresh of the entire VM. After you reset a lab VM, all the data that you saved on the OS disk (usually the C: drive on Windows), and the temporary disk (usually the D: drive on Windows), is lost. To avoid losing data on the VM, [store the user data outside the lab VM](#store-user-data-outside-the-lab-vm).
+When you reimage a lab VM, Azure Lab Services will shut down the VM, delete it, and recreate a new lab VM from the original template VM. You can think of a reimage as a refresh of the entire VM. After you reimage a lab VM, all the data that you saved on the OS disk (usually the C: drive on Windows), and the temporary disk (usually the D: drive on Windows), is lost. To avoid losing data on the VM, [store the user data outside the lab VM](#store-user-data-outside-the-lab-vm).
-Learn more about how to [reset a lab VM in the Azure Lab Services website](./how-to-reset-and-redeploy-vm.md#reset-vms).
+Learn more about how to [reimage a lab VM in the Azure Lab Services website](./how-to-reset-and-redeploy-vm.md#reimage-a-lab-vm).
> [!NOTE]
-> Redeploying a VM is only available for lab VMs that you created in a lab plan. VMs that are connected to a lab account only support the reset operation.
+> Redeploying a VM is only available for lab VMs that you created in a lab plan. VMs that are connected to a lab account only support the reimage operation.
## Store user data outside the lab VM
-When you reset a lab VM, all user data on the VM is lost. To avoid losing this data, you have to store the user data outside of the lab VM. You have different options to configure the template VM:
+When you reimage a lab VM, all user data on the VM is lost. To avoid losing this data, you have to store the user data outside of the lab VM. You have different options to configure the template VM:
- [Use OneDrive to store user data](./how-to-prepare-windows-template.md#install-and-configure-onedrive). - [Attach external file storage](./how-to-attach-external-storage.md), such as Azure Files or Azure NetApp Files.
Learn how to [set up a new lab](./tutorial-setup-lab.md#create-a-lab) and how to
## Next steps

-- As a student, learn how to [reset or deploy lab VMs](./how-to-reset-and-redeploy-vm.md).
+- As a lab user, learn how to [reimage or redeploy lab VMs](./how-to-reset-and-redeploy-vm.md).
- As an admin or educator, [attach external file storage to a lab](./how-to-attach-external-storage.md).-- As an educator, [use OneDrive to store user data](./how-to-prepare-windows-template.md#install-and-configure-onedrive).
+- As a lab creator, [use OneDrive to store user data](./how-to-prepare-windows-template.md#install-and-configure-onedrive).
lab-services Tutorial Track Usage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/tutorial-track-usage.md
In this tutorial, you do the following actions:
> [!div class="checklist"]
> * View users registered with your lab
> * View the usage of VMs in the lab
-> * Manage student VMs
+> * Manage lab VMs
## View registered users

1. Navigate to the Lab Services web portal ([https://labs.azure.com](https://labs.azure.com)).
2. Select **Sign in** and enter your credentials. Azure Lab Services supports organizational accounts and Microsoft accounts.
3. On the **My labs** page, select the lab for which you want to track the usage.
-4. Select **Users** on the left menu or **Users** tile. You see students who have registered with your lab.
+4. Select **Users** on the left menu or **Users** tile. You see the list of lab users who have registered with your lab.
![Registered users](./media/tutorial-track-usage/registered-users.png)
In this tutorial, you do the following actions:
## View the usage of VMs

1. Select **Virtual machines** on the menu to the left.
-2. Confirm that you see the status of VMs and the number of hours the VMs have been running. The time that a lab owner spends on a student VM doesn't count against the usage time shown in the last column.
+2. Confirm that you see the status of VMs and the number of hours the VMs have been running. The time that a lab owner spends on a lab VM doesn't count against the usage time shown in the last column.
![VM usage](./media/tutorial-track-usage/vm-usage.png)
-## Manage student VMs
+## Manage lab VMs
-On this page, you can start, stop, or reset student VMs by using controls in the **State** column or on the toolbar.
+On this page, you can start, stop, or reimage lab user's VMs by using controls in the **State** column or on the toolbar.
![VM actions](./media/tutorial-track-usage/vm-controls.png)

For more information about managing the virtual machine pool for the lab, see [Set up and manage virtual machine pool](how-to-set-virtual-machine-passwords.md).

> [!NOTE]
-> When an educator turns on a student VM, quota for the student isn't affected. Quota for a user specifies the number of lab hours available to the user outside of the scheduled class time. For more information on quotas, see [Set quotas for users](how-to-manage-lab-users.md?#set-quotas-for-users).
+> When an educator turns on a lab user's VM, quota for the lab user isn't affected. Quota for a user specifies the number of lab hours available to the user outside of the scheduled class time. For more information on quotas, see [Set quotas for users](how-to-manage-lab-users.md?#set-quotas-for-users).
## Next steps
load-balancer Backend Pool Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/backend-pool-management.md
az vm create \
--resource-group myResourceGroup \
  --name myVM \
  --nics myNic \
- --image UbuntuLTS \
+ --image Ubuntu2204 \
--admin-username azureuser \
  --generate-ssh-keys
```
machine-learning Concept Automl Forecasting Methods https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-automl-forecasting-methods.md
Previously updated : 01/27/2023 Last updated : 09/27/2023 show_latex: true
Each Series in Own Group (1:1) | All Series in Single Group (N:1)
-- | --
Naive, Seasonal Naive, Average, Seasonal Average, Exponential Smoothing, ARIMA, ARIMAX, Prophet | Linear SGD, LARS LASSO, Elastic Net, K Nearest Neighbors, Decision Tree, Random Forest, Extremely Randomized Trees, Gradient Boosted Trees, LightGBM, XGBoost, TCNForecaster
-More general model groupings are possible via AutoML's Many-Models solution; see our [Many Models- Automated ML notebook](https://github.com/Azure/azureml-examples/blob/v1-archive/v1/python-sdk/tutorials/automl-with-azureml/forecasting-many-models/auto-ml-forecasting-many-models.ipynb) and [Hierarchical time series- Automated ML notebook](https://github.com/Azure/azureml-examples/blob/v1-archive/v1/python-sdk/tutorials/automl-with-azureml/forecasting-hierarchical-timeseries/auto-ml-forecasting-hierarchical-timeseries.ipynb).
+More general model groupings are possible via AutoML's Many-Models solution; see our [Many Models- Automated ML notebook](https://github.com/Azure/azureml-examples/blob/main/sdk/python/jobs/pipelines/1k_demand_forecasting_with_pipeline_components/automl-forecasting-demand-many-models-in-pipeline/automl-forecasting-demand-many-models-in-pipeline.ipynb) and [Hierarchical time series- Automated ML notebook](https://github.com/Azure/azureml-examples/blob/main/sdk/python/jobs/pipelines/1k_demand_forecasting_with_pipeline_components/automl-forecasting-demand-hierarchical-timeseries-in-pipeline/automl-forecasting-demand-hierarchical-timeseries-in-pipeline.ipynb).
## Next steps
More general model groupings are possible via AutoML's Many-Models solution; see
* Learn about how AutoML creates [features from the calendar](./concept-automl-forecasting-calendar-features.md).
* Learn about how AutoML creates [lag features](./concept-automl-forecasting-lags.md).
* Read answers to [frequently asked questions](./how-to-automl-forecasting-faq.md) about forecasting in AutoML.
+
machine-learning How To Devops Machine Learning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-devops-machine-learning.md
This tutorial uses [Azure Machine Learning Python SDK v2](/python/api/overview/a
* Complete the [Create resources to get started](quickstart-create-resources.md) to:
  * Create a workspace
  * [Create a cloud-based compute cluster](how-to-create-attach-compute-cluster.md#create) to use for training your model
-* Azure Machine Learning extension (preview) for Azure Pipelines. This extension can be installed from the Visual Studio marketplace at [https://marketplace.visualstudio.com/items?itemName=ms-air-aiagility.azureml-v2](https://marketplace.visualstudio.com/items?itemName=ms-air-aiagility.azureml-v2).
-
- > [!TIP]
- >This extension isn't required to submit the Azure Machine Learning job; it's required to be able to wait for the job completion.
-
- [!INCLUDE [machine-learning-preview-generic-disclaimer](includes/machine-learning-preview-generic-disclaimer.md)]
-
+* Azure Machine Learning extension for Azure Pipelines. This extension can be installed from the Visual Studio marketplace at [https://marketplace.visualstudio.com/items?itemName=ms-air-aiagility.azureml-v2](https://marketplace.visualstudio.com/items?itemName=ms-air-aiagility.azureml-v2).
## Step 1: Get the code
machine-learning How To Secure Online Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-secure-online-endpoint.md
Previously updated : 09/27/2023 Last updated : 09/28/2023
To begin, you need an Azure subscription and the CLI or SDK to interact with Azure Machine Learning.
For more information on how to create a new workspace or to upgrade your existing workspace to use a managed virtual network, see [Configure a managed virtual network to allow internet outbound](how-to-managed-network.md#configure-a-managed-virtual-network-to-allow-internet-outbound).
- When the workspace is configured with a private endpoint, the Azure Container Registry for the workspace must be configured for __Premium__ tier to allow access via the private endpoint. For more information, see [Azure Container Registry service tiers](../container-registry/container-registry-skus.md). Also, the workspace should be set with the `image_build_compute` property, as deployment creation involves building of images. For more information on setting the `image_build_compute` property for your workspace, see [Create a workspace that uses a private endpoint](how-to-configure-private-link.md#create-a-workspace-that-uses-a-private-endpoint).
+ When the workspace is configured with a private endpoint, the Azure Container Registry for the workspace must be configured for __Premium__ tier to allow access via the private endpoint. For more information, see [Azure Container Registry service tiers](../container-registry/container-registry-skus.md). Also, the workspace should be set with the `image_build_compute` property, as deployment creation involves building of images. For more information, see [Configure image builds](how-to-managed-network.md#configure-image-builds).
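As an illustration, a sketch that sets the `image_build_compute` property with the Python SDK v2 (azure-ai-ml); the subscription, resource group, workspace, and compute cluster names are placeholders:

```python
# Sketch: point the workspace at a compute cluster for building Docker
# images when the workspace is behind a private endpoint.
from azure.ai.ml import MLClient
from azure.identity import DefaultAzureCredential

ml_client = MLClient(
    DefaultAzureCredential(), "<subscription-id>", "<resource-group>", "<workspace-name>")

ws = ml_client.workspaces.get("<workspace-name>")
ws.image_build_compute = "<compute-cluster-name>"
ml_client.workspaces.begin_update(ws).result()  # long-running update
```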
1. Configure the defaults for the CLI so that you can avoid passing in the values for your workspace and resource group multiple times.
machine-learning How To Secure Workspace Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-secure-workspace-vnet.md
Previously updated : 08/22/2023 Last updated : 09/29/2023
Azure Machine Learning supports storage accounts configured to use either a private endpoint or a service endpoint.
1. Select __Save__ to save the configuration.

> [!TIP]
-> When using a private endpoint, you can also disable public access. For more information, see [disallow public read access](../storage/blobs/anonymous-read-access-configure.md#allow-or-disallow-public-read-access-for-a-storage-account).
+> When using a private endpoint, you can also disable anonymous access. For more information, see [disallow anonymous access](../storage/blobs/anonymous-read-access-configure.md#allow-or-disallow-anonymous-read-access-for-a-storage-account).
# [Service endpoint](#tab/se)
Azure Machine Learning supports storage accounts configured to use either a priv
1. Select __Save__ to save the configuration. > [!TIP]
-> When using a service endpoint, you can also disable public access. For more information, see [disallow public read access](../storage/blobs/anonymous-read-access-configure.md#allow-or-disallow-public-read-access-for-a-storage-account).
+> When using a service endpoint, you can also disable anonymous access. For more information, see [disallow anonymous access](../storage/blobs/anonymous-read-access-configure.md#allow-or-disallow-anonymous-read-access-for-a-storage-account).
machine-learning How To Use Automl Onnx Model Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-automl-onnx-model-dotnet.md
Previously updated : 10/21/2021 Last updated : 09/21/2023
ONNX is an open-source format for AI models. ONNX supports interoperability betw
- [.NET Core SDK 3.1 or greater](https://dotnet.microsoft.com/download) - Text Editor or IDE (such as [Visual Studio](https://visualstudio.microsoft.com/vs/) or [Visual Studio Code](https://code.visualstudio.com/Download))-- ONNX model. To learn how to train an AutoML ONNX model, see the following [bank marketing classification notebook (SDK v1)](https://github.com/Azure/azureml-examples/blob/v1-archive/v1/python-sdk/tutorials/automl-with-azureml/classification-bank-marketing-all-features/auto-ml-classification-bank-marketing-all-features.ipynb).
+- ONNX model. To learn how to train an AutoML ONNX model, see the following [bank marketing classification notebook](https://github.com/Azure/azureml-examples/blob/main/sdk/python/jobs/automl-standalone-jobs/automl-classification-task-bankmarketing/automl-classification-task-bankmarketing.ipynb).
- [Netron](https://github.com/lutzroeder/netron) (optional) ## Create a C# console application
In this sample, you use the .NET Core CLI to build your application but you can
## Add a reference to the ONNX model
-A way for the console application to access the ONNX model is to add it to the build output directory. To learn more about MSBuild common items, see the [MSBuild guide](/visualstudio/msbuild/common-msbuild-project-items).
+One way for the console application to access the ONNX model is to add it to the build output directory. To learn more about MSBuild common items, see the [MSBuild guide](/visualstudio/msbuild/common-msbuild-project-items). If you don't already have a model, follow [this notebook](https://github.com/Azure/azureml-examples/blob/main/sdk/python/jobs/automl-standalone-jobs/automl-classification-task-bankmarketing/automl-classification-task-bankmarketing-serverless.ipynb) to create an example model.
Add a reference to your ONNX model file in your application
machine-learning How To Log Pipelines Application Insights https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-log-pipelines-application-insights.md
Having your logs in one place will provide a history of exceptions and error me
```python
pip install opencensus-ext-azure
```
-* Create an [Application Insights instance](../../azure-monitor/app/opencensus-python.md) (this doc also contains information on getting the connection string for the resource)
+* Create an [Application Insights instance](/previous-versions/azure/azure-monitor/app/opencensus-python) (this doc also contains information on getting the connection string for the resource)
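Once the exporter is installed and you have a connection string, a minimal sketch of routing Python log records to Application Insights looks like the following (the connection string value is a placeholder):

```python
import logging
from opencensus.ext.azure.log_exporter import AzureLogHandler

logger = logging.getLogger(__name__)
logger.setLevel(logging.INFO)

# Replace with the connection string from your Application Insights resource.
logger.addHandler(AzureLogHandler(connection_string="InstrumentationKey=<your-key>"))

# custom_dimensions show up as queryable properties on the trace record.
logger.warning("Step failed", extra={"custom_dimensions": {"pipeline_step": "train"}})
```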
## Getting Started
machine-learning How To Secure Workspace Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-secure-workspace-vnet.md
Previously updated : 06/17/2022 Last updated : 09/29/2023
Azure Machine Learning supports storage accounts configured to use either a priv
1. Select __Save__ to save the configuration. > [!TIP]
-> When using a private endpoint, you can also disable public access. For more information, see [disallow public read access](../../storage/blobs/anonymous-read-access-configure.md#allow-or-disallow-public-read-access-for-a-storage-account).
+> When using a private endpoint, you can also disable anonymous access. For more information, see [disallow anonymous access](../../storage/blobs/anonymous-read-access-configure.md#allow-or-disallow-anonymous-read-access-for-a-storage-account).
# [Service endpoint](#tab/se)
Azure Machine Learning supports storage accounts configured to use either a priv
1. Select __Save__ to save the configuration. > [!TIP]
-> When using a service endpoint, you can also disable public access. For more information, see [disallow public read access](../../storage/blobs/anonymous-read-access-configure.md#allow-or-disallow-public-read-access-for-a-storage-account).
+> When using a service endpoint, you can also disable anonymous access. For more information, see [disallow anonymous access](../../storage/blobs/anonymous-read-access-configure.md#allow-or-disallow-anonymous-read-access-for-a-storage-account).
managed-instance-apache-cassandra Management Operations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-instance-apache-cassandra/management-operations.md
Our support benefits include:
## Backup and restore
-Snapshot backups are enabled by default and taken every 24 hours. Backups are stored in an internal Azure Blob Storage account and are retained for up to 2 days (48 hours). There's no cost for the initial 2 backups. Additional backups will be charged, see [pricing](https://azure.microsoft.com/pricing/details/managed-instance-apache-cassandra/). To change the backup interval or retention period, or to restore from an existing backup, file a [support request](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/newsupportrequest) in the Azure portal.
+Snapshot backups are enabled by default and taken every 24 hours. Backups are stored in an internal Azure Blob Storage account and are retained for up to 2 days (48 hours). There's no cost for the initial 2 backups. Additional backups are charged; see [pricing](https://azure.microsoft.com/pricing/details/managed-instance-apache-cassandra/). To change the backup interval or retention period, or to restore from an existing backup, file a [support request](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/newsupportrequest) in the Azure portal.
+
+> [!NOTE]
+> The time it takes to respond to a request to restore from backup depends both on the severity of the support case you raise and on the amount of data to be restored. For example, if you raise a Sev-A support case, the SLA for response to the ticket is 15 minutes. However, we don't provide an SLA for the time to complete the restore, because it depends heavily on the volume of data being restored.
> [!WARNING] > Backups can be restored to the same VNet/subnet as your existing cluster, but they cannot be restored to the *same cluster*. Backups can only be restored to **new clusters**. Backups are intended for accidental deletion scenarios, and are not geo-redundant. They are therefore not recommended for use as a disaster recovery (DR) strategy in case of a total regional outage. To safeguard against region-wide outages, we recommend a multi-region deployment. Take a look at our [quickstart for multi-region deployments](create-multi-region-cluster.md).
mysql Concepts Limitations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/concepts-limitations.md
For more details on GIPK and its use cases with [Data-in-Replication](./concepts
- You can update the value of the server parameter [sql_generate_invisible_primary_key](https://dev.mysql.com/doc/refman/8.0/en/server-system-variables.html#sysvar_sql_generate_invisible_primary_key) to 'OFF' by following the steps on how to update any server parameter from the [Azure portal](./how-to-configure-server-parameters-portal.md#configure-server-parameters) or by using the [Azure CLI](./how-to-configure-server-parameters-cli.md#modify-a-server-parameter-value).
- Or you can connect to your Azure Database for MySQL flexible server and run the following command:

  ```sql
  mysql> SET sql_generate_invisible_primary_key=OFF;
  ```
The following are unsupported:
- [CREATE TABLESPACE](https://dev.mysql.com/doc/refman/8.0/en/privileges-provided.html#priv_create-tablespace) - [SHUTDOWN](https://dev.mysql.com/doc/refman/8.0/en/privileges-provided.html#priv_shutdown) - [BACKUP_ADMIN](https://dev.mysql.com/doc/refman/8.0/en/privileges-provided.html#priv_backup-admin) privilege: Granting BACKUP_ADMIN privilege isn't supported for taking backups using any [utility tools](../migrate/how-to-decide-on-right-migration-tools.md). Refer [Supported](././concepts-limitations.md#supported-1) section for list of supported [dynamic privileges](https://dev.mysql.com/doc/refman/8.0/en/privileges-provided.html#privileges-provided-dynamic).-- DEFINER: Requires super privileges to create and is restricted. If importing data using a backup, manually remove the `CREATE DEFINER` commands or use the `--skip-definer` command when performing a mysqldump.
+- DEFINER: Requires super privileges to create and is restricted. If you're importing data using a backup, manually remove the `CREATE DEFINER` commands or use the `--skip-definer` option when you run [mysqlpump](https://dev.mysql.com/doc/refman/5.7/en/mysqlpump.html).
- System databases: The [mysql system database](https://dev.mysql.com/doc/refman/5.7/en/system-schema.html) is read-only and used to support various PaaS functionalities. You can't make changes to the `mysql` system database. - `SELECT ... INTO OUTFILE`: Not supported in the service. - ### Supported - `LOAD DATA INFILE` is supported, but the `[LOCAL]` parameter must be specified and directed to a UNC path (Azure storage mounted through SMB). Additionally, if you're using MySQL client version >= 8.0, you need to include the `--local-infile=1` parameter in your connection string.
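As a hedged illustration of that last point, here's a minimal Python sketch using `pymysql`; the server name, credentials, table, and UNC path are placeholders, and the sketch assumes the CSV file sits on an Azure storage share already mounted through SMB:

```python
import pymysql

# Placeholder UNC path to a file on an SMB-mounted Azure storage share.
path = r"\\<storage-account>.file.core.windows.net\<share>\data.csv"

conn = pymysql.connect(
    host="<server-name>.mysql.database.azure.com",
    user="<admin-user>",
    password="<password>",
    database="<database>",
    local_infile=True,  # client-side equivalent of the --local-infile=1 option
)
try:
    with conn.cursor() as cur:
        # Backslashes in the UNC path must be doubled inside the SQL string literal.
        cur.execute(
            "LOAD DATA LOCAL INFILE '{}' INTO TABLE <table> "
            "FIELDS TERMINATED BY ','".format(path.replace("\\", "\\\\"))
        )
    conn.commit()
finally:
    conn.close()
```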
For the complete list of feature comparisons between a single server and a flexi
- Understand [what's available for compute and storage options in flexible server](concepts-service-tiers-storage.md) - Learn about [Supported MySQL Versions](concepts-supported-versions.md) - Quickstart: [Use the Azure portal to create an Azure Database for MySQL - Flexible Server](quickstart-create-server-portal.md)++
network-watcher Connection Monitor Virtual Machine Scale Set https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/connection-monitor-virtual-machine-scale-set.md
Title: 'Tutorial: Monitor network communication with virtual machine scale set - Azure portal' description: In this tutorial, you'll learn how to use Azure Network Watcher connection monitor tool to monitor network communication with a virtual machine scale set using the Azure portal.-
-tags: azure-resource-manager
+ - Last updated 01/25/2023--
-# Customer intent: I need to monitor communication between a virtual machine scale set and a virtual machine. If the communication fails, I need to know why, so that I can resolve the problem.
+
+#CustomerIntent: I need to monitor communication between a virtual machine scale set and a virtual machine. If the communication fails, I need to know why, so that I can resolve the problem.
# Tutorial: Monitor network communication with a virtual machine scale set using the Azure portal
Connection monitors have these scale limits:
## Clean up resources
-When you no longer need the resources, delete the resource group and all the resources it contains:
+When no longer needed, delete the **myResourceGroup** resource group and all of the resources it contains:
+
+1. In the search box at the top of the portal, enter ***myResourceGroup***. Select **myResourceGroup** from the search results.
-1. In the **Search** box at the top of the Azure portal, enter **myResourceGroup** and then, in the search results list, select it.
1. Select **Delete resource group**.
-1. For **Resource group name**, enter **myResourceGroup**, and then select **Delete**.
-## Next steps
+1. In **Delete a resource group**, enter ***myResourceGroup***, and then select **Delete**.
-In this tutorial, you learned how to monitor a connection between a virtual machine scale set and a VM. You learned that a network security group rule prevented communication to a VM.
+1. Select **Delete** to confirm the deletion of the resource group and all its resources.
-To learn about all the different responses a connection monitor can return, see [response types](network-watcher-connectivity-overview.md#response). You can also monitor a connection between a VM, a fully qualified domain name, a uniform resource identifier, or an IP address. See also:
+## Next step
-* [Analyze monitoring data and set alerts](./connection-monitor-overview.md#analyze-monitoring-data-and-set-alerts)
-* [Diagnose problems in your network](./connection-monitor-overview.md#diagnose-issues-in-your-network)
+To learn how to diagnose and troubleshoot problems with virtual network gateways, advance to the next tutorial:
> [!div class="nextstepaction"] > [Diagnose communication problems between networks](diagnose-communication-problem-between-networks.md)-
network-watcher Diagnose Communication Problem Between Networks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/diagnose-communication-problem-between-networks.md
Title: 'Tutorial: Diagnose communication problem between virtual networks - Azure portal'
-description: In this tutorial, you learn how to use Azure Network Watcher VPN troubleshoot to diagnose a communication problem between two Azure virtual networks connected by Azure VPN gateways.
+description: In this tutorial, you learn how to use Azure Network Watcher VPN troubleshoot to diagnose a communication problem between virtual networks connected by VPN gateways.
Previously updated : 07/17/2023-
-# Customer intent: I need to determine why resources in a virtual network can't communicate with resources in a different virtual network over a VPN connection.
Last updated : 09/28/2023+
+#CustomerIntent: As a network administrator, I want to determine why resources in a virtual network can't communicate with resources in a different virtual network over a VPN connection.
# Tutorial: Diagnose a communication problem between virtual networks using the Azure portal
-Azure VPN gateway is a type of virtual network gateway that you can use to send encrypted traffic between an Azure virtual network and your on-premises locations over the public internet. You can also use VPN gateway to send encrypted traffic between Azure virtual networks over the Microsoft network. A VPN gateway allows you to create multiple connections to on-premises VPN devices and Azure VPN gateways. For more information about the number of connections that you can create with each VPN gateway SKU, see [Gateway SKUs](../../articles/vpn-gateway/vpn-gateway-about-vpngateways.md#gwsku). Whenever you need to troubleshoot an issue with a VPN gateway or one of its connections, you can use Azure Network Watcher VPN troubleshoot to help you checking the VPN gateway or its connections to find and resolve the problem in easy and simple steps.
+This tutorial shows you how to use the Azure Network Watcher [VPN troubleshoot](network-watcher-troubleshoot-overview.md) capability to diagnose and troubleshoot a connectivity issue between two virtual networks. The virtual networks are connected via VPN gateways using VNet-to-VNet connections.
-This tutorial helps you use Azure Network Watcher [VPN troubleshoot](network-watcher-troubleshoot-overview.md) capability to diagnose and troubleshoot a connectivity issue that's preventing two virtual networks from communicating with each other. These two virtual networks are connected via VPN gateways using VNet-to-VNet connections.
In this tutorial, you learn how to: > [!div class="checklist"]
-> * Create virtual networks
> * Create virtual network gateways (VPN gateways) > * Create connections between VPN gateways > * Diagnose and troubleshoot a connectivity issue
In this tutorial, you learn how to:
- An Azure account with an active subscription. If you don't have one, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
-## Sign in to Azure
-
-Sign in to the [Azure portal](https://portal.azure.com).
+## Create VPN gateways
-## Create virtual networks
+In this section, you create two virtual network gateways to connect two virtual networks.
-In this section, you create two virtual networks that you connect later using virtual network gateways.
+### Create first VPN gateway
-### Create first virtual network
+1. Sign in to the [Azure portal](https://portal.azure.com).
-1. In the search box at the top of the portal, enter *virtual networks*. Select **Virtual networks** in the search results.
+1. In the search box at the top of the portal, enter ***virtual network gateways***. Select **Virtual network gateways** from the search results.
- :::image type="content" source="./media/diagnose-communication-problem-between-networks/virtual-network-azure-portal.png" alt-text="Screenshot shows searching for virtual networks in the Azure portal.":::
+ :::image type="content" source="./media/diagnose-communication-problem-between-networks/virtual-network-gateway-azure-portal.png" alt-text="Screenshot shows searching for virtual network gateways in the Azure portal.":::
-1. Select **+ Create**. In **Create virtual network**, enter or select the following values in the **Basics** tab:
+1. Select **+ Create**. In **Create virtual network gateway**, enter or select the following values in the **Basics** tab:
| Setting | Value | | | | | **Project details** | | | Subscription | Select your Azure subscription. |
- | Resource Group | Select **Create new**. </br> Enter *myResourceGroup* in **Name**. </br> Select **OK**. |
| **Instance details** | |
- | Name | Enter *myVNet1*. |
+ | Name | Enter ***VNet1GW***. |
| Region | Select **East US**. |
+ | Gateway type | Select **VPN**. |
+ | VPN type | Select **Route-based**. |
+ | SKU | Select **VpnGw1**. |
+ | Generation | Select **Generation1**. |
+ | Virtual network | Select **Create virtual network**. Enter ***myVNet1*** in **Name**. <br> Select **Create new** for the resource group. Enter ***myResourceGroup*** and select **OK**. <br> In **Address Range**, enter ***10.1.0.0/16***. <br> Under **Subnets**, enter ***GatewaySubnet*** for **Subnet name** and ***10.1.1.0/24*** for **Address range**. <br> Select **OK** to close **Create virtual network**. |
+ | **Public IP address** | |
+ | Public IP address | Select **Create new**. |
+ | Public IP address name | Enter ***VNet1GW-ip***. |
+ | Enable active-active mode | Select **Disabled**. |
+ | Configure BGP | Select **Disabled**. |
-1. Select the **IP Addresses** tab, or select **Next: IP Addresses** button at the bottom of the page.
-
-1. Enter the following values in the **IP Addresses** tab:
-
- | Setting | Value |
- | | |
- | IPv4 address space | Enter *10.1.0.0/16*. |
- | Subnet name | Enter *mySubnet*. |
- | Subnet address range | Enter *10.1.0.0/24*. |
-
-1. Select the **Review + create** tab or select the **Review + create** button at the bottom of the page.
+1. Select **Review + create**.
-1. Review the settings, and then select **Create**.
+1. Review the settings, and then select **Create**. A gateway can take 45 minutes or more to fully create and deploy.
-### Create second virtual network
+### Create second VPN gateway
-Repeat the previous steps to create the second virtual network using the following values:
+To create the second VPN gateway, repeat the previous steps you used to create the first VPN gateway with the following values:
| Setting | Value | | | |
-| Name | **myVNet2** |
-| IPv4 address space | **10.2.0.0/16** |
-| Subnet name | **mySubnet** |
-| Subnet address range | **10.2.0.0/24** |
+| Name | **VNet2GW** |
+| Resource group | **myResourceGroup** |
+| Virtual network | **myVNet2** |
+| Virtual network address range | **10.2.0.0/16** |
+| Gateway subnet address range | **10.2.1.0/24** |
+| Public IP address name | **VNet2GW-ip** |
## Create a storage account and a container
In this section, you create a storage account, then you create a container in it
If you have a storage account that you want to use, you can skip the following steps and go to [Create VPN gateways](#create-vpn-gateways).
-1. In the search box at the top of the portal, enter *storage accounts*. Select **Storage accounts** in the search results.
+1. In the search box at the top of the portal, enter ***storage accounts***. Select **Storage accounts** in the search results.
1. Select **+ Create**. In **Create a storage account**, enter or select the following values in the **Basics** tab:
If you have a storage account that you want to use, you can skip the following s
| Setting | Value | | | |
- | Name | Enter *vpn*. |
+ | Name | Enter ***vpn***. |
| Public access level | Select **Private (no anonymous access)**. |
-## Create VPN gateways
-
-In this section, you create two VPN gateways that will be used to connect the two virtual networks you created previously.
-
-### Create first VPN gateway
-
-1. In the search box at the top of the portal, enter *virtual network gateways*. Select **Virtual network gateways** in the search results.
-
-1. Select **+ Create**. In **Create virtual network gateway**, enter or select the following values in the **Basics** tab:
-
- | Setting | Value |
- | | |
- | **Project details** | |
- | Subscription | Select your Azure subscription. |
- | **Instance details** | |
- | Name | Enter *VNet1GW*. |
- | Region | Select **East US**. |
- | Gateway type | Select **VPN**. |
- | VPN type | Select **Route-based**. |
- | SKU | Select **VpnGw1**. |
- | Generation | Select **Generation1**. |
- | Virtual network | Select **myVNet1**. |
- | Gateway subnet address range | Enter *10.1.1.0/27*. |
- | **Public IP address** | |
- | Public IP address | Select **Create new**. |
- | Public IP address name | Enter *VNet1GW-ip*. |
- | Enable active-active mode | Select **Disabled**. |
- | Configure BGP | Select **Disabled**. |
-
-1. Select **Review + create**.
-
-1. Review the settings, and then select **Create**. A gateway can take 45 minutes or more to fully create and deploy.
-
-### Create second VPN gateway
-
-To create the second VPN gateway, repeat the previous steps you used to create the first VPN gateway with the following values:
-
-| Setting | Value |
-| | |
-| Name | **VNet2GW**. |
-| Virtual network | **myVNet2**. |
-| Gateway subnet address range | **10.2.1.0/27**. |
-| Public IP address name | **VNet2GW-ip**. |
- ## Create gateway connections After creating **VNet1GW** and **VNet2GW** virtual network gateways, you can create connections between them to allow communication over secure IPsec/IKE tunnel between **VNet1** and **VNet2** virtual networks. To create the IPsec/IKE tunnel, you create two connections:
After creating **VNet1GW** and **VNet2GW** virtual network gateways, you can cre
| Setting | Value | | | |
- | Name | Enter *to-VNet2*. |
+ | Name | Enter ***to-VNet2***. |
| Connection type | Select **VNet-to-VNet**. | | Second virtual network gateway | Select **VNet2GW**. |
- | Shared key (PSK) | Enter *123*. |
+ | Shared key (PSK) | Enter ***123***. |
1. Select **OK**.
Fix the problem by correcting the key on **to-VNet1** connection to match the ke
1. Under **Settings**, select **Shared key**.
-1. In **Shared key (PSK)**, enter *123* and then select **Save**.
+1. In **Shared key (PSK)**, enter ***123*** and then select **Save**.
:::image type="content" source="./media/diagnose-communication-problem-between-networks/correct-shared-key.png" alt-text="Screenshot shows correcting and saving the shared key of a VPN connection in the Azure portal.":::
Fix the problem by correcting the key on **to-VNet1** connection to match the ke
## Clean up resources
-When no longer needed, delete the resource group and all of the resources it contains:
+When no longer needed, delete the **myResourceGroup** resource group and all of the resources it contains:
-1. Enter ***myResourceGroup*** in the search box at the top of the portal. When you see **myResourceGroup** in the search results, select it.
+1. In the search box at the top of the portal, enter ***myResourceGroup***. Select **myResourceGroup** from the search results.
1. Select **Delete resource group**.
When no longer needed, delete the resource group and all of the resources it con
1. Select **Delete** to confirm the deletion of the resource group and all its resources.
-## Next steps
-
-In this tutorial, you learned how to diagnose a connectivity problem between two connected virtual networks via VPN gateways. For more information about connecting virtual networks using VPN gateways, see [VNet-to-VNet connections](../../articles/vpn-gateway/design.md#V2V).
+## Next step
-To learn how to log network communication to and from a virtual machine so that you can review the log for anomalies, advance to the next tutorial.
+To learn how to log network communication to and from a virtual machine so that you can review the logs for anomalies, advance to the next tutorial.
> [!div class="nextstepaction"]
-> [Log network traffic to and from a VM](network-watcher-nsg-flow-logging-portal.md)
+> [Log network traffic to and from a virtual machine](nsg-flow-logs-tutorial.md)
network-watcher Diagnose Vm Network Routing Problem Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/diagnose-vm-network-routing-problem-cli.md
Create a VM with [az vm create](/cli/azure/vm#az-vm-create). If SSH keys do not
az vm create \ --resource-group myResourceGroup \ --name myVm \
- --image UbuntuLTS \
+ --image Ubuntu2204 \
--generate-ssh-keys ```
network-watcher Diagnose Vm Network Routing Problem https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/diagnose-vm-network-routing-problem.md
Previously updated : 09/28/2023 Last updated : 09/29/2023
-# CustomerIntent: As an Azure administrator, I want to diagnose virtual machine (VM) network routing problem that prevents it from communicating with the internet.
+#CustomerIntent: As an Azure administrator, I want to diagnose a virtual machine (VM) network routing problem that prevents it from communicating with the internet.
# Tutorial: Diagnose a virtual machine network routing problem using the Azure portal In this tutorial, you use the Azure Network Watcher [next hop](network-watcher-next-hop-overview.md) tool to troubleshoot and diagnose a VM routing problem that's preventing it from correctly communicating with other resources. Next hop shows you that the routing problem is caused by a [custom route](../virtual-network/virtual-networks-udr-overview.md?toc=/azure/network-watcher/toc.json#custom-routes). + In this tutorial, you learn how to:
In this tutorial, you learn how to:
If you prefer, you can diagnose a virtual machine network routing problem using the [Azure CLI](diagnose-vm-network-routing-problem-cli.md) or [Azure PowerShell](diagnose-vm-network-routing-problem-powershell.md) versions of the tutorial.
+If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
+ ## Prerequisites -- An Azure account with an active subscription. If you don't have one, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
+- An Azure account with an active subscription.
## Create a virtual network
In this section, you create a virtual network.
1. Sign in to the [Azure portal](https://portal.azure.com).
-1. In the search box at the top of the portal, enter ***virtual networks***. Select **Virtual networks** in the search results.
+1. In the search box at the top of the portal, enter ***virtual networks***. Select **Virtual networks** from the search results.
:::image type="content" source="./media/diagnose-vm-network-routing-problem/virtual-network-azure-portal.png" alt-text="Screenshot shows searching for virtual networks in the Azure portal.":::
When no longer needed, delete **myResourceGroup** resource group and all of the
To learn how to monitor communication between two virtual machines, advance to the next tutorial: > [!div class="nextstepaction"]
-> [Monitor a network connection](monitor-vm-communication.md)
+> [Monitor network communication between virtual machines](monitor-vm-communication.md)
network-watcher Monitor Vm Communication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/monitor-vm-communication.md
Previously updated : 08/24/2023 Last updated : 09/29/2023+ #CustomerIntent: As an Azure administrator, I want to monitor the communication between two virtual machines in Azure so that I can be alerted if the communication fails and take action. I also want to know why the communication failed, so that I can resolve the problem.
In this tutorial, you learn how to:
> * Monitor communication between the two virtual machines > * Diagnose a communication problem between the two virtual machines If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
If you don't have an Azure subscription, create a [free account](https://azure.m
- An Azure account with an active subscription.
-## Sign in to Azure
-
-Sign in to the [Azure portal](https://portal.azure.com).
- ## Create a virtual network In this section, you create **myVNet** virtual network with two subnets and an Azure Bastion host. The first subnet is used for the virtual machine, and the second subnet is used for the Bastion host.
-1. In the search box at the top of the portal, enter *virtual networks*. Select **Virtual networks** in the search results.
+1. Sign in to the [Azure portal](https://portal.azure.com).
+
+1. In the search box at the top of the portal, enter *virtual networks*. Select **Virtual networks** from the search results.
:::image type="content" source="./media/monitor-vm-communication/virtual-network-azure-portal.png" alt-text="Screenshot shows searching for virtual networks in the Azure portal.":::
The connection monitor you created in the previous section monitors the connecti
## Clean up resources
-When no longer needed, delete the resource group and all of the resources it contains:
+When no longer needed, delete the **myResourceGroup** resource group and all of the resources it contains:
-1. In the search box at the top of the portal, enter *myResourceGroup*. When you see **myResourceGroup** in the search results, select it.
+1. In the search box at the top of the portal, enter ***myResourceGroup***. Select **myResourceGroup** from the search results.
1. Select **Delete resource group**.
-1. In **Delete a resource group**, enter *myResourceGroup*, and then select **Delete**.
+1. In **Delete a resource group**, enter ***myResourceGroup***, and then select **Delete**.
1. Select **Delete** to confirm the deletion of the resource group and all its resources.
-## Next steps
-
-In this tutorial, you learned how to monitor a connection between two virtual machines. You learned that connection monitor detected the connection failure to port 22 on target virtual machine after you stopped it. To learn about all of the different metrics that connection monitor can return, see [Metrics in Azure Monitor](connection-monitor-overview.md#metrics-in-azure-monitor).
+## Next step
-To learn how to diagnose and troubleshoot problems with virtual network gateways, advance to the next tutorial.
+To learn how to monitor virtual machine scale set network communication, advance to the next tutorial:
> [!div class="nextstepaction"]
-> [Diagnose communication problems between networks](diagnose-communication-problem-between-networks.md)
+> [Monitor network communication with a scale set](connection-monitor-virtual-machine-scale-set.md)
network-watcher Network Insights Troubleshooting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-insights-troubleshooting.md
Title: Azure Monitor Network Insights troubleshooting
-description: Troubleshooting steps for issues that may arise while using Network insights
---
+ Title: Troubleshoot network insights
+
+description: Learn how to troubleshoot some of the common issues that you may encounter when using Azure Monitor network insights.
Previously updated : 09/29/2022++ Last updated : 09/29/2023+
+#CustomerIntent: As an Azure administrator, I want to learn how to troubleshoot some of the common issues that I may have when using Azure Monitor network insights so that I can resolve those issues.
-# Troubleshooting Network Insights
+# Troubleshoot network insights
-For general troubleshooting guidance, see the dedicated workbook-based insights [troubleshooting article](../azure-monitor/insights/troubleshoot-workbooks.md).
+For general troubleshooting guidance, see [Troubleshooting workbook-based insights](../azure-monitor/insights/troubleshoot-workbooks.md).
-This section will help you diagnose and troubleshoot some common problems you might encounter when you use Azure Monitor Network Insights.
+This article helps you diagnose and troubleshoot some common problems you might encounter when you use Azure Monitor network insights.
## How do I resolve performance problems or failures?
-To learn about troubleshooting any networking-related problems you identify with Azure Monitor Network Insights, see the troubleshooting documentation for the malfunctioning resource.
-
-For more troubleshooting articles about these services, see the other articles in the Troubleshooting section of the table of contents for the service.
-- Application Gateway-- Azure ExpressRoute-- Azure Firewall-- Azure Private Link-- Connections-- Load Balancer-- Local Network Gateway-- Network Interface-- Network Security Groups-- Public IP addresses-- Route Table / UDR-- Traffic Manager-- Virtual Network-- Virtual Network NAT-- Virtual WAN-- ER/VPN Gateway-- Virtual Hub-
-## How do I make changes or add visualizations to Azure Monitor Network Insights?
+To learn about troubleshooting any networking-related problems you identify using Azure Monitor network insights, see the troubleshooting documentation for the malfunctioning resource.
+
+## How do I make changes or add visualizations to Azure Monitor network insights?
To make changes, select **Edit Mode** to modify the workbook. You can then save your changes as a new workbook that's tied to a designated subscription and resource group. ## What's the time grain after I pin any part of the workbooks?
-Azure Monitor Network Insights uses the **Auto** time grain, so the time grain is based on the selected time range.
+Azure Monitor network insights uses the **Auto** time grain, so the time grain is based on the selected time range.
## What's the time range when any part of a workbook is pinned? The time range depends on the dashboard settings.
-## What if I want to see other data or make my own visualizations? How can I make changes to Azure Monitor Network Insights?
+## What if I want to see other data or make my own visualizations? How can I make changes to Azure Monitor network insights?
You can edit the workbook you see in any side-panel or detailed metric view by using the edit mode. You can then save your changes as a new workbook.
-## Next steps
-- Learn more about network monitoring: [What is Azure Network Watcher?](../network-watcher/network-watcher-monitoring-overview.md)
+## Related content
+
+- To learn more about network insights, see [Azure Monitor network insights](network-insights-overview.md).
+- To learn more about Azure Network Watcher, see [What is Azure Network Watcher?](network-watcher-overview.md)
network-watcher Network Watcher Intrusion Detection Open Source Tools https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-watcher-intrusion-detection-open-source-tools.md
Title: Perform network intrusion detection with open source tools
+ Title: Perform network intrusion detection using open source tools
-description: This article describes how to use Azure Network Watcher and open source tools to perform network intrusion detection
+description: Learn how to use Azure Network Watcher and open source tools to perform network intrusion detection.
+ - Previously updated : 09/15/2022-- Last updated : 09/29/2023
-# Perform network intrusion detection with Network Watcher and open source tools
+# Perform network intrusion detection using Azure Network Watcher and open source tools
-Packet captures are a key component for implementing network intrusion detection systems (IDS) and performing Network Security Monitoring (NSM). There are several open source IDS tools that process packet captures and look for signatures of possible network intrusions and malicious activity. Using the packet captures provided by Network Watcher, you can analyze your network for any harmful intrusions or vulnerabilities.
+Packet captures are a key component for implementing network intrusion detection systems (IDS) and performing network security monitoring (NSM). There are several open source IDS tools that process packet captures and look for signatures of possible network intrusions and malicious activity. Using the packet captures provided by Azure Network Watcher, you can analyze your network for any harmful intrusions or vulnerabilities.
-One such open source tool is Suricata, an IDS engine that uses rulesets to monitor network traffic and triggers alerts whenever suspicious events occur. Suricata offers a multi-threaded engine, meaning it can perform network traffic analysis with increased speed and efficiency. For more details about Suricata and its capabilities, visit their website at https://suricata.io/.
+One such open source tool is Suricata, an IDS engine that uses rulesets to monitor network traffic and triggers alerts whenever suspicious events occur. Suricata offers a multi-threaded engine to perform network traffic analysis with increased speed and efficiency. For more details about Suricata and its capabilities, visit the Suricata website at https://suricata.io/.
## Scenario This article explains how to set up your environment to perform network intrusion detection using Network Watcher, Suricata, and the Elastic Stack. Network Watcher provides you with the packet captures used to perform network intrusion detection. Suricata processes the packet captures and triggers alerts based on packets that match its given ruleset of threats. These alerts are stored in a log file on your local machine. Using the Elastic Stack, the logs generated by Suricata can be indexed and used to create a Kibana dashboard, providing you with a visual representation of the logs and a means to quickly gain insights into potential network vulnerabilities.
-![simple web application scenario][1]
Both open source tools can be set up on an Azure VM, allowing you to perform this analysis within your own Azure network environment.
Both open source tools can be set up on an Azure VM, allowing you to perform thi
### Install Suricata
-For all other methods of installation, visit https://suricata.readthedocs.io/en/suricata-5.0.2/quickstart.html#installation
+For all other methods of installation, see the [Suricata installation quickstart guide](https://suricata.readthedocs.io/en/suricata-5.0.2/quickstart.html#installation).
1. In the command-line terminal of your VM, run the following commands:
For all other methods of installation, visit https://suricata.readthedocs.io/en/
### Download the Emerging Threats ruleset
-At this stage, we do not have any rules for Suricata to run. You can create your own rules if there are specific threats to your network you would like to detect, or you can also use developed rule sets from a number of providers, such as Emerging Threats, or VRT rules from Snort. We use the freely accessible Emerging Threats ruleset here:
+At this stage, we don't have any rules for Suricata to run. You can create your own rules if there are specific threats to your network that you would like to detect, or you can use rule sets developed by a number of providers, such as Emerging Threats, or VRT rules from Snort. We use the freely accessible Emerging Threats ruleset here:
Download the rule set and copy it into the directory:
While the logs that Suricata produces contain valuable information about what's
#### Install Elasticsearch
-1. The Elastic Stack from version 5.0 and above requires Java 8. Run the command `java -version` to check your version. If you do not have Java installed, refer to documentation on the [Azure-supported JDKs](/azure/developer/java/fundamentals/java-support-on-azure).
+1. The Elastic Stack from version 5.0 and above requires Java 8. Run the command `java -version` to check your version. If you don't have Java installed, refer to documentation on the [Azure-supported JDKs](/azure/developer/java/fundamentals/java-support-on-azure).
1. Download the correct binary package for your system:
For this article, we have provided a sample dashboard for you to view trends and
You can also create your own visualizations and dashboards tailored towards metrics of your own interest. Read more about creating Kibana visualizations in Kibana's [official documentation](https://www.elastic.co/guide/en/kibana/current/introduction.html).
-![kibana dashboard][2]
### Visualize IDS alert logs
The sample dashboard provides several visualizations of the Suricata alert logs:
1. Alerts by GeoIP – a map showing the distribution of alerts by their country/region of origin based on geographic location (determined by IP)
- ![geo ip][3]
+ :::image type="content" source="./media/network-watcher-intrusion-detection-open-source-tools/figure3.png" alt-text="Screenshot shows geo IP." lightbox="./media/network-watcher-intrusion-detection-open-source-tools/figure3.png":::
1. Top 10 Alerts – a summary of the 10 most frequently triggered alerts and their descriptions. Clicking an individual alert filters down the dashboard to the information pertaining to that specific alert.
- ![image 4][4]
+ :::image type="content" source="./media/network-watcher-intrusion-detection-open-source-tools/figure4.png" alt-text="Screenshot shows most frequent triggered alerts.":::
1. Number of Alerts – the total count of alerts triggered by the ruleset
- ![image 5][5]
+ :::image type="content" source="./media/network-watcher-intrusion-detection-open-source-tools/figure5.png" alt-text="Screenshot shows the number of Alerts.":::
1. Top 20 Source/Destination IPs/Ports – pie charts showing the top 20 IPs and ports that alerts were triggered on. You can filter down on specific IPs/ports to see how many and what kind of alerts are being triggered.
- ![image 6][6]
+ :::image type="content" source="./media/network-watcher-intrusion-detection-open-source-tools/figure6.png" alt-text="Screenshot shows pie charts of the top 20 IPs and ports that alerts were triggered on." lightbox="./media/network-watcher-intrusion-detection-open-source-tools/figure6.png":::
1. Alert Summary – a table summarizing specific details of each individual alert. You can customize this table to show other parameters of interest for each alert.
- ![image 7][7]
+ :::image type="content" source="./media/network-watcher-intrusion-detection-open-source-tools/figure7.png" alt-text="Screenshot shows a summary table with details about each individual alert." lightbox="./media/network-watcher-intrusion-detection-open-source-tools/figure7.png":::
For more documentation on creating custom visualizations and dashboards, see [Kibana's official documentation](https://www.elastic.co/guide/en/kibana/current/introduction.html).
For more documentation on creating custom visualizations and dashboards, see [Ki
By combining packet captures provided by Network Watcher and open source IDS tools such as Suricata, you can perform network intrusion detection for a wide range of threats. These dashboards allow you to quickly spot trends and anomalies within your network, as well as dig into the data to discover root causes of alerts, such as malicious user agents or vulnerable ports. With this extracted data, you can make informed decisions on how to react to and protect your network from any harmful intrusion attempts, and create rules to prevent future intrusions to your network.
-## Next steps
-
-Learn how to trigger packet captures based on alerts by visiting [Use packet capture to do proactive network monitoring with Azure Functions](network-watcher-alert-triggered-packet-capture.md)
-
-Learn how to visualize your NSG flow logs with Power BI by visiting [Visualize NSG flows logs with Power BI](network-watcher-visualize-nsg-flow-logs-power-bi.md)
-
+## Next step
+Learn how to trigger packet captures based on alerts:
-<!-- images -->
-[1]: ./media/network-watcher-intrusion-detection-open-source-tools/figure1.png
-[2]: ./media/network-watcher-intrusion-detection-open-source-tools/figure2.png
-[3]: ./media/network-watcher-intrusion-detection-open-source-tools/figure3.png
-[4]: ./media/network-watcher-intrusion-detection-open-source-tools/figure4.png
-[5]: ./media/network-watcher-intrusion-detection-open-source-tools/figure5.png
-[6]: ./media/network-watcher-intrusion-detection-open-source-tools/figure6.png
-[7]: ./media/network-watcher-intrusion-detection-open-source-tools/figure7.png
+> [!div class="nextstepaction"]
+> [Use packet capture to do proactive network monitoring with Azure Functions](network-watcher-alert-triggered-packet-capture.md)
network-watcher Quickstart Configure Network Security Group Flow Logs From Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/quickstart-configure-network-security-group-flow-logs-from-bicep.md
Title: 'Quickstart: Configure Network Watcher network security group flow logs using a Bicep file'
-description: Learn how to enable network security group (NSG) flow logs programmatically using Bicep and Azure PowerShell.
-
+ Title: 'Quickstart: Configure NSG flow logs using a Bicep file'
+
+description: In this quickstart, you learn how to enable NSG flow logs programmatically using a Bicep file to log the traffic flowing through a network security group.
Previously updated : 08/26/2022- -
-#Customer intent: I need to enable the network security group flow logs by using a Bicep file.
+ Last updated : 09/29/2023++
+#CustomerIntent: As an Azure administrator, I need to enable NSG flow logs using a Bicep file so that I can log the traffic flowing through a network security group.
-# Quickstart: Configure network security group flow logs using a Bicep file
+# Quickstart: Configure Azure Network Watcher NSG flow logs using a Bicep file
-In this quickstart, you learn how to enable [network security group (NSG) flow logs](network-watcher-nsg-flow-logging-overview.md) by using a Bicep file
+In this quickstart, you learn how to enable [NSG flow logs](network-watcher-nsg-flow-logging-overview.md) by using a Bicep file.
[!INCLUDE [About Bicep](../../includes/resource-manager-quickstart-bicep-introduction.md)]
-We start with an overview of the properties of the NSG flow log object. We provide a sample Bicep file. Then, we deploy the Bicep file.
- ## Prerequisites
-If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
+- An Azure account with an active subscription. If you don't have one, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
+
+- To deploy the Bicep file, you need either the Azure CLI or Azure PowerShell installed.
+
+ # [CLI](#tab/cli)
+
+ 1. [Install Azure CLI locally](/cli/azure/install-azure-cli) to run the commands.
+
+ 1. Sign in to Azure using the [az login](/cli/azure/reference-index#az-login) command.
+
+ # [PowerShell](#tab/powershell)
+
+ 1. [Install Azure PowerShell locally](/powershell/azure/install-azure-powershell) to run the cmdlets.
+
+ 1. Sign in to Azure using the [Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount) cmdlet.
+
+
## Review the Bicep file
-The Bicep file that we use in this quickstart is from [Azure Quickstart Templates](https://azure.microsoft.com/resources/templates/networkwatcher-flowlogs-create/).
+This quickstart uses the [Create NSG flow logs](https://github.com/Azure/azure-quickstart-templates/blob/master/quickstarts/microsoft.network/networkwatcher-flowLogs-create/main.bicep) Bicep template from [Azure Quickstart Templates](https://azure.microsoft.com/resources/templates/networkwatcher-flowlogs-create/).
:::code language="bicep" source="~/quickstart-templates/quickstarts/microsoft.network/networkwatcher-flowLogs-create/main.bicep" range="1-67" highlight="51-67":::
-These resources are defined in the Bicep file:
+The following resources are defined in the Bicep file:
- [Microsoft.Storage/storageAccounts](/azure/templates/microsoft.storage/storageaccounts?pivots=deployment-language-bicep) - [Microsoft.Network networkWatchers](/azure/templates/microsoft.network/networkwatchers?tabs=bicep&pivots=deployment-language-bicep) - [Microsoft.Network networkWatchers/flowLogs](/azure/templates/microsoft.network/networkwatchers/flowlogs?tabs=bicep&pivots=deployment-language-bicep)
-The highlighted code in the preceding sample shows an NSG flow resource definition.
+The highlighted code in the preceding sample shows an NSG flow log resource definition.
## Deploy the Bicep file
-This tutorial assumes that you have a network security group that you can enable flow logging on.
+This quickstart assumes that you have a network security group that you can enable flow logging on.
1. Save the Bicep file as **main.bicep** to your local computer. 1. Deploy the Bicep file using either Azure CLI or Azure PowerShell.
- # [CLI](#tab/CLI)
+ # [CLI](#tab/cli)
```azurecli az group create --name exampleRG --location eastus az deployment group create --resource-group exampleRG --template-file main.bicep ```
- # [PowerShell](#tab/PowerShell)
+ # [PowerShell](#tab/powershell)
```azurepowershell New-AzResourceGroup -Name exampleRG -Location eastus
This tutorial assumes that you have a network security group that you can enable
- You will be prompted to enter the resource ID of the existing network security group. The syntax of the network security group resource ID is:
+ You'll be prompted to enter the resource ID of the existing network security group. The syntax of the network security group resource ID is:
```json "/subscriptions/<subscription-id>/resourceGroups/<resource-group-name>/providers/Microsoft.Network/networkSecurityGroups/<network-security-group-name>"
You have two options to see whether your deployment succeeded:
- Your console shows `ProvisioningState` as `Succeeded`. - Go to the [NSG flow logs portal page](https://portal.azure.com/#blade/Microsoft_Azure_Network/NetworkWatcherMenuBlade/flowLogs) to confirm your changes.
-If there were issues with the deployment, see [Troubleshoot common Azure deployment errors with Azure Resource Manager](../azure-resource-manager/troubleshooting/common-deployment-errors.md).
+If there are issues with the deployment, see [Troubleshoot common Azure deployment errors with Azure Resource Manager](../azure-resource-manager/troubleshooting/common-deployment-errors.md).
## Clean up resources
-You can delete Azure resources by using complete deployment mode. To delete a flow logs resource, specify a deployment in complete mode without including the resource you want to delete. Read more about [complete deployment mode](../azure-resource-manager/templates/deployment-modes.md#complete-mode).
+You can delete Azure resources using complete deployment mode. To delete a flow logs resource, specify a deployment in complete mode without including the resource you want to delete. Read more about [complete deployment mode](../azure-resource-manager/templates/deployment-modes.md#complete-mode).
You also can disable an NSG flow log in the Azure portal: 1. Sign in to the Azure portal.
-1. Select **All services**. In the **Filter** box, enter **network watcher**. In the search results, select **Network Watcher**.
-1. Under **Logs**, select **NSG flow logs**.
-1. In the list of NSGs, select the NSG for which you want to disable flow logs.
-1. Under **Flow logs settings**, select **Off**.
-1. Select **Save**.
-## Next steps
+1. In the search box at the top of the portal, enter ***network watcher***. Select **Network Watcher** from the search results.
+
+1. Under **Logs**, select **Flow logs**.
+
+1. In the list of flow logs, select the flow log that you want to disable.
+
+1. Select **Disable**.
+
+## Related content
-In this quickstart, you learned how to enable NSG flow logs by using a Bicep file. Next, learn how to visualize your NSG flow data by using one of these options:
+To learn how to visualize your NSG flow logs data, see:
-- [Microsoft Power BI](network-watcher-visualize-nsg-flow-logs-power-bi.md)-- [Open-source tools](network-watcher-visualize-nsg-flow-logs-open-source-tools.md)-- [Azure Traffic Analytics](traffic-analytics.md)
+- [Visualizing NSG flow logs using Power BI](network-watcher-visualize-nsg-flow-logs-power-bi.md).
+- [Visualize NSG flow logs using open source tools](network-watcher-visualize-nsg-flow-logs-open-source-tools.md).
+- [Traffic Analytics](traffic-analytics.md).
notification-hubs Browser Push https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/notification-hubs/browser-push.md
+
+ Title: Send browser (web push) notifications with Azure Notification Hubs
+description: Learn about support for browser push notifications in Azure Notification Hubs.
+
+documentationcenter: .net
+++++
+ mobile-multiple
+ Last updated : 09/29/2023++
+ms.lastreviewed: 09/29/2023
++
+# Web push notifications with Azure Notification Hubs
+
+This article describes how to send browser push notifications to single users through Azure Notification Hubs.
+
+At a high level, the process is:
+
+1. [Set credentials](#set-credentials):
+ - [In the Azure portal](#set-credentials-in-azure-portal)
+ - [Using the REST API](#set-credentials-using-rest-api)
+ - Using the .NET SDK
+
+2. [Create registrations and installations](#create-registrations-and-installations).
+
+3. [Send push notifications](#send-push-notifications):
+ - [Direct sends](#create-direct-sends)
+ - [Batch (audience) sends](#create-audience-sends)
+ - [Debug/test sends](#create-debugtest-sends)
+
+## Overview
+
+Web push (or browser push) is a type of notification that customers get on their desktop browsers, or in some cases mobile browsers, on a per-website basis.
+
+Azure Notification Hubs now supports [*browser push*](https://developers.google.com/web/ilt/pwa/introduction-to-push-notifications) for all major browsers, including Microsoft Edge, Google Chrome, and Mozilla Firefox. Apple Safari isn't included. For Apple Safari, you can use existing APNS support as described in [Configuring Safari Push Notifications](https://developer.apple.com/library/archive/documentation/NetworkingInternet/Conceptual/NotificationProgrammingGuideForWebsites/PushNotifications/PushNotifications.html#//apple_ref/doc/uid/TP40013225-CH3-SW1), with certificate-based authentication.
+
+Browser push is supported across platforms on devices with the following operating systems and browsers.
+
+Browser push support on laptop computers:
+
+| Operating system | Browsers |
+|||
+| Windows OS | Google Chrome v48+<br>Microsoft Edge v17+<br>Mozilla Firefox v44+<br>Safari v7+<br>Opera v42+ |
+| macOS | Chrome v48+<br>Firefox v44+<br>Safari v7+<br>Opera v42+ |
+| Linux OS | Chrome v48+<br>Firefox v44+<br>Safari v7+<br>Opera v42+ |
+
+Browser push support on tablet PCs:
+
+| Operating system | Browsers |
+||-|
+| Windows OS | Chrome v48+<br>Firefox v44+<br>Opera v42+ |
+| iOS | Not supported. |
+| Android OS | Chrome v48+<br>Firefox v44+<br>Opera v42+ |
+
+Browser push support on mobile devices:
+
+| Operating system | Browsers |
+||-|
+| iOS | Not supported. |
+| Android OS | Chrome v48+<br>Firefox v44+<br>Opera v42+ |
+
+## Set credentials
+
+To subscribe to browser push notifications on your web site, you can use VAPID keys. You can generate VAPID credentials by using services such as the [VAPID key generator](https://www.attheminute.com/vapid-key-generator/). The credentials should look similar to the following:
+
+```json
+{
+ "location": "South Central US",
+ "properties": {
+ "browserCredential": {
+ "properties": {
+ "subject": "mailto:email@microsoft.com",
+ "vapidPublicKey": "some-vapid-public-key",
+ "vapidPrivateKey":"some-vapid-private-key"
+ }
+ }
+ }
+}
+```
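+
+If you prefer to generate the key pair yourself rather than use an online service, here's a minimal Python sketch; it's an illustration of one possible approach (assuming the `cryptography` package), not part of the Notification Hubs SDK. VAPID keys are a P-256 key pair encoded as unpadded URL-safe base64:
+
+```python
+import base64
+
+from cryptography.hazmat.primitives import serialization
+from cryptography.hazmat.primitives.asymmetric import ec
+
+# Generate a P-256 key pair; the VAPID public key is the uncompressed EC point,
+# and the private key is the raw 32-byte private value, both base64url-encoded.
+private_key = ec.generate_private_key(ec.SECP256R1())
+
+public_point = private_key.public_key().public_bytes(
+    serialization.Encoding.X962, serialization.PublicFormat.UncompressedPoint
+)
+vapid_public_key = base64.urlsafe_b64encode(public_point).rstrip(b"=").decode()
+
+private_value = private_key.private_numbers().private_value.to_bytes(32, "big")
+vapid_private_key = base64.urlsafe_b64encode(private_value).rstrip(b"=").decode()
+
+print(vapid_public_key)
+print(vapid_private_key)
+```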
+
+### Set credentials in Azure portal
+
+You can set credentials for browser push in the Azure portal by entering your VAPID keys on the hub's **Browser (Web Push)** blade.
+
+To set browser push credentials in the portal, follow these steps:
+
+1. In the [Azure portal](https://portal.azure.com), open the **Browser (Web Push)** blade in your notification hub.
+
+ [![Screenshot showing the Browser (Web Push) blade in Notification Hubs.](media/browser-push/notification-hubs-browser-web-push.png)](media/browser-push/notification-hubs-browser-web-push.png#lightbox)
+
+1. Enter your existing VAPID keys, or generate a new VAPID key pair using a service such as the [VAPID Key Generator](https://www.attheminute.com/vapid-key-generator/).
+
+1. Select **Save**.
+
+### Set credentials using REST API
+
+You can also set the credentials for browser push by using the REST API; for example, with the [Create Or Update Hub REST API](/rest/api/notificationhubs/notification-hubs/create-or-update) method, the Azure Resource Manager API, or the V2 resource provider (RP).
+
+Enter the credentials in this format, providing the subscription ID, resource group, namespace, and notification hub:
+
+```http
+https://management.azure.com/subscriptions/{subscription}/resourceGroups/{resource-group}/providers/Microsoft.NotificationHubs/namespaces/{namespace}/notificationHubs/{hub}?api-version=2016-03-01
+```
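+
+As a minimal sketch, you can issue that call from .NET with `HttpClient`. The request body uses the credential format shown earlier; the placeholder values and the bearer token are assumptions you must supply:
+
+```csharp
+using System;
+using System.Net.Http;
+using System.Text;
+
+// Placeholders: subscription, resource group, namespace, hub, and an ARM access token.
+var url = "https://management.azure.com/subscriptions/{subscription}/resourceGroups/{resource-group}" +
+    "/providers/Microsoft.NotificationHubs/namespaces/{namespace}/notificationHubs/{hub}?api-version=2016-03-01";
+var body = """
+{
+  "location": "South Central US",
+  "properties": {
+    "browserCredential": {
+      "properties": {
+        "subject": "mailto:email@contoso.com",
+        "vapidPublicKey": "some-vapid-public-key",
+        "vapidPrivateKey": "some-vapid-private-key"
+      }
+    }
+  }
+}
+""";
+
+using var client = new HttpClient();
+client.DefaultRequestHeaders.Add("Authorization", "Bearer <arm-access-token>");
+var response = await client.PutAsync(url, new StringContent(body, Encoding.UTF8, "application/json"));
+Console.WriteLine(response.StatusCode);
+```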
+
+## Create registrations and installations
+
+Bulk sends require registrations or installations. You can also use the registrations and installations in debug sends.
+
+The following examples show the request bodies for a native registration, a template registration, and a browser installation.
+
+### Native registration request body
+
+```xml
+<?xml version="1.0" encoding="utf-8"?><entry xmlns="http://www.w3.org/2005/Atom"><content type="application/xml"><BrowserRegistrationDescription xmlns:i="http://www.w3.org/2001/XMLSchema-instance" xmlns="http://schemas.microsoft.com/netservices/2010/10/servicebus/connect"><Endpoint></Endpoint><P256DH></P256DH><Auth></Auth></BrowserRegistrationDescription></content></entry>
+```
+
+### Browser template registration request body
+
+```xml
+<?xml version="1.0" encoding="utf-8"?>
+<entry xmlns="http://www.w3.org/2005/Atom">
+ <content type="application/xml">
+ <BrowserTemplateRegistrationDescription xmlns:i="http://www.w3.org/2001/XMLSchema-instance" xmlns="http://schemas.microsoft.com/netservices/2010/10/servicebus/connect">
+ <Endpoint></Endpoint>
+ <P256DH></P256DH>
+ <Auth></Auth>
+ <BodyTemplate><![CDATA[{"title":"asdf","message":"xts"}]]></BodyTemplate>
+ </BrowserTemplateRegistrationDescription>
+ </content>
+</entry>
+```
+
+### Installation request body
+
+```json
+{
+ "installationId": "installation-id",
+ "platform": "browser",
+ "pushChannel": {
+ "endpoint": "",
+ "p256dh": "",
+ "auth": ""
+ }
+}
+```
+
+### Create native registrations with .NET SDK
+
+To create a native registration, use the following statement:
+
+```csharp
+await notificationHubClient.CreateBrowserNativeRegistrationAsync(subscriptionInfo, tagSet);
+```
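+
+The .NET SDK examples in this article assume a `notificationHubClient` that's already initialized, for example as follows (the connection string and hub name are placeholders):
+
+```csharp
+using Microsoft.Azure.NotificationHubs;
+
+// Use the DefaultFullSharedAccessSignature connection string for your hub's namespace.
+var notificationHubClient = NotificationHubClient.CreateClientFromConnectionString(
+    "<connection-string>", "<hub-name>");
+```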
+
+### Create template registrations with .NET SDK
+
+To create a template registration using the .NET SDK, use the following statement:
+
+```csharp
+await notificationHubClient.CreateBrowserTemplateRegistrationAsync(subscriptionInfo, template, tagSet);
+```
+
+### Create browser installation with .NET SDK
+
+To create a browser installation using the .NET SDK, enter the following code:
+
+```csharp
+var browserPushSubscription = new BrowserPushSubscription
+ {
+ Endpoint = "",
+ P256DH = "",
+ Auth = "",
+ };
+var browserInstallation = new BrowserInstallation
+ {
+ InstallationId = installationId,
+ Tags = tags,
+ Subscription = browserPushSubscription,
+ UserId = userId,
+ ExpirationTime = DateTime.UtcNow.AddDays(1),
+ };
+await notificationHubClient.CreateOrUpdateInstallationAsync(browserInstallation);
+```
+
+## Send push notifications
+
+After you [set credentials for browser push](#set-credentials) and [create registrations and installations](#create-registrations-and-installations) for the devices, you're ready to create push notifications. This section describes how to create a notification for a [direct send](#create-direct-sends), [audience send](#create-audience-sends), and [debug (test) send](#create-debugtest-sends).
+
+### Create direct sends
+
+For a direct send, you need the endpoint URI, `p256dh` key, and `auth` secret from a browser subscription. For more information about direct send notifications, see [Direct send](/rest/api/notificationhubs/direct-send).
+
+To create a direct send notification, follow these steps:
+
+1. Set the following headers for browser push:
+
+   - `ServiceBusNotification-Format`: `browser`
+   - `ServiceBusNotification-DeviceHandle`: the `endpoint` field from the subscription
+   - `P256DH`: the `p256dh` field from the subscription
+   - `Auth`: the `auth` field from the subscription
+
+1. Create the message body. The message body is typically in this format:
+
+ ```json
+ {
+ "title": "Some Title",
+ "body": "Some body of a message"
+ }
+ ```
+
+ You can specify other fields in the body; for example, `icon`, to change the icon per message.
+
+1. Send the notification.
+
+ To create a direct send using the .NET SDK, use this code:
+
+   ```csharp
+   // Values come from the browser's push subscription.
+   var browserSubscriptionEndpoint = "";
+   var browserPushHeaders = new Dictionary<string, string>
+   {
+       { "P256DH", "" }, // p256dh key from the subscription
+       { "Auth", "" },   // auth secret from the subscription
+   };
+   // "payload" stands in for the JSON message body described earlier,
+   // for example: {"title":"Some Title","body":"Some body of a message"}
+   var directSendOutcome = await notificationHubClient.SendDirectNotificationAsync(new BrowserNotification("payload", browserPushHeaders), browserSubscriptionEndpoint);
+   ```
+
+### Create audience sends
+
+For an audience send, use the same `ServiceBusNotification-Format` header used for a direct send, and modify the message payload as desired. Optionally, specify a tag expression using the `ServiceBusNotification-Tags` header. For more information about creating an audience send, see [Send an APNS native notification](/rest/api/notificationhubs/send-apns-native-notification).
+
+To create an audience send using the SDK, use the following statement:
+
+```csharp
+var outcome = await notificationHubClient.SendNotificationAsync(new BrowserNotification(payload), tagExpression);
+```
+
+### Create debug/test sends
+
+Debug sends are created in the Azure portal and require registrations and installations.
+
+After you [create registrations for the devices](#create-registrations-and-installations), follow these steps to create a debug send notification:
+
+1. In the [Azure portal](https://portal.azure.com), open the **Test Send** blade in your notification hub.
+
+ [![Screenshot showing the Test Send blade in a notification hub, for sending a test/debug notification.](media/browser-push/notification-hubs-test-send.png)](media/browser-push/notification-hubs-test-send.png#lightbox)
+
+1. In the **Platform** field, select **Browser**.
+
+1. Specify **Send to Tag Expression**.
+
+1. Modify **Payload** to your desired message.
+
+1. Select **Send**.
+
+## Next steps
+
+- [Find out more about direct sends](/rest/api/notificationhubs/direct-send)
+- [Send batches directly to a collection of device handles](/rest/api/notificationhubs/direct-batch-send)
openshift Howto Restrict Egress https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/openshift/howto-restrict-egress.md
az vm create --name ubuntu-jump \
--resource-group $RESOURCEGROUP \ --generate-ssh-keys \ --admin-username $VMUSERNAME \
- --image UbuntuLTS \
+ --image Ubuntu2204 \
--subnet $JUMPSUBNET \ --public-ip-address jumphost-ip \ --vnet-name $AROVNET
operator-nexus Quickstarts Tenant Workload Deployment Ps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/quickstarts-tenant-workload-deployment-ps.md
+
+ Title: Create an Azure Operator Nexus virtual machine by using Azure PowerShell
+description: Learn how to create an Azure Operator Nexus virtual machine (VM) for virtual network function (VNF) workloads using PowerShell
++++ Last updated : 09/20/2023+++
+# Quickstart: Create an Azure Operator Nexus virtual machine by using Azure PowerShell
+
+* Deploy an Azure Nexus virtual machine using Azure PowerShell
+
+This quickstart guide is designed to help you get started with using Nexus virtual machines to host virtual network functions (VNFs). By following the steps outlined in this guide, you can quickly and easily create a customized Nexus virtual machine that meets your specific needs and requirements. Whether you're a beginner or an expert in Nexus networking, this guide shows you how to create and customize Nexus virtual machines for hosting virtual network functions.
+
+## Before you begin
+
+* Complete the [prerequisites](./quickstarts-tenant-workload-prerequisites.md) for deploying a Nexus virtual machine.
+
+## Create a Nexus virtual machine
+
+The following example creates a virtual machine named *myNexusVirtualMachine* in resource group *myResourceGroup* in the *eastus* location.
+
+Before you run the commands, you need to set several variables to define the configuration for your virtual machine. Here are the variables you need to set, along with some default values you can use for certain variables:
+
+| Variable | Description |
+| -- | |
+| LOCATION | The Azure region where you want to create your virtual machine. |
+| RESOURCE_GROUP | The name of the Azure resource group where you want to create the virtual machine. |
+| SUBSCRIPTION | The ID of your Azure subscription. |
+| CUSTOM_LOCATION | The resource ID of the custom location of the Azure Operator Nexus instance. |
+| CSN_ARM_ID | The unique identifier (resource ID) of the cloud services network you want to use. |
+| L3_NETWORK_ID | The unique identifier (resource ID) of the L3 network interface to be used by the virtual machine. |
+| NETWORK_INTERFACE_NAME | The name of the L3 network interface for the virtual machine. |
+| ADMIN_USERNAME | The username for the virtual machine administrator. |
+| SSH_PUBLIC_KEY | The SSH public key that is used for secure communication with the virtual machine. |
+| CPU_CORES | The number of CPU cores for the virtual machine (even number, max 44 vCPUs) |
+| MEMORY_SIZE | The amount of memory (in GB, max 224 GB) for the virtual machine. |
+| VM_DISK_SIZE | The size (in GB) of the virtual machine disk. |
+| VM_IMAGE | The URL of the virtual machine image. |
+| ACR_URL | The URL of the Azure Container Registry. |
+| ACR_USERNAME | The username for the Azure Container Registry. |
+| ACR_PASSWORD | The password for the Azure Container Registry. |
+| VMDEVICEMODEL | The VM device model. Defaults to T2; available options are T2 (Modern) and T1 (Transitional). |
+| USERDATA | The base64-encoded string of cloud-init user data. |
+| BOOTMETHOD | The method used to boot the virtual machine: UEFI or BIOS. |
+| OS_DISK_CREATE_OPTION | The OS disk create option; specifies the ephemeral disk option. |
+| OS_DISK_DELETE_OPTION | The OS disk delete option; specifies the disk delete behavior. |
+| IP_AllOCATION_METHOD | The IP allocation method, valid for L3 networks: Dynamic, Static, or Disabled. |
+| NETWORKATTACHMENTNAME | The name of the network attachment for the workload. |
+| NETWORKDATA | The base64-encoded string of cloud-init network data. |
+
+Once you've defined these variables, you can run the Azure PowerShell command to create the virtual machine. Add the `-Debug` flag at the end to provide more detailed output for troubleshooting purposes.
+
+To define these variables, use the following set commands and replace the example values with your preferred values. You can also use the default values for some of the variables, as shown in the following example:
+
+```azurepowershell-interactive
+# Azure parameters
+$RESOURCE_GROUP="myResourceGroup"
+$SUBSCRIPTION="<Azure subscription ID>"
+$CUSTOM_LOCATION="/subscriptions/<subscription_id>/resourceGroups/<managed_resource_group>/providers/microsoft.extendedlocation/customlocations/<custom-location-name>"
+$CUSTOM_LOCATION_TYPE="CustomLocation"
+$LOCATION="<ClusterAzureRegion>"
+
+# VM parameters
+$VM_NAME="myNexusVirtualMachine"
+$BOOT_METHOD="UEFI"
+$OS_DISK_CREATE_OPTION="Ephemeral"
+$OS_DISK_DELETE_OPTION="Delete"
+$NETWORKDATA="bmV0d29ya0RhdGVTYW1wbGU="
+$VMDEVICEMODEL="T2"
+$USERDATA=""
+
+# VM credentials
+$ADMIN_USERNAME="admin"
+$SSH_PUBLIC_KEY = @{
+ KeyData = "$(cat ~/.ssh/id_rsa.pub)"
+}
+
+# Network parameters
+$CSN_ARM_ID="/subscriptions/<subscription_id>/resourceGroups/<resource_group>/providers/Microsoft.NetworkCloud/cloudServicesNetworks/<csn-name>"
+$L3_NETWORK_ID="/subscriptions/<subscription_id>/resourceGroups/<resource_group>/providers/Microsoft.NetworkCloud/l3Networks/<l3Network-name>"
+$IP_AllOCATION_METHOD="Dynamic"
+$CSN_ATTACHMENT_DEFAULTGATEWAY="False"
+$CSN_ATTACHMENT_NAME="<l3Network-name>"
+$ISOLATE_EMULATOR_THREAD="True"
+$VIRTIOINTERFACE="Modern"
+$NETWORKATTACHMENTNAME="mgmt0"
+
+# VM Size parameters
+$CPU_CORES=4
+$MEMORY_SIZE=12
+$VM_DISK_SIZE="64"
+
+# Virtual Machine Image parameters
+$VM_IMAGE="<VM image, example: myacr.azurecr.io/ubuntu:20.04>"
+$ACR_URL="<Azure container registry URL, example: myacr.azurecr.io>"
+$ACR_USERNAME="<Azure container registry username>"
+
+$NETWORKATTACHMENT = New-AzNetworkCloudNetworkAttachmentObject `
+-AttachedNetworkId $L3_NETWORK_ID `
+-IpAllocationMethod $IP_AllOCATION_METHOD `
+-DefaultGateway "True" `
+-Name $NETWORKATTACHMENTNAME
+
+$SECUREPASSWORD = ConvertTo-SecureString "<YourPassword>" -asplaintext -force
+```
+
+> [!IMPORTANT]
+> It is essential that you replace the placeholders for CUSTOM_LOCATION, CSN_ARM_ID, L3_NETWORK_ID, and the ACR parameters with your actual values before running these commands.
+
+After defining these variables, you can create the virtual machine by executing the following Azure PowerShell command.
+
+```azurepowershell-interactive
+New-AzNetworkCloudVirtualMachine -Name $VM_NAME `
+-ResourceGroupName $RESOURCE_GROUP `
+-AdminUsername $ADMIN_USERNAME `
+-CloudServiceNetworkAttachmentAttachedNetworkId $CSN_ARM_ID `
+-CloudServiceNetworkAttachmentIPAllocationMethod $IP_AllOCATION_METHOD `
+-CpuCore $CPU_CORES `
+-ExtendedLocationName $CUSTOM_LOCATION `
+-ExtendedLocationType $CUSTOM_LOCATION_TYPE `
+-Location $LOCATION `
+-SubscriptionId $SUBSCRIPTION `
+-MemorySizeGb $MEMORY_SIZE `
+-OSDiskSizeGb $VM_DISK_SIZE `
+-VMImage $VM_IMAGE `
+-BootMethod $BOOT_METHOD `
+-CloudServiceNetworkAttachmentDefaultGateway $CSN_ATTACHMENT_DEFAULTGATEWAY `
+-CloudServiceNetworkAttachmentName $CSN_ATTACHMENT_NAME `
+-IsolateEmulatorThread $ISOLATE_EMULATOR_THREAD `
+-NetworkAttachment $NETWORKATTACHMENT `
+-NetworkData $NETWORKDATA `
+-OSDiskCreateOption $OS_DISK_CREATE_OPTION `
+-OSDiskDeleteOption $OS_DISK_DELETE_OPTION `
+-SshPublicKey $SSH_PUBLIC_KEY `
+-UserData $USERDATA `
+-VMDeviceModel $VMDEVICEMODEL `
+-VMImageRepositoryCredentialsUsername $ACR_USERNAME `
+-VMImageRepositoryCredentialsPassword $SECUREPASSWORD `
+-VMImageRepositoryCredentialsRegistryUrl $ACR_URL
+```
+
+After a few minutes, the command completes and returns information about the virtual machine. You've created the virtual machine, and it's ready to use.
+
+> [!NOTE]
+> If each server has two CPU chipsets and each CPU chip has 28 cores, then with hyperthreading enabled (the default), each CPU chip supports 56 vCPUs. With 8 vCPUs in each chip reserved for infrastructure (OS and agents), the remaining 48 are available for tenant workloads.
+
+## Review deployed resources
++
+## Clean up resources
++
+## Next steps
+
+You've successfully created a Nexus virtual machine. You can now use the virtual machine to host virtual network functions (VNFs).
operator-nexus Quickstarts Tenant Workload Deployment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/quickstarts-tenant-workload-deployment.md
After a few minutes, the command completes and returns information about the vir
## Clean up resources ## Next steps
operator-nexus Quickstarts Tenant Workload Prerequisites https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/quickstarts-tenant-workload-prerequisites.md
Create an L2 network, if necessary, for your workloads. You can repeat the instr
Gather the resource ID of the L2 isolation domain that you [created](#l2-isolation-domain) to configure the VLAN for this network.
-Here's an example Azure CLI command:
+### [Azure CLI](#tab/azure-cli)
-```azurecli
+```azurecli-interactive
az networkcloud l2network create --name "<YourL2NetworkName>" \ --resource-group "<YourResourceGroupName>" \ --subscription "<YourSubscription>" \
Here's an example Azure CLI command:
--l2-isolation-domain-id "<YourL2IsolationDomainId>" ```
+### [Azure PowerShell](#tab/azure-powershell)
+
+```azurepowershell-interactive
+New-AzNetworkCloudL2Network -Name "<YourL2NetworkName>" `
+-ResourceGroupName "<YourResourceGroupName>" `
+-ExtendedLocationName "<ClusterCustomLocationId>" `
+-ExtendedLocationType "CustomLocation" `
+-L2IsolationDomainId "<YourL2IsolationDomainId>" `
+-Location "<ClusterAzureRegion>" `
+-InterfaceName "<InterfaceName>" `
+-Subscription "<YourSubscription>"
+```
+++ #### Create an L3 network Create an L3 network, if necessary, for your workloads. Repeat the instructions for each required L3 network.
You need:
- The `ip-allocation-type` value, which can be `IPv4`, `IPv6`, or `DualStack` (default). - The `vlan` value, which must match what's in the L3 isolation domain.
-```azurecli
+### [Azure CLI](#tab/azure-cli)
+
+```azurecli-interactive
az networkcloud l3network create --name "<YourL3NetworkName>" \ --resource-group "<YourResourceGroupName>" \ --subscription "<YourSubscription>" \
You need:
--vlan <YourNetworkVlan> ```
+### [Azure PowerShell](#tab/azure-powershell)
+
+```azurepowershell-interactive
+New-AzNetworkCloudL3Network -Name "<YourL3NetworkName>" `
+-ResourceGroupName "<YourResourceGroupName>" `
+-Subscription "<YourSubscription>" `
+-Location "<ClusterAzureRegion>" `
+-ExtendedLocationName "<ClusterCustomLocationId>" `
+-ExtendedLocationType "CustomLocation" `
+-Vlan "<YourNetworkVlan>" `
+-L3IsolationDomainId "<YourL3IsolationDomainId>" `
+-Ipv4ConnectedPrefix "<YourNetworkIpv4Prefix>" `
+-Ipv6ConnectedPrefix "<YourNetworkIpv6Prefix>"
+```
+++ #### Create a trunked network Create a trunked network, if necessary, for your VM. Repeat the instructions for each required trunked network. Gather the `resourceId` values of the L2 and L3 isolation domains that you created earlier to configure the VLANs for this network. You can include as many L2 and L3 isolation domains as needed.
-```azurecli
+### [Azure CLI](#tab/azure-cli)
+
+```azurecli-interactive
az networkcloud trunkednetwork create --name "<YourTrunkedNetworkName>" \ --resource-group "<YourResourceGroupName>" \ --subscription "<YourSubscription>" \
Gather the `resourceId` values of the L2 and L3 isolation domains that you creat
"<YourL3IsolationDomainId3>" \ --vlans <YourVlanList> ```
+### [Azure PowerShell](#tab/azure-powershell)
+
+```azurepowershell-interactive
+New-AzNetworkCloudTrunkedNetwork -Name "<YourTrunkedNetworkName>" `
+-ResourceGroupName "<YourResourceGroupName>" `
+-SubscriptionId "<YourSubscription>" `
+-ExtendedLocationName "<ClusterCustomLocationId>" `
+-ExtendedLocationType "CustomLocation" `
+-Location "<ClusterAzureRegion>" `
+-IsolationDomainId "<YourL3IsolationDomainId>" `
+-InterfaceName "<YourNetworkInterfaceName>" `
+-Vlan "<YourVlanList>"
+```
++ #### Create a cloud services network Your VM requires at least one cloud services network. You need the egress endpoints that you want to add to the proxy for your VM to access. This list should include any domains needed to pull images or access data, such as `.azurecr.io` or `.docker.io`.
-```azurecli
+### [Azure CLI](#tab/azure-cli)
+
+```azurecli-interactive
az networkcloud cloudservicesnetwork create --name "<YourCloudServicesNetworkName>" \ --resource-group "<YourResourceGroupName >" \ --subscription "<YourSubscription>" \
Your VM requires at least one cloud services network. You need the egress endpoi
--additional-egress-endpoints "[{\"category\":\"<YourCategory >\",\"endpoints\":[{\"<domainName1 >\":\"< endpoint1 >\",\"port\":<portnumber1 >}]}]" ```
+### [Azure PowerShell](#tab/azure-powershell)
+
+```azurepowershell-interactive
+$endpointEgressList = @()
+$endpointList = @()
+$endpoint = New-AzNetworkCloudEndpointDependencyObject `
+ -DomainName "<domainName1>" `
+ -Port "<portnumber1>"
+$endpointList+= $endpoint
+$additionalEgressEndpoint = New-AzNetworkCloudEgressEndpointObject `
+ -Category "YourCategory" `
+ -Endpoint $endpointList
+$endpointEgressList+= $additionalEgressEndpoint
+
+New-AzNetworkCloudServicesNetwork -CloudServicesNetworkName "<YourCloudServicesNetworkName>" `
+-ResourceGroupName "<YourResourceGroupName>" `
+-Subscription "<YourSubscription>" `
+-ExtendedLocationName "<ClusterCustomLocationId>" `
+-ExtendedLocationType "CustomLocation" `
+-Location "<ClusterAzureRegion>" `
+-AdditionalEgressEndpoint $endpointEgressList `
+-EnableDefaultEgressEndpoint "False"
+```
+++ #### Using the proxy to reach outside of the virtual machine Once you have created your VM or Kubernetes cluster with this cloud services network, you can use the proxy to reach outside of the virtual machine. Proxy is useful if you need to access resources outside of the virtual machine, such as pulling images or accessing data.
If you don't specify a zone when you're creating a Nexus Kubernetes cluster, the
To get the list of available zones in the Azure Operator Nexus instance, you can use the following command:
-```azurecli
+### [Azure CLI](#tab/azure-cli)
+
+```azurecli-interactive
az networkcloud cluster show \ --resource-group <Azure Operator Nexus on-premises cluster resource group> \ --name <Azure Operator Nexus on-premises cluster name> \ --query computeRackDefinitions[*].availabilityZone ```+
+### [Azure PowerShell](#tab/azure-powershell)
+
+```azurepowershell-interactive
+Get-AzNetworkCloudCluster -Name "<Azure Operator Nexus on-premises cluster name>" `
+-ResourceGroupName "<Azure Operator Nexus on-premises cluster resource group>" `
+-SubscriptionId "<YourSubscription>" `
+| Select -ExpandProperty ComputeRackDefinition `
+| Select-Object -Property AvailabilityZone
+```
++
public-multi-access-edge-compute-mec Quickstart Create Vm Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/public-multi-access-edge-compute-mec/quickstart-create-vm-cli.md
In this quickstart, you learn how to use Azure CLI to deploy a Linux virtual mac
The following example creates a VM named myVMEdge and adds a user account named azureuser at Azure public MEC: ```azurecli
- az vm create \--resource-group myResourceGroup \--name myVMEdge \--image UbuntuLTS \--admin-username azureuser \--admin-password <password> \--edge-zone <edgezone ID> \--public-ip-sku Standard
+ az vm create \
+   --resource-group myResourceGroup \
+   --name myVMEdge \
+   --image Ubuntu2204 \
+   --admin-username azureuser \
+   --admin-password <password> \
+   --edge-zone <edgezone ID> \
+   --public-ip-sku Standard
``` The `--edge-zone` parameter determines the Azure public MEC location where the VM and its associated resources are created. Because Azure public MEC supports only standard SKU for a public IP, you must specify `Standard` for the `--public-ip-sku` parameter.
To use SSH to connect to the VM in Azure public MEC, the best method is to deplo
The following example creates a VM named myVMRegion in the region: ```azurecli
- az vm create --resource-group myResourceGroup --name myVMRegion --image UbuntuLTS --admin-username azureuser --admin-password <password> --vnet-name MyVnetRegion --subnet MySubnetRegion --public-ip-sku Standard
+ az vm create --resource-group myResourceGroup --name myVMRegion --image Ubuntu2204 --admin-username azureuser --admin-password <password> --vnet-name MyVnetRegion --subnet MySubnetRegion --public-ip-sku Standard
``` 1. Note your `publicIpAddress` value in the output from the myVMregion VM. Use this address to access the VM in the next sections.
resource-mover About Move Process https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/resource-mover/about-move-process.md
# About the move process
-[Azure Resource Mover](overview.md) helps you to move Azure resources across Azure regions. This article summarizes the components used by Resource Mover and describes the move process.
+[Azure Resource Mover](overview.md) helps you to move Azure resources across Azure regions.
+
+This article summarizes the components used by Resource Mover and describes the move process.
## Components
The table summarizes what's impacted when you're moving across regions.
## Next steps
-[Move](tutorial-move-region-virtual-machines.md) Azure VMs to another region.
-[Move](tutorial-move-region-sql.md) Azure SQL resources to another region.
+- [Move](tutorial-move-region-virtual-machines.md) Azure VMs to another region.
+- [Move](tutorial-move-region-sql.md) Azure SQL resources to another region.
resource-mover Move Region Availability Zone https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/resource-mover/move-region-availability-zone.md
Previously updated : 02/10/2023 Last updated : 09/29/2023 #Customer intent: As an Azure admin, I want to move Azure VMs to a different Azure region.
If you want to move VMs to a different availability zone in the same region, [re
- The subscription needs enough quota to create the source resources in the target region. If it doesn't, request additional limits. [Learn more](../azure-resource-manager/management/azure-subscription-service-limits.md). - Verify pricing and charges associated with the target region to which you're moving VMs. Use the [pricing calculator](https://azure.microsoft.com/pricing/calculator/) to help you. -- ## Check VM requirements 1. Check that the VMs you want to move are supported.
If you want to move VMs to a different availability zone in the same region, [re
## Select resources to move
-Select resources you want to move.
--- You can select any supported resource type across resource groups in the source region you select.-- You move resources to a target region in the source region subscription. If you want to change the subscription, you can do that after the resources are moved.
+- You can select any supported resource type across resource groups in the source region you have selected.
+- You can move resources to a target region in the source region subscription. If you want to change the subscription, you can do that after the resources are moved.
1. In the Azure portal, search for resource mover. Then, under **Services**, select **Azure Resource Mover**.
Select resources you want to move.
![Button to get started](./media/move-region-availability-zone/get-started.png) 3. In **Move resources** > **Source + destination**, select the source subscription and region.
-4. In **Destination**, select the region to which you want to move the VMs. Then click **Next**.
+4. In **Destination**, select the region to which you want to move the VMs. Then select **Next**.
![Page to fill in source and destination subscription/region](./media/move-region-availability-zone/source-target.png)
-6. In **Resources to move**, click **Select resources**.
-7. In **Select resources**, select the VM. You can only add resources supported for move. Then click **Done**. In **Resources to move**, click **Next**.
+6. In **Resources to move**, select **Select resources**.
+7. In **Select resources**, select the VM. You can only add resources supported for move. Then select **Done**. In **Resources to move**, select **Next**.
![Page to select VMs to move](./media/move-region-availability-zone/select-vm.png) 8. In **Review + Add**, check the source and destination settings. ![Page to review settings and proceed with move](./media/move-region-availability-zone/review.png)
-9. Click **Proceed**, to begin adding the resources.
-10. After the add process finishes successfully, click **Adding resources for move** in the notification icon.
+9. Select **Proceed** to begin adding the resources.
+10. After the add process finishes successfully, select **Adding resources for move** in the notification icon.
![Message in notifications](./media/move-region-availability-zone/notification.png)
-After clicking the notification, resources appear on the **Across regions** page
- > [!NOTE]
-> After clicking the notification, resources appear on the **Across regions** page, in a *Prepare pending* state.
+> After selecting the notification, resources appear on the **Across regions** page, in a *Prepare pending* state.
> - If you want to remove a resource from a move collection, the method for doing that depends on where you are in the move process. [Learn more](remove-move-resources.md). ## Resolve dependencies 1. Dependencies are auto-validated at the beginning when you add the resources. If the initial auto-validation doesn't resolve all issues, the **Validate dependencies** ribbon appears. Select the ribbon to validate manually.
-2. If dependencies are found, click **Add dependencies**.
+2. If dependencies are found, select **Add dependencies**.
3. In **Add dependencies**, select the dependent resources > **Add dependencies**. Monitor progress in the notifications. ![Button to add dependencies](./media/move-region-availability-zone/add-dependencies.png)
Before you can prepare and move VMs, the source resource group must be present i
Prepare as follows: 1. In **Across regions**, select the source resource group > **Prepare**.
-2. In **Prepare resources**, click **Prepare**.
+2. In **Prepare resources**, select **Prepare**.
![Button to prepare the source resource group](./media/move-region-availability-zone/prepare-resource-group.png)
Prepare as follows:
Initiate the move as follows: 1. In **Across regions**, select the resource group > **Initiate Move**
-2. ln **Move Resources**, click **Initiate move**. The resource group moves into an *Initiate move in progress* state.
+2. In **Move Resources**, select **Initiate move**. The resource group moves into an *Initiate move in progress* state.
3. After initiating the move, the target resource group is created, based on the generated ARM template. The source resource group moves into a *Commit move pending* state. ![Status showing commit move](./media/move-region-availability-zone/commit-move-pending.png)
Initiate the move as follows:
To commit and finish the move process: 1. In **Across regions**, select the resource group > **Commit move**
-2. ln **Move Resources**, click **Commit**.
+2. In **Move Resources**, select **Commit**.
> [!NOTE] > After committing the move, the source resource group is in a *Delete source pending* state.
To commit and finish the move process:
Before we move the rest of the resources, we'll set a target availability zone for the VM.
-1. In the **Across regions** page, click the link in the **Destination configuration** column of the VM you're moving.
+1. In the **Across regions** page, select the link in the **Destination configuration** column of the VM you're moving.
![VM properties](./media/move-region-availability-zone/select-vm-settings.png)
Now that the source resource group is moved, you can prepare to move the other r
With resources prepared, you can now initiate the move.
-1. In **Across regions**, select resources with state *Initiate move pending*. Then click **Initiate move**
-2. In **Move resources**, click **Initiate move**.
+1. In **Across regions**, select resources with state *Initiate move pending*. Then select **Initiate move**
+2. In **Move resources**, select **Initiate move**.
![Page to initiate move of resources](./media/move-region-availability-zone/initiate-move.png)
After the initial move, you can decide whether you want to commit the move, or t
You can discard the move as follows:
-1. In **Across regions**, select resources with state *Commit move pending*, and click **Discard move**.
-2. In **Discard move**, click **Discard**.
+1. In **Across regions**, select resources with state *Commit move pending*, and select **Discard move**.
+2. In **Discard move**, select **Discard**.
3. Track move progress in the notifications bar.
You can discard the move as follows:
If you want to complete the move process, commit the move.
-1. In **Across regions**, select resources with state *Commit move pending*, and click **Commit move**.
-2. In **Commit resources**, click **Commit**.
+1. In **Across regions**, select resources with state *Commit move pending*, and select **Commit move**.
+2. In **Commit resources**, select **Commit**.
![Page to commit resources to finalize move](./media/move-region-availability-zone/commit-resources.png)
The Mobility service isn't uninstalled automatically from VMs. Uninstall it manu
After the move, you can optionally delete resources in the source region.
-1. In **Across Regions**, click the name of each source resource that you want to delete.
+1. In **Across Regions**, select the name of each source resource that you want to delete.
2. In the properties page for each resource, select **Delete**. ## Delete additional resources created for move
resource-mover Move Region Within Resource Group https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/resource-mover/move-region-within-resource-group.md
Select resources you want to move. You move resources to a target region in the
Resources you're moving appear in the **Across regions** page, in a *Prepare pending* state. Start validation as follows:
-1. Dependencies are *auto validated* at the beginning when you add the resources. If the initial auto validation does not resolves the issue, you will see a **Validate dependencies** ribbon, select it to validate manually.
+1. Dependencies are validated in the background after you add them. If you see a **Validate dependencies** button, select it to trigger the manual validation.
![Button to validate dependencies](./media/move-region-within-resource-group/validate-dependencies.png)
resource-mover Tutorial Move Region Encrypted Virtual Machines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/resource-mover/tutorial-move-region-encrypted-virtual-machines.md
To select the resources, do the following:
To resolve dependencies before the move, follow these steps:
-1. Dependencies are automatically validated in the background when you add the resources. If you still see the **Validate dependencies** option, select it to trigger the validation manually.
+1. Dependencies are validated in the background after you add them. If you see a **Validate dependencies** button, select it to trigger the manual validation.
:::image type="content" source="./media/tutorial-move-region-encrypted-virtual-machines/check-dependencies.png" alt-text="Screenshot showing the 'Validate dependencies' button." lightbox="./media/tutorial-move-region-encrypted-virtual-machines/check-dependencies.png":::
resource-mover Tutorial Move Region Sql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/resource-mover/tutorial-move-region-sql.md
To select the resources you want to move, follow these steps:
To resolve the dependent resources you want to move, follow these steps:
-1. Dependencies are automatically validated in the background when you add the resources. If you still see the **Validate dependencies** option, select it to trigger the validation manually.
+1. Dependencies are auto-validated in the background when you add the resources. If the initial auto-validation doesn't resolve the issue, you see a **Validate dependencies** option; select it to validate manually.
1. If dependencies are found, select **Add dependencies**. :::image type="content" source="./media/tutorial-move-region-sql/add-dependencies.png" alt-text="Screenshot displays button to add dependencies." lightbox="./media/tutorial-move-region-sql/add-dependencies.png"::: 3. In **Add dependencies**, select the dependent resources > **Add dependencies**. You can monitor the progress in the notifications.
-4. Dependencies are automatically validated in the background once you add the dependencies. If you see a **Validate dependencies** option, select it to trigger the manual validation.
+4. Dependencies are auto-validated in the background once you add the dependencies. If you see a **Validate dependencies** option, select it to trigger the manual validation.
5. On the **Across regions** page, verify that the resources are now in a *Prepare pending* state with no issues.
resource-mover Tutorial Move Region Virtual Machines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/resource-mover/tutorial-move-region-virtual-machines.md
To select the resources you want to move, follow these steps:
To resolve dependencies before the move, follow these steps:
-1. Dependencies are automatically validated in the background when you add the resources. If you still see the **Validate dependencies** option, select it to trigger the validation manually.
+1. Dependencies are automatically validated in the background when you add the resources. If you still see the **Validate dependencies** option, select it to trigger the validation manually.
2. If dependencies are found, select **Add dependencies** to add them. 3. On **Add dependencies**, retain the default **Show all dependencies** option.
To resolve dependencies before the move, follow these steps:
:::image type="content" source="./media/tutorial-move-region-virtual-machines/add-dependencies.png" alt-text="Screenshot displays add dependencies page." lightbox="./media/tutorial-move-region-virtual-machines/add-dependencies.png":::
-4. Dependencies are automatically validated in the background once you add them. If you see a **Validate dependencies** button, select it to trigger the manual validation.
+4. Dependencies are validated in the background after you add them. If you see a **Validate dependencies** button, select it to trigger the manual validation.
:::image type="content" source="./media/tutorial-move-region-virtual-machines/add-additional-dependencies.png" alt-text="Screenshot displays page to add additional dependencies." lightbox="./media/tutorial-move-region-virtual-machines/add-additional-dependencies.png":::
search Hybrid Search Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/hybrid-search-overview.md
+
+ Title: Hybrid search
+
+description: Describes concepts and architecture of hybrid query processing and document retrieval. Hybrid queries combine vector search and full text search.
+++++ Last updated : 09/27/2023++
+# Hybrid search using vectors and full text in Azure Cognitive Search
+
+> [!IMPORTANT]
+> Hybrid search uses the [vector features](vector-search-overview.md) currently in public preview under [supplemental terms of use](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+
+Hybrid search is a combination of full text and vector queries that execute against a search index that contains both searchable plain text content and generated embeddings. For query purposes, hybrid search is:
+
++ A single query request that includes `search` and `vectors` parameters, multiple vector queries, or one vector query targeting multiple fields
++ Parallel query execution
++ Merged results in the query response, scored using [Reciprocal Rank Fusion (RRF)](hybrid-search-ranking.md)
+
+This article explains the concepts, benefits, and limitations of hybrid search.
+
+## How does hybrid search work?
+
+In Azure Cognitive Search, vector indexes containing embeddings can live alongside textual and numerical fields, allowing you to issue hybrid full text and vector queries. Hybrid queries can take advantage of existing functionality like filtering, faceting, sorting, scoring profiles, and [semantic ranking](semantic-search-overview.md) in a single search request.
+
+Hybrid search combines results from both full text and vector queries, which use different ranking functions such as BM25 and cosine similarity. To present these results in a single ranked list, a method of merging the ranked result lists is needed.
+
+## Structure of a hybrid query
+
+Hybrid search is predicated on having a search index that contains fields of various types, including plain text and numbers, geo coordinates for geospatial search, and vectors for mathematical representations of chunks of text, images, audio, or video. You can use almost all query capabilities in Cognitive Search with a vector query, except for client-side interactions such as autocomplete and suggestions.
+
+A representative hybrid query might be as follows (notice the vector is trimmed for brevity):
+
+```http
+POST https://{{searchServiceName}}.search.windows.net/indexes/hotels-vector-quickstart/docs/search?api-version=2023-07-01-Preview
+ content-type: application/json
+
+{
+ "count": true,
+ "search": "historic hotel walk to restaurants and shopping",
+ "select": "HotelId, HotelName, Category, Description, Address/City, Address/StateProvince",
+ "filter": "geo.distance(Location, geography'POINT(-77.03241 38.90166)') le 300",
+ "facets": [ "Address/StateProvince"],
+ "vectors": [
+ {
+      "value": [ <array of embeddings> ],
+ "k": 7,
+ "fields": "DescriptionVector"
+ },
+ {
+      "value": [ <array of embeddings> ],
+ "k": 7,
+ "fields": "Description_frVector"
+ }
+ ],
+ "queryType": "semantic",
+ "queryLanguage": "en-us",
+ "semanticConfiguration": "my-semantic-config"
+}
+```
+
+Key points include:
+
++ `search` specifies a full text search query.
++ `vectors` for vector queries, which can be multiple, targeting multiple vector fields. If the embedding space includes multi-lingual content, vector queries can find the match with no language analyzers or translation required.
++ `select` specifies which fields to return in results, which can be text fields that are human readable.
++ `filters` can specify geospatial search or other include and exclude criteria, such as whether parking is included. The geospatial query in this example finds hotels within a 300-kilometer radius of Washington D.C.
++ `facets` can be used to compute facet buckets over results that are returned from hybrid queries.
++ `queryType=semantic` invokes semantic ranking, applying machine reading comprehension to surface more relevant search results.
+
+Filters and facets target data structures within the index that are distinct from the inverted indexes used for full text search and the vector indexes used for vector search. As such, when filters and faceted operations execute, the search engine can apply the operational result to the hybrid search results in the response.
+
+Notice how there's no `orderby` in the query. Explicit sort orders override relevance-ranked results, so if you want similarity and BM25 relevance, omit sorting in your query.
+
+A response from the above query might look like this:
+
+```json
+{
+ "@odata.count": 3,
+ "@search.facets": {
+ "Address/StateProvince": [
+ {
+ "count": 1,
+ "value": "NY"
+ },
+ {
+ "count": 1,
+ "value": "VA"
+ }
+ ]
+ },
+ "value": [
+ {
+ "@search.score": 0.03333333507180214,
+ "@search.rerankerScore": 2.5229012966156006,
+ "HotelId": "49",
+ "HotelName": "Old Carrabelle Hotel",
+ "Description": "Spacious rooms, glamorous suites and residences, rooftop pool, walking access to shopping, dining, entertainment and the city center.",
+ "Category": "Luxury",
+ "Address": {
+ "City": "Arlington",
+ "StateProvince": "VA"
+ }
+ },
+ {
+ "@search.score": 0.032522473484277725,
+ "@search.rerankerScore": 2.111117362976074,
+ "HotelId": "48",
+ "HotelName": "Nordick's Motel",
+ "Description": "Only 90 miles (about 2 hours) from the nation's capital and nearby most everything the historic valley has to offer. Hiking? Wine Tasting? Exploring the caverns? It's all nearby and we have specially priced packages to help make our B&B your home base for fun while visiting the valley.",
+ "Category": "Boutique",
+ "Address": {
+ "City": "Washington D.C.",
+ "StateProvince": null
+ }
+ }
+ ]
+}
+```
+
+## Benefits
+
+Hybrid search combines the strengths of vector search and keyword search. The advantage of vector search is finding information that's similar to your search query, even if there are no keyword matches in the inverted index. The advantage of keyword or full text search is precision, and the ability to apply semantic ranking that improves the quality of the initial results. Some scenarios, such as queries over product codes, highly specialized jargon, and dates, can perform better with keyword search because it can identify exact matches.
+
+Testing on real-world and benchmark datasets indicates that hybrid retrieval with semantic ranking offers significant benefits in search relevance.
+
+## See also
+
+[Outperform vector search with hybrid retrieval and ranking (Tech blog)](https://techcommunity.microsoft.com/t5/azure-ai-services-blog/azure-cognitive-search-outperforming-vector-search-with-hybrid/ba-p/3929167)
search Hybrid Search Ranking https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/hybrid-search-ranking.md
+
+ Title: Hybrid search scoring (RRF)
+
+description: Describes the Reciprocal Rank Fusion (RRF) algorithm used to unify search scores from parallel queries in Azure Cognitive Search.
+++++ Last updated : 09/27/2023++
+# Relevance scoring in hybrid search using Reciprocal Rank Fusion (RRF)
+
+> [!IMPORTANT]
+> Hybrid search uses the [vector features](vector-search-overview.md) currently in public preview under [supplemental terms of use](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+
+For hybrid search scoring, Cognitive Search uses the Reciprocal Rank Fusion (RRF) algorithm. RRF combines the results of different search methods - such as vector search and full text search or multiple vector queries executing in parallel - to produce a single relevance score. RRF is based on the concept of *reciprocal rank*, which is the inverse of the rank of the first relevant document in a list of search results. 
+
+The goal of the technique is to take into account the position of the items in the original rankings, and give higher importance to items that are ranked higher in multiple lists. This can help improve the overall quality and reliability of the final ranking, making it more useful for the task of fusing multiple ordered search results.
+
+In Azure Cognitive Search, RRF is used whenever there are two or more queries that execute in parallel. Each query produces a ranked result set, and RRF is used to merge and homogenize the rankings into a single result set, returned in the query response.
+
+## How RRF ranking works
+
+RRF works by taking the search results from multiple methods, assigning a reciprocal rank score to each document in the results, and then combining the scores to create a new ranking. The concept is that documents appearing in the top positions across multiple search methods are likely to be more relevant and should be ranked higher in the combined result.
+
+Here's a simple explanation of the RRF process:
+
+1. Obtain ranked search results from multiple queries executing in parallel for full text search and vector search.
+
+1. Assign reciprocal rank scores to each result in the ranked lists. RRF generates a new **`@search.score`** for each match in each result set. For each document in the search results, we assign a reciprocal rank score based on its position in the list. The score is calculated as `1/(rank + k)`, where `rank` is the position of the document in the list, and `k` is a constant, which was experimentally observed to perform best if it's set to a small value like 60. **Note that this `k` value is a constant in the RRF algorithm and entirely separate from the `k` that controls the number of nearest neighbors.**
+
+1. Combine scores. For each document, the engine sums the reciprocal rank scores obtained from each search system, producing a combined score for each document. 
+
+1. Rank documents based on combined scores and sort them. The resulting list is the fused ranking, as illustrated in the sketch after this list.
+
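+The following minimal sketch illustrates the fusion computation in C#. It's an illustration of the formula only, not service code, and the document keys and rankings are hypothetical:
+
+```csharp
+using System;
+using System.Collections.Generic;
+using System.Linq;
+
+const int k = 60; // RRF constant; unrelated to the k that controls nearest neighbors
+
+// Hypothetical ranked result lists (most relevant first).
+var fullTextRanking = new[] { "doc2", "doc1", "doc3" }; // BM25-ranked
+var vectorRanking   = new[] { "doc1", "doc4", "doc2" }; // similarity-ranked
+
+var fused = new Dictionary<string, double>();
+foreach (var list in new[] { fullTextRanking, vectorRanking })
+{
+    for (int rank = 1; rank <= list.Length; rank++)
+    {
+        fused.TryGetValue(list[rank - 1], out var score);
+        fused[list[rank - 1]] = score + 1.0 / (rank + k); // 1/(rank + k)
+    }
+}
+
+// Documents found by both queries accumulate two contributions and rise to the top.
+foreach (var (doc, score) in fused.OrderByDescending(p => p.Value))
+    Console.WriteLine($"{doc}: {score:F4}");
+```
+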
+Only fields marked as `searchable` in the index are used for scoring. Only fields marked as `retrievable`, or fields that are specified in `searchFields` in the query, are returned in search results, along with their search score.
+
+### Parallel query execution
+
+RRF is used anytime there's more than one query execution. The following examples illustrate query patterns where parallel query execution occurs:
+
++ A full text query, plus one vector query (simple hybrid scenario), equals two query executions.
++ A full text query, plus one vector query targeting two vector fields, equals three query executions.
++ A full text query, plus two vector queries targeting five vector fields, equals 11 query executions.
+
+## Scores in hybrid search results
+
+Whenever results are ranked, the **`@search.score`** property contains the value used to order the results. Scores are generated by ranking algorithms that vary for each method. Each algorithm has its own range and magnitude.
+
+The following chart identifies the scoring property returned on each match, algorithm, and range of scores for each relevance ranking algorithm.
+
+| Search method | Parameter | Scoring algorithm | Range |
+||--|-|-|
+| full-text search | `@search.score` | BM25 algorithm | No upper limit. |
+| vector search | `@search.score` | HNSW algorithm, using the similarity metric specified in the HNSW configuration. | 0.333 - 1.00 (Cosine), 0 to 1 for Euclidean and DotProduct. |
+| hybrid search | `@search.score` | RRF algorithm | Upper limit is only bounded by the number of queries being fused, with each query contributing a maximum of approximately 1 to the RRF score. |
+| semantic ranking | `@search.rerankerScore` | Semantic ranking | 1.00 - 4.00 |
+
+Semantic ranking doesn't participate in RRF. Its score (`@search.rerankerScore`) is always reported separately in the query response. Semantic ranking can rerank full text and hybrid search results, assuming those results include fields having semantically rich content.
+
+## Number of ranked results in a hybrid query response
+
+By default, if you aren't using pagination, the search engine returns the top 50 highest ranking matches for full text search, and it returns `k` matches for vector search. In a hybrid query, `top` determines the number of results in the response. Based on defaults, the top 50 highest ranked matches of the unified result set are returned. Full text search is subject to a maximum limit of 1,000 matches (see [API response limits](search-limits-quotas-capacity.md#api-response-limits)). Once 1,000 matches are found, the search engine no longer looks for more.
+
+You can use `top`, `skip`, and `next` for paginated results. Paging results is how you determine the number of results on each logical page and navigate through the full payload. For more information, see [How to work with search results](search-pagination-page-layout.md).
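+
+As a minimal sketch, paging might look like the following example with the Azure.Search.Documents .NET SDK. The service endpoint, index name, and API key are placeholders:
+
+```csharp
+using System;
+using Azure;
+using Azure.Search.Documents;
+using Azure.Search.Documents.Models;
+
+var client = new SearchClient(
+    new Uri("https://<service>.search.windows.net"),
+    "<index-name>",
+    new AzureKeyCredential("<query-api-key>"));
+
+// Request the second page of 10 results from the unified ranking.
+var options = new SearchOptions { Size = 10, Skip = 10 };
+SearchResults<SearchDocument> page = await client.SearchAsync<SearchDocument>("historic hotel", options);
+```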
+
+## See also
+
++ [Learn more about hybrid search](hybrid-search-overview.md)
++ [Learn more about vector search](vector-search-overview.md)
search Index Similarity And Scoring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/index-similarity-and-scoring.md
Title: BM25 relevance scoring description: Explains the concepts of BM25 relevance and scoring in Azure Cognitive Search, and what a developer can do to customize the scoring result.+ Previously updated : 09/25/2023 Last updated : 09/27/2023
-# BM25 relevance and scoring for full text search
+# Relevance scoring for full text search (BM25)
This article explains the BM25 relevance scoring algorithm used to compute search scores for [full text search](search-lucene-query-architecture.md). BM25 relevance is exclusive to full text search. Filter queries, autocomplete and suggested queries, wildcard search or fuzzy search queries aren't scored or ranked for relevance.
-In Azure Cognitive Search, you can configure algorithm parameters, and tune search relevance and boost search scores through these mechanisms:
+## Scoring algorithms used in full text search
+
+Azure Cognitive Search provides the following scoring algorithms for full text search:
+
+| Algorithm | Usage | Range |
+|--|-|-|
+| `BM25Similarity` | Fixed algorithm on all search services created after July 2020. You can configure this algorithm, but you can't switch to an older one (classic). | Unbounded. |
+|`ClassicSimilarity` | Present on older search services. You can [opt-in for BM25](index-ranking-similarity.md) and choose an algorithm on a per-index basis. | 0 < 1.00 |
+
+Both BM25 and Classic are TF-IDF-like retrieval functions that use the term frequency (TF) and the inverse document frequency (IDF) as variables to calculate relevance scores for each document-query pair, which is then used for ranking results. While conceptually similar to classic, BM25 is rooted in probabilistic information retrieval that produces more intuitive matches, as measured by user research.
+
+BM25 offers advanced customization options, such as allowing the user to decide how the relevance score scales with the term frequency of matched terms. For more information, see [Configure the scoring algorithm](index-ranking-similarity.md).
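+
+For reference, the standard BM25 scoring function has the following general form (shown for orientation; the exact implementation details in the service may differ):
+
+$$
+\text{score}(D,Q)=\sum_{q_i \in Q}\mathrm{IDF}(q_i)\cdot\frac{f(q_i,D)\,(k_1+1)}{f(q_i,D)+k_1\left(1-b+b\cdot\frac{|D|}{\mathrm{avgdl}}\right)}
+$$
+
+Here, $f(q_i,D)$ is the frequency of term $q_i$ in document $D$, $|D|$ is the document length, $\mathrm{avgdl}$ is the average document length in the index, and $k_1$ and $b$ are the tunable parameters described in [Configure the scoring algorithm](index-ranking-similarity.md).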
+
+> [!NOTE]
+> If you're using a search service that was created before July 2020, the scoring algorithm is most likely the previous default, `ClassicSimilarity`, which you can upgrade on a per-index basis. See [Enable BM25 scoring on older services](index-ranking-similarity.md#enable-bm25-scoring-on-older-services) for details.
+
+The following video segment fast-forwards to an explanation of the generally available ranking algorithms used in Azure Cognitive Search. You can watch the full video for more background.
-+ Scoring algorithm configuration
-+ Scoring profiles
-+ [Semantic ranking](semantic-search-overview.md)
-+ Custom scoring logic enabled through the *featuresMode* parameter
+> [!VIDEO https://www.youtube.com/embed/Y_X6USgvB1g?version=3&start=322&end=643]
-## Relevance scoring
+## How BM25 ranking works
Relevance scoring refers to the computation of a search score (**@search.score**) that serves as an indicator of an item's relevance in the context of the current query. The range is unbounded. However, the higher the score, the more relevant the item.
Search scores can be repeated throughout a result set. When multiple hits have t
To break the tie among repeating scores, you can add an **$orderby** clause to first order by score, then order by another sortable field (for example, `$orderby=search.score() desc,Rating desc`). For more information, see [$orderby](search-query-odata-orderby.md).
+Only fields marked as `searchable` in the index are used for scoring. Only fields marked as `retrievable`, or fields that are specified in `searchFields` in the query, are returned in search results, along with their search score.
+ > [!NOTE] > A `@search.score = 1` indicates an un-scored or un-ranked result set. The score is uniform across all results. Un-scored results occur when the query form is fuzzy search, wildcard or regex queries, or an empty search (`search=*`, sometimes paired with filters, where the filter is the primary means for returning a match).
-## Scoring algorithms in Search
-
-Azure Cognitive Search provides the following scoring algorithms:
-
-| Algorithm | Usage | Range |
-|--|-|-|
-| `BM25Similarity` | Fixed algorithm on all search services created after July 2020. You can configure this algorithm, but you can't switch to an older one (classic). | Unbounded. |
-|`ClassicSimilarity` | Present on older search services. You can [opt-in for BM25](index-ranking-similarity.md) and choose an algorithm on a per-index basis. | 0 < 1.00 |
-
-Both BM25 and Classic are TF-IDF-like retrieval functions that use the term frequency (TF) and the inverse document frequency (IDF) as variables to calculate relevance scores for each document-query pair, which is then used for ranking results. While conceptually similar to classic, BM25 is rooted in probabilistic information retrieval that produces more intuitive matches, as measured by user research.
-
-BM25 offers advanced customization options, such as allowing the user to decide how the relevance score scales with the term frequency of matched terms. For more information, see [Configure the scoring algorithm](index-ranking-similarity.md).
+## Scores in full text search results
-> [!NOTE]
-> If you're using a search service that was created before July 2020, the scoring algorithm is most likely the previous default, `ClassicSimilarity`, which you can upgrade on a per-index basis. See [Enable BM25 scoring on older services](index-ranking-similarity.md#enable-bm25-scoring-on-older-services) for details.
+Whenever results are ranked, the **`@search.score`** property contains the value used to order the results.
-The following video segment fast-forwards to an explanation of the generally available ranking algorithms used in Azure Cognitive Search. You can watch the full video for more background.
+The following table identifies the scoring property returned on each match, the algorithm that produces it, and its range.
-> [!VIDEO https://www.youtube.com/embed/Y_X6USgvB1g?version=3&start=322&end=643]
+| Search method | Score property | Scoring algorithm | Range |
+|--|--|--|--|
+| full text search | `@search.score` | BM25 algorithm, using the [parameters specified in the index](index-ranking-similarity.md#set-bm25-parameters). | Unbounded. |
-## Score variation
+### Score variation
Search scores convey a general sense of relevance, reflecting the strength of match relative to other documents in the same result set. But scores aren't always consistent from one query to the next, so as you work with queries, you might notice small discrepancies in how search documents are ordered. There are several explanations for why this might occur.
Search scores convey general sense of relevance, reflecting the strength of matc
<a name="scoring-statistics"></a>
-## Scoring statistics and sticky sessions
+### Scoring statistics and sticky sessions
For scalability, Azure Cognitive Search distributes each index horizontally through a sharding process, which means that [portions of an index are physically separate](search-capacity-planning.md#concepts-search-units-replicas-partitions-shards).
As long as the same `sessionId` is used, a best-effort attempt is made to target
> [!NOTE]
> Reusing the same `sessionId` values repeatedly can interfere with the load balancing of the requests across replicas and adversely affect the performance of the search service. The value used as `sessionId` can't start with a '_' character.
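For illustration, here's a minimal sketch of a query that pins scoring statistics to a session. It assumes an API version that supports the `sessionId` and `scoringStatistics` request properties, and the session value shown is a hypothetical placeholder:

```http
POST https://{{search-service-name}}.search.windows.net/indexes/{{index-name}}/docs/search?api-version={{api-version}}
Content-Type: application/json
api-key: {{query-api-key}}

{
    "search": "ocean view",
    "sessionId": "session-a1b2c3",
    "scoringStatistics": "global"
}
```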
-## Scoring profiles
+## Relevance tuning
-You can customize the way different fields are ranked by defining a *scoring profile*. Scoring profiles provide criteria for boosting the search score of a match based on content characteristics. For example, you might want to boost matches based on their revenue potential, promote newer items, or perhaps boost items that have been in inventory too long.
+In Azure Cognitive Search, you can configure BM25 algorithm parameters, tune search relevance, and boost search scores through these mechanisms:
-A scoring profile is part of the index definition, composed of weighted fields, functions, and parameters. For more information about defining one, see [Scoring Profiles](index-add-scoring-profiles.md).
+| Approach | Implementation | Description |
+|-|-|-|
+| [Scoring algorithm configuration](index-ranking-similarity.md) | Search index | Sets the BM25 parameters that control term frequency saturation (`k1`) and document length normalization (`b`) on a per-index basis. |
+| [Scoring profiles](index-add-scoring-profiles.md) | Search index | Provide criteria for boosting the search score of a match based on content characteristics. For example, you might want to boost matches based on their revenue potential, promote newer items, or perhaps boost items that have been in inventory too long. A scoring profile is part of the index definition, composed of weighted fields, functions, and parameters. You can update an existing index with scoring profile changes, without incurring an index rebuild.|
+| [Semantic ranking](semantic-search-overview.md) | Query request | Applies machine reading comprehension to search results, promoting more semantically relevant results to the top. |
+| [featuresMode parameter](#featuresmode-parameter-preview) | Query request | This parameter is mostly used for unpacking a score, but it can be used in code that provides a [custom scoring solution](https://github.com/Azure-Samples/search-ranking-tutorial). |
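To make the scoring profile row more concrete, here's a minimal sketch of a profile as it might appear in an index definition. The profile name, field names, and boost values are hypothetical, and the `freshness` function assumes an `Edm.DateTimeOffset` field:

```json
"scoringProfiles": [
    {
        "name": "boost-newer-items",
        "text": {
            "weights": { "HotelName": 2.0, "Description": 1.5 }
        },
        "functions": [
            {
                "type": "freshness",
                "fieldName": "LastRenovationDate",
                "boost": 2,
                "interpolation": "quadratic",
                "freshness": { "boostingDuration": "P365D" }
            }
        ]
    }
]
```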
<a name="featuresMode-param"></a>
search Retrieval Augmented Generation Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/retrieval-augmented-generation-overview.md
There's no query type in Cognitive Search - not even semantic or vector search -
| [Filters](search-filters.md) and [facets](search-faceted-navigation.md) | Applies to text or numeric (non-vector) fields only. Reduces the search surface area based on inclusion or exclusion criteria. | Adds precision to your queries. |
| [Semantic ranking](semantic-how-to-query-request.md) | Re-ranks a BM25 result set using semantic models. Produces short-form captions and answers that are useful as LLM inputs. | Easier than scoring profiles, and depending on your content, a more reliable technique for relevance tuning. |
| [Vector search](vector-search-how-to-query.md) | Query execution over vector fields for similarity search, where the query string is one or more vectors. | Vectors can represent all types of content, in any language. |
-| [Hybrid search](vector-search-ranking.md#hybrid-search) | Combines any or all of the above query techniques. Vector and non-vector queries execute in parallel and are returned in a unified result set. | The most significant gains in precision and recall are through hybrid queries. |
+| [Hybrid search](hybrid-search-overview.md) | Combines any or all of the above query techniques. Vector and non-vector queries execute in parallel and are returned in a unified result set. | The most significant gains in precision and recall are through hybrid queries. |
### Structure the query response
search Search Get Started Vector https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-get-started-vector.md
api-key: {{admin-api-key}}
### Cross-field vector search
-A cross-field vector query sends a single query across multiple vector fields in your search index. This query example looks for similarity in both "titleVector" and "contentVector" and displays scores using [Reciprocal Rank Fusion (RRF)](vector-search-ranking.md#reciprocal-rank-fusion-rrf-for-hybrid-queries):
+A cross-field vector query sends a single query across multiple vector fields in your search index. This query example looks for similarity in both "titleVector" and "contentVector" and displays scores using [Reciprocal Rank Fusion (RRF)](hybrid-search-ranking.md):
```http
POST https://{{search-service-name}}.search.windows.net/indexes/{{index-name}}/docs/search?api-version={{api-version}}
api-key: {{admin-api-key}}
### Multi-query vector search
-Multi-query vector search sends multiple queries across multiple vector fields in your search index. This query example looks for similarity in both `titleVector` and `contentVector`, but sends in two different query embeddings respectively. This scenario is ideal for multi-modal use cases where you want to search over a `textVector` field and an `imageVector` field. You can also use this scenario if you have different embedding models with different dimensions in your search index. This also displays scores using [Reciprocal Rank Fusion (RRF)](vector-search-ranking.md#reciprocal-rank-fusion-rrf-for-hybrid-queries).
+Multi-query vector search sends multiple queries across multiple vector fields in your search index. This query example looks for similarity in both `titleVector` and `contentVector`, but sends a different query embedding to each field. This scenario is ideal for multi-modal use cases where you want to search over a `textVector` field and an `imageVector` field. You can also use this scenario if you have different embedding models with different dimensions in your search index. This example also displays scores using [Reciprocal Rank Fusion (RRF)](hybrid-search-ranking.md).
```http
POST https://{{search-service-name}}.search.windows.net/indexes/{{index-name}}/docs/search?api-version={{api-version}}
api-key: {{admin-api-key}}
Hybrid search consists of keyword queries and vector queries in a single search request.
-The response includes the top 10 ordered by search score. Both vector queries and free text queries are assigned a search score according to the scoring or similarity functions configured on the fields (BM25 for text fields). The scores are merged using [Reciprocal Rank Fusion (RRF)](vector-search-ranking.md#reciprocal-rank-fusion-rrf-for-hybrid-queries) to weight each document with the inverse of its position in the ranked result set.
+The response includes the top 10 ordered by search score. Both vector queries and free text queries are assigned a search score according to the scoring or similarity functions configured on the fields (BM25 for text fields). The scores are merged using [Reciprocal Rank Fusion (RRF)](hybrid-search-ranking.md) to weight each document with the inverse of its position in the ranked result set.
```http
POST https://{{search-service-name}}.search.windows.net/indexes/{{index-name}}/docs/search?api-version={{api-version}}
search Search Lucene Query Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-lucene-query-architecture.md
Title: Full text query and indexing engine architecture (Lucene)
+ Title: Full text search
-description: Explore Lucene query processing and document retrieval concepts for full text search, as related to Azure Cognitive Search.
+description: Describes the concepts and architecture of query processing and document retrieval for full text search, as implemented in Azure Cognitive Search.
Previously updated : 10/03/2022 Last updated : 09/27/2023 # Full text search in Azure Cognitive Search
-This article is for developers who need a deeper understanding of how Apache Lucene full text search works in Azure Cognitive Search. For text queries, Azure Cognitive Search will seamlessly deliver expected results in most scenarios, but occasionally you might get a result that seems "off" somehow. In these situations, having a background in the four stages of Lucene query execution (query parsing, lexical analysis, document matching, scoring) can help you identify specific changes to query parameters or index configuration that will deliver the desired outcome.
+This article is for developers who need a deeper understanding of how full text search works in Azure Cognitive Search. For text queries, Azure Cognitive Search seamlessly delivers expected results in most scenarios, but occasionally you might get a result that seems "off" somehow. In these situations, having a background in the four stages of Lucene query execution (query parsing, lexical analysis, document matching, scoring) can help you identify specific changes to query parameters or index configuration that produce the desired outcome.
> [!NOTE] > Azure Cognitive Search uses [Apache Lucene](https://lucene.apache.org/) for full text search, but Lucene integration is not exhaustive. We selectively expose and extend Lucene functionality to enable the scenarios important to Azure Cognitive Search.
Query execution has four stages:
1. Document retrieval
1. Scoring
-A full text search query starts with parsing the query text to extract search terms and operators. There are two parsers so that you can choose between speed and complexity. An analysis phase is next, where individual query terms are sometimes broken down and reconstituted into new forms to cast a broader net over what could be considered as a potential match. The search engine then scans the index to find documents with matching terms and scores each match. A result set is then sorted by a relevance score assigned to each individual matching document. Those at the top of the ranked list are returned to the calling application.
+A full text search query starts with parsing the query text to extract search terms and operators. There are two parsers so that you can choose between speed and complexity. An analysis phase is next, where individual query terms are sometimes broken down and reconstituted into new forms. This step helps to cast a broader net over what could be considered as a potential match. The search engine then scans the index to find documents with matching terms and scores each match. A result set is then sorted by a relevance score assigned to each individual matching document. Those at the top of the ranked list are returned to the calling application.
The diagram below illustrates the components used to process a search request.
The query parser separates operators (such as `*` and `+` in the example) from s
+ *phrase query* for quoted terms (like ocean view)
+ *prefix query* for terms followed by a prefix operator `*` (like air-condition)
-For a full list of supported query types see [Lucene query syntax](/rest/api/searchservice/lucene-query-syntax-in-azure-search)
+For a full list of supported query types, see [Lucene query syntax](/rest/api/searchservice/lucene-query-syntax-in-azure-search).
Operators associated with a subquery determine whether the query "must be" or "should be" satisfied in order for a document to be considered a match. For example, `+"Ocean view"` is "must" due to the `+` operator.
All of these operations tend to erase differences between the text input provide
In our example, prior to analysis, the initial query tree has the term "Spacious," with an uppercase "S" and a comma that the query parser interprets as a part of the query term (a comma isn't considered a query language operator).
-When the default analyzer processes the term, it will lowercase "ocean view" and "spacious", and remove the comma character. The modified query tree will look as follows:
+When the default analyzer processes the term, it lowercases "ocean view" and "spacious" and removes the comma character. The modified query tree looks like this:
![Conceptual diagram of a boolean query with analyzed terms.][4]

### Testing analyzer behaviors
-The behavior of an analyzer can be tested using the [Analyze API](/rest/api/searchservice/test-analyzer). Provide the text you want to analyze to see what terms given analyzer will generate. For example, to see how the standard analyzer would process the text "air-condition", you can issue the following request:
+The behavior of an analyzer can be tested using the [Analyze API](/rest/api/searchservice/test-analyzer). Provide the text you want to analyze to see what terms a given analyzer generates. For example, to see how the standard analyzer would process the text "air-condition", you can issue the following request:
```json
{
    "text": "air-condition",
    "analyzer": "standard"
}
```
All indexes in Azure Cognitive Search are automatically split into multiple shar
This means a relevance score *could* be different for identical documents if they reside on different shards. Fortunately, such differences tend to disappear as the number of documents in the index grows due to more even term distribution. It’s not possible to assume on which shard any given document will be placed. However, assuming a document key doesn't change, it will always be assigned to the same shard.
-In general, document score isn't the best attribute for ordering documents if order stability is important. For example, given two documents with an identical score, there's no guarantee which one appears first in subsequent runs of the same query. Document score should only give a general sense of document relevance relative to other documents in the results set.
+In general, document score isn't the best attribute for ordering documents if order stability is important. For example, given two documents with an identical score, there's no guarantee that one appears first in subsequent runs of the same query. Document score should only give a general sense of document relevance relative to other documents in the results set.
## Conclusion
-The success of commercial search engines has raised expectations for full text search over private data. For almost any kind of search experience, we now expect the engine to understand our intent, even when terms are misspelled or incomplete. We might even expect matches based on near equivalent terms or synonyms that we never actually specified.
+The success of commercial search engines has raised expectations for full text search over private data. For almost any kind of search experience, we now expect the engine to understand our intent, even when terms are misspelled or incomplete. We might even expect matches based on near equivalent terms or synonyms that we never specified.
From a technical standpoint, full text search is highly complex, requiring sophisticated linguistic analysis and a systematic approach to processing in ways that distill, expand, and transform query terms to deliver a relevant result. Given the inherent complexities, there are many factors that can affect the outcome of a query. For this reason, investing the time to understand the mechanics of full text search offers tangible benefits when trying to work through unexpected results.
search Search Query Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-query-create.md
Title: Full-text query
+ Title: Full-text query how-to
description: Learn how to construct a query request for full text search in Azure Cognitive Search.
Last updated 09/25/2023
-# Create a full-text query in Azure Cognitive Search
+# How to create a full-text query in Azure Cognitive Search
If you're building a query for [full text search](search-lucene-query-architecture.md), this article provides steps for setting up the request. It also introduces a query structure, and explains how field attributes and linguistic analyzers can impact query outcomes.
If you're building a query for [full text search](search-lucene-query-architectu
In Azure Cognitive Search, a query is a read-only request against the docs collection of a single search index.
-A full text query is specified in a `search` parameter and consists of terms, quote-enclosed phrases, and operators. Other parameters add more definition. For example, `searchFields` scopes query execution to specific fields, `select` specifies which fields are returned in results, and `count` returns the number of matches found in the index.
+A full text query is specified in a `search` parameter and consists of terms, quote-enclosed phrases, and operators. Other parameters add more definition to the request. For example, `searchFields` scopes query execution to specific fields, `select` specifies which fields are returned in results, and `count` returns the number of matches found in the index.
The following [Search Documents REST API](/rest/api/searchservice/search-documents) call illustrates a query request using the aforementioned parameters.
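Here's a minimal sketch of such a request, using the parameters described above. The service, index, and key values are placeholders, and the field names (`HotelId`, `HotelName`, `Description`) are hypothetical examples:

```http
POST https://{{search-service-name}}.search.windows.net/indexes/{{index-name}}/docs/search?api-version={{api-version}}
Content-Type: application/json
api-key: {{query-api-key}}

{
    "search": "ocean view",
    "searchFields": "Description, HotelName",
    "select": "HotelId, HotelName, Description",
    "count": true
}
```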
Notice that you can change the REST API version if you require search behaviors
[Postman app](https://www.postman.com/downloads/) is useful for working with the REST APIs, such as [Search Documents (REST)](/rest/api/searchservice/search-documents).
-Start with [Create a search index using REST and Postman](search-get-started-rest.md) for step-by-step instructions for setting up requests.
+[Quickstart: Create a search index using REST and Postman](search-get-started-rest.md) has step-by-step instructions for setting up requests.
The following example calls the REST API for full text search:
search Search Query Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-query-overview.md
Geospatial search matches on a location's latitude and longitude coordinates for
+ Verify the incoming documents include the appropriate coordinates.
+ After indexing is complete, build a query that uses a filter and a [geo-spatial function](search-query-odata-geo-spatial-functions.md).
-For more information and an example, see [Geospatial search example](search-query-simple-examples.md#example-6-geospatial-search).
+Geospatial search uses kilometers for distance. Coordinates are specified in this format: `(longitude, latitude)`.
+
+Here's an example of a filter for geospatial search. This filter finds documents whose `Location` field contains coordinates within a 300-kilometer radius of a geography point (in this example, Washington D.C.). It returns address information in the results, and includes an optional `facets` clause for self-navigation based on location.
+
+```http
+POST https://{{searchServiceName}}.search.windows.net/indexes/hotels-vector-quickstart/docs/search?api-version=2023-07-01-Preview
+{
+ "count": true,
+ "search": "*",
+ "filter": "geo.distance(Location, geography'POINT(-77.03241 38.90166)') le 300",
+ "facets": [ "Address/StateProvince"],
+ "select": "HotelId, HotelName, Address/StreetAddress, Address/City, Address/StateProvince",
+ "top": 7
+}
+```
+
+For more information and examples, see [Geospatial search example](search-query-simple-examples.md#example-6-geospatial-search).
## Document look-up
search Vector Search How To Create Index https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/vector-search-how-to-create-index.md
Code samples in the [cognitive-search-vector-pr](https://github.com/Azure/cognit
## Prerequisites
-+ Azure Cognitive Search, in any region and on any tier. Most existing services support vector search. For a small subset of services created prior to January 2019, an index containing vector fields fails on creation. In this situation, a new service must be created.
++ Azure Cognitive Search, in any region and on any tier. Most existing services support vector search. A small subset of services created before January 2019 don't support vector search; an index containing vector fields that fails to be created or updated is an indicator. In this situation, a new service must be created.
+ Pre-existing vector embeddings in your source documents. Cognitive Search doesn't generate vectors. We recommend [Azure OpenAI embedding models](/azure/ai-services/openai/concepts/models#embeddings-models) but you can use any model for vectorization. For more information, see [Create and use embeddings for search queries and documents](vector-search-how-to-generate-embeddings.md).
Although you can add a field to an index, there's no portal (Import data wizard)
+ Name the configuration. The name must be unique within the index.
+ "hnsw" is the Approximate Nearest Neighbors (ANN) algorithm used to create the proximity graph during indexing. Currently, only Hierarchical Navigable Small World (HNSW) is supported.
- + "Bi-directional link count" default is 4. The range is 2 to 100. Lower values should return less noise in the results.
- + "efConstruction" default is 400. It's the number of nearest neighbors used during indexing.
- + "efSearch default is 500. It's the number of nearest neighbors used during search.
+ + "Bi-directional link count" default is 4. The range is 4 to 10. Lower values should return less noise in the results.
+ + "efConstruction" default is 400. The range is 100 to 1,000. It's the number of nearest neighbors used during indexing.
+ + "efSearch" default is 500. The range is 100 to 1,000. It's the number of nearest neighbors used during search.
+ "Similarity metric" should be "cosine" if you're using Azure OpenAI, otherwise use the similarity metric of the embedding model. Supported values are `cosine`, `dotProduct`, `euclidean`.

If you're familiar with HNSW parameters, you might wonder how to set the "k" number of nearest neighbors to return in the result. In Cognitive Search, that value is set on the [query request](vector-search-how-to-query.md).
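Putting these settings together, here's a minimal sketch of a vector search configuration as it might appear in a preview index definition. The configuration name is a hypothetical placeholder, and the property names assume the preview `vectorSearch` schema described in this article:

```json
"vectorSearch": {
    "algorithmConfigurations": [
        {
            "name": "my-hnsw-config",
            "kind": "hnsw",
            "hnswParameters": {
                "m": 4,
                "efConstruction": 400,
                "efSearch": 500,
                "metric": "cosine"
            }
        }
    ]
}
```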
search Vector Search How To Query https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/vector-search-how-to-query.md
Title: Query vector data in a search index
+ Title: Vector query how-to
-description: Build queries for vector-only fields and hybrid search scenarios that combine vectors with semantic and standard search syntax.
+description: Learn how to build queries for vector search.
Last updated 08/10/2023
-# Query vector data in a search index
+# How to query vector data in a search index
> [!IMPORTANT] > Vector search is in public preview under [supplemental terms of use](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). It's available through the Azure portal, preview REST API, and [beta client libraries](https://github.com/Azure/cognitive-search-vector-pr#readme).
Code samples in the [cognitive-search-vector-pr](https://github.com/Azure/cognit
## Prerequisites
-+ Azure Cognitive Search, in any region and on any tier. Most existing services support vector search. For a small subset of services created prior to January 2019, an index containing vector fields will fail on creation. In this situation, a new service must be created.
++ Azure Cognitive Search, in any region and on any tier. Most existing services support vector search. A small subset of services created before January 2019 don't support vector search; an index containing vector fields that fails to be created or updated is an indicator. In this situation, a new service must be created.
+ A search index containing vector fields. See [Add vector fields to a search index](vector-search-how-to-create-index.md).
All results are returned in plain text, including vectors. If you use Search Exp
If you aren't sure whether your search index already has vector fields, look for:
-+ A `vectorSearch` algorithm configuration embedded in the index schema.
++ A non-empty `vectorSearch` property containing algorithms and other vector-related configurations embedded in the index schema.
+ In the fields collection, fields of type `Collection(Edm.Single)` with a `dimensions` attribute and a `vectorSearchConfiguration` set to the name of the `vectorSearch` algorithm configuration used by the field.
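For reference, a vector field definition might look like the following sketch. The field name, dimension count, and configuration name are hypothetical (1536 dimensions matches the Azure OpenAI text-embedding-ada-002 model):

```json
{
    "name": "contentVector",
    "type": "Collection(Edm.Single)",
    "searchable": true,
    "retrievable": true,
    "dimensions": 1536,
    "vectorSearchConfiguration": "my-hnsw-config"
}
```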
You can also send an empty query (`search=*`) against the index. If the vector f
## Convert query input into a vector
-To query a vector field, the query itself must be a vector. To convert a text query string provided by a user into a vector representation, your application must call an embedding library that provides this capability. Use the same embedding library that you used to generate embeddings in the source documents.
+To query a vector field, the query itself must be a vector. To convert a text query string provided by a user into a vector representation, your application must call an embedding library or API endpoint that provides this capability. **Use the same embedding model that you used to generate embeddings in the source documents.**
You can find multiple instances of query string conversion in the [cognitive-search-vector-pr](https://github.com/Azure/cognitive-search-vector-pr/) repository for each of the Azure SDKs.
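As one example, if your documents were vectorized with an Azure OpenAI embedding model, a sketch of the REST call that vectorizes the query string might look like the following. The resource, deployment, and key values are hypothetical placeholders; the response contains the embedding array to pass into the vector query:

```http
POST https://{{aoai-resource-name}}.openai.azure.com/openai/deployments/{{embedding-deployment-name}}/embeddings?api-version=2023-05-15
Content-Type: application/json
api-key: {{aoai-api-key}}

{
    "input": "historic hotel within walking distance of live music"
}
```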
search Vector Search Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/vector-search-overview.md
Title: Vector search
-description: Describes concepts, scenarios, and availability of the vector search feature in Cognitive Search.
+description: Describes concepts, scenarios, and availability of the vector search feature in Azure Cognitive Search.
Previously updated : 09/21/2023 Last updated : 09/27/2023
-# Vector search within Azure Cognitive Search
+# Vector search in Azure Cognitive Search
> [!IMPORTANT] > Vector search is in public preview under [supplemental terms of use](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). It's available through the Azure portal, preview REST API, and [beta client libraries](https://github.com/Azure/cognitive-search-vector-pr#readme).
Scenarios for vector search include:
+ **Multi-lingual search**. Use a multi-lingual embeddings model to represent your document in multiple languages in a single vector space to find documents regardless of the language they are in.
-+ **Hybrid search**. Vector search is implemented at the field level, which means you can build queries that include both vector fields and searchable text fields. The queries execute in parallel and the results are merged into a single response. Optionally, add [semantic search (preview)](semantic-search-overview.md) for even more accuracy with L2 reranking using the same language models that power Bing.
++ [**Hybrid search**](hybrid-search-overview.md). Vector search is implemented at the field level, which means you can build queries that include both vector fields and searchable text fields. The queries execute in parallel and the results are merged into a single response. Optionally, add [semantic search (preview)](semantic-search-overview.md) for even more accuracy with L2 reranking using the same language models that power Bing.
+ **Filtered vector search**. A query request can include a vector query and a [filter expression](search-filters.md). Filters apply to text and numeric fields, and are useful for including or excluding search documents based on filter criteria. Although a vector field isn't filterable itself, you can set up a filterable text or numeric field. The search engine processes the filter after the vector query executes, trimming search results from the query response.
search Vector Search Ranking https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/vector-search-ranking.md
Title: Vector query execution and scoring
+ Title: Vector search scoring
-description: Explains the concepts behind vector query execution, including how matches are found in vector space and ranked in search results.
+description: Explains the concepts behind vector relevance scoring, including how matches are found in vector space and ranked in search results.
Previously updated : 08/31/2023 Last updated : 09/27/2023
-# Vector query execution and scoring in Azure Cognitive Search
+# Relevance scoring in vector search
> [!IMPORTANT] > Vector search is in public preview under [supplemental terms of use](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). It's available through the Azure portal, preview REST API, and [beta client libraries](https://github.com/Azure/cognitive-search-vector-pr#readme).
-This article is for developers who need a deeper understanding of vector query execution and ranking in Azure Cognitive Search.
+This article is for developers who need a deeper understanding of relevance scoring for vector queries in Azure Cognitive Search.
-## Vector similarity
+## Scoring algorithms used in vector search
-In a vector query, the search query is a vector, as opposed to a string in full-text queries. Documents that match the vector query are ranked using the vector similarity algorithm configured on the vector field defined in the index. A vector query specifies the `k` parameter, which determines how many nearest neighbors of the query vector should be returned in the results.
+Hierarchical Navigable Small World (HNSW) is an algorithm used for efficient [approximate nearest neighbor (ANN)](vector-search-overview.md#approximate-nearest-neighbors) search in high-dimensional spaces. It organizes data points into a hierarchical graph structure that enables fast neighbor queries by navigating through the graph while maintaining a balance between search accuracy and computational efficiency.
-> [!NOTE]
-> Full-text search queries could return fewer than the requested number of results if there are insufficient matches, but vector search always return up to `k` matches as long as there are enough documents in the index. This is because with vector search, similarity is relative to the input query vector, not absolute. Less relevant results have a worse similarity score, but they are still the "nearest" vectors if there isn't anything closer. As such, a response with no meaningful results can still return `k` results, but each result's similarity score would be low.
+HNSW has several configuration parameters that can be tuned to achieve the throughput, latency, and recall objectives for your search application. You can create multiple configurations if you need optimizations for specific scenarios, but only one configuration can be specified on each vector field.
-In a typical application, the input value within a query request would be fed into the same machine learning model that generated the embedding space for the vector index. This model would output a vector in the same embedding space. Since similar vectors are clustered close together, finding matches is equivalent to finding the nearest vectors and returning the associated documents as the search result.
+Vector search algorithms are specified in the JSON path `vectorSearch.algorithmConfigurations` in a search index, and then referenced in each vector field definition (also in the index):
-For example, if a query request is about dogs, the model maps the query into a vector that exists somewhere in the cluster of vectors representing documents about dogs. Identifying which vectors are the most similar to the query, based on a similarity metric, determines which documents are the most relevant.
+- [Create a vector index](vector-search-how-to-create-index.md)
-### Similarity metrics used to measure nearness
+Because many algorithm configuration parameters are used to initialize the vector index during index creation, they're immutable and can't be changed once the index is built. A subset of query-time parameters can be modified.
-A similarity metric measures the distance between neighboring vectors. Commonly used similarity metrics include `cosine`, `euclidean` (also known as `l2 norm`), and `dotProduct`, which are summarized in the following table.
+## How HNSW ranking works
+
+Vector queries execute against an embedding space consisting of vectors generated from the same embedding model. In a typical application, the input value within a query request is fed into the same machine learning model that generated embeddings in the vector index. The output is a vector in the same embedding space. Since similar vectors are clustered close together, finding matches is equivalent to finding the vectors that are closest to the query vector, and returning the associated documents as the search result.
+
+For example, if a query request is about hotels, the model maps the query into a vector that exists somewhere in the cluster of vectors representing documents about hotels. Identifying which vectors are the most similar to the query, based on a similarity metric, determines which documents are the most relevant.
+
+### Indexing vectors with the HNSW algorithm
+
+The goal of indexing a new vector into an HNSW graph is to add it to the graph structure in a manner that allows for efficient nearest neighbor search. The following steps summarize the process:
+
+1. Initialization: Start with an empty HNSW graph, or the existing HNSW graph if it's not a new index.
+
+1. Entry point: This is the top level of the hierarchical graph and serves as the starting point for indexing.
+
+1. Adding to the graph: Different hierarchical levels represent different granularities of the graph, with higher levels being more global, and lower levels being more granular. Each node in the graph represents a vector point.
+
+ - Each node is connected to up to `m` nearby neighbors; this is the `m` parameter.
+
+ - The number of data points that are considered as candidate connections is governed by the `efConstruction` parameter. This dynamic list forms the set of closest points in the existing graph for the algorithm to consider. Higher `efConstruction` values result in more nodes being considered, which often leads to denser local neighborhoods for each vector.
+
+ - These connections use the configured similarity `metric` to determine distance. Some connections are "long-distance" connections that connect across different hierarchical levels, creating shortcuts in the graph that enhance search efficiency.
+
+1. Graph pruning and optimization: This may be performed after indexing all vectors to improve navigability and efficiency of the HNSW graph.
+
+### Retrieving vectors with the HNSW algorithm
+
+In the HNSW algorithm, a vector query search operation is executed by navigating through this hierarchical graph structure. The following steps summarize the process:
+
+1. Initialization: The algorithm initiates the search at the top level of the hierarchical graph. This entry point contains the set of vectors that serve as starting points for search.
+
+1. Traversal: Next, it traverses the graph level by level, navigating from the top-level to lower levels, selecting candidate nodes that are closer to the query vector based on the configured distance metric, such as cosine similarity.
+
+1. Pruning: To improve efficiency, the algorithm prunes the search space by only considering nodes that are likely to contain nearest neighbors. This is achieved by maintaining a priority queue of potential candidates and updating it as the search progresses. The length of this queue is configured by the parameter `efSearch`.
+
+1. Refinement: As the algorithm moves to lower, more granular levels, HNSW considers more neighbors near the query, which allows the candidate set of vectors to be refined, improving accuracy.
+
+1. Completion: The search completes when the desired number of nearest neighbors have been identified, or when other stopping criteria are met. This desired number of nearest neighbors is governed by the query-time parameter `k`.
+
+Only fields marked as `searchable` in the index, or fields specified in `searchFields` in the query, are used for scoring. Only fields marked as `retrievable`, or fields specified in `select` in the query, are returned in search results, along with their search score.
+
+## Similarity metrics used to measure nearness
+
+A similarity metric measures the distance between neighboring vectors. Commonly used similarity metrics include `cosine`, `euclidean` (also known as `l2 norm`), and `dotProduct`, which are listed in the following table.
| Metric | Description |
|--|-|
-| `cosine` | Calculates the angle between two vectors. Cosine is the similarity metric used by [Azure OpenAI embedding models](/azure/ai-services/openai/concepts/understand-embeddings#cosine-similarity). |
+| `cosine` | Calculates the angle between two vectors. Cosine is the similarity metric used by [Azure OpenAI embedding models](/azure/ai-services/openai/concepts/understand-embeddings#cosine-similarity), so if you're using Azure OpenAI, specify `cosine` in the vector configuration.|
| `euclidean` | Calculates the Euclidean distance between two vectors, which is the l2-norm of the difference of the two vectors. |
| `dotProduct` | Calculates the product of the vectors' magnitudes and the cosine of the angle between them. |

For normalized embedding spaces, `dotProduct` is equivalent to the `cosine` similarity, but is more efficient.
-## Hybrid search
+If you're using the `cosine` metric, it's important to note that the calculated `@search.score` isn't the cosine value between the query vector and the document vectors. Instead, Cognitive Search applies transformations such that the score function is monotonically decreasing, meaning score values always decrease as the similarity becomes worse. This transformation ensures that search scores are usable for ranking purposes.
-By performing similarity searches over vector representations of your data, you can find information that's similar to your search query, even if the search terms don't match up perfectly to the indexed content. In practice, we often need to expand lexical matches with semantic matches to guarantee good recall. The notion of composing term queries with vector queries is called *hybrid search*.
+There are some nuances with similarity scores:
-In Azure Cognitive Search, embeddings are indexed alongside textual and numerical fields allowing you to issue hybrid term and vector queries and take advantage of existing functionalities like filtering, faceting, sorting, scoring profiles, and [semantic search](semantic-search-overview.md) in a single search request.
-Hybrid search combines results from both term and vector queries, which use different ranking functions such as BM25 and cosine similarity. To present these results in a single ranked list, a method of merging the ranked result lists is needed.
+- Cosine similarity is defined as the cosine of the angle between two vectors.
+- Cosine distance is defined as `1 - cosine_similarity`.
-## Reciprocal Rank Fusion (RRF) for hybrid queries
+To create a monotonically decreasing function, the `@search.score` is defined as `1 / (1 + cosine_distance)`.
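As a quick worked example of this formula: identical vectors have a cosine distance of 0, which yields the maximum score of `1 / (1 + 0) = 1`, while opposite vectors have a cosine distance of 2, which yields the minimum score of `1 / (1 + 2) ≈ 0.333`. That's the source of the 0.333 - 1.00 cosine range shown in the table later in this article.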
-For hybrid search scoring, Cognitive Search uses Reciprocal Rank Fusion (RRF). In information retrieval, RRF combines the results of different search methods to produce a single, more accurate and relevant result. Here, a search method refers to methods such as vector search and full-text search. RRF is based on the concept of reciprocal rank, which is the inverse of the rank of the first relevant document in a list of search results. 
+Developers who need a cosine value instead of the synthetic value can use a formula to convert the search score back to cosine similarity:
-At a basic level, RRF works by taking the search results from multiple methods, assigning a reciprocal rank score to each document in the results, and then combining these scores to create a new ranking. The main idea behind this method is that documents appearing in the top positions across multiple search methods are likely to be more relevant and should be ranked higher in the combined result.
+```csharp
+// @search.score is defined as 1 / (1 + cosineDistance), so invert that mapping,
+// then convert cosine distance back to cosine similarity (1 - cosineDistance).
+double ScoreToSimilarity(double score)
+{
+    double cosineDistance = (1 - score) / score;
+    return -cosineDistance + 1;
+}
+```
-Here's a simple explanation of the RRF process:
+Having the original cosine value can be useful in custom solutions that set thresholds to trim low-quality results.
-1. Obtain search results from multiple methods.
+## Scores in vector search results
- In the context of Azure Cognitive Search, this is vector search and full-text search, with or without semantic ranking. We search for a specific query using both methods and get parallel ranked lists of documents as results. Each method has a ranking methodology. With BM25 ranking on full-text search, rank is by **`@search.score`**. With semantic reranking over BM25 ranked results, rank is by **`@search.rerankerScore`**. With similarity search for vector queries, the similarity score is also articulated as **`@search.score`** within its result set.
+Whenever results are ranked, the **`@search.score`** property contains the value used to order the results.
-1. Assign reciprocal rank scores for result in each of the ranked lists. A new **`@search.score`** property is generated by the RFF algorithm for each match in each result set. For each document in the search results, we assign a reciprocal rank score based on its position in the list. The score is calculated as `1/(rank + k)`, where `rank` is the position of the document in the list, and `k` is a constant, which was experimentally observed to perform best if it's set to a small value like 60.
+The following table identifies the scoring property returned on each match, the algorithm that produces it, and its range.
-1. Combine scores. For each document, we sum the reciprocal rank scores obtained from each search system. This gives us a combined score for each document. 
+| Search method | Score property | Scoring algorithm | Range |
+|--|--|--|--|
+| vector search | `@search.score` | HNSW algorithm, using the similarity metric specified in the HNSW configuration. | 0.333 - 1.00 (Cosine) |
-1. Rank documents based on combined scores. Finally, we sort the documents based on their combined scores, and the resulting list is the fused ranking. A
+## Number of ranked results in a vector query response
-Whenever results are ranked, **`@search.score`** property contains the value used to order the results. Scores are generated by ranking algorithms that vary for each method and aren't comparable.
+A vector query specifies the `k` parameter, which determines how many nearest neighbors of the query vector should be found in vector space and returned in the results. If `k` is larger than the number of documents in the index, then the number of documents determines the upper limit of what can be returned.
-| Search method | @search.score algorithm |
-||-|
-| full-text search | **`@search.score`** is produced by the BM25 algorithm and its values are unbounded. |
-| vector similarity search | **`@search.score`** is produced by the HNSW algorithm, plus the similarity metric specified in the configuration. |
-| hybrid search | **`@search.score`** is produced by the RFF algorithm that merges results from parallel query execution, such as vector and full-text search. |
-| hybrid search with semantic reranking | **`@search.score`** is the RRF score from your initial retrieval, but you'll also see the **@search.rerankerScore** which is from the reranking model powered by Bing, which ranges from 0-4.
-
-By default, if you aren't using pagination, Cognitive Search returns the top 50 highest ranking matches for full text search, and it returns `k` matches for vector search. In a hybrid query, the top 50 highest ranked matches of the unified result set are returned. You can use `$top`, `$skip`, and `$next` for paginated results. For more information, see [How to work with search results](search-pagination-page-layout.md).
+The search engine always returns `k` matches, as long as there are enough documents in the index. If you're familiar with full text search, you know to expect zero results if the index doesn't contain a term or phrase. However, in vector search, similarity is relative to the input query vector, not absolute. It's possible to get positive results for a nonsensical or off-topic query. Less relevant results have a worse similarity score, but they're still the "nearest" vectors if there isn't anything closer. As such, a response with no meaningful results can still return `k` results, but each result's similarity score would be low. A [hybrid approach](hybrid-search-overview.md) that includes full text search can mitigate this problem.
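For illustration, here's a minimal sketch of a vector query that requests the five nearest neighbors. The field name is a hypothetical placeholder, the syntax assumes the preview `vectors` array described in the vector query how-to, and the `value` array is truncated; a real request supplies a full embedding that matches the field's `dimensions`:

```http
POST https://{{search-service-name}}.search.windows.net/indexes/{{index-name}}/docs/search?api-version={{api-version}}
Content-Type: application/json
api-key: {{query-api-key}}

{
    "vectors": [
        {
            "value": [0.011, -0.023, 0.075],
            "fields": "contentVector",
            "k": 5
        }
    ],
    "select": "HotelId, HotelName"
}
```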
## Next steps

+ [Try the quickstart](search-get-started-vector.md)
+ [Learn more about embeddings](vector-search-how-to-generate-embeddings.md)
+ [Learn more about data chunking](vector-search-how-to-chunk-documents.md)
security Shared Responsibility https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/shared-responsibility.md
na Previously updated : 12/05/2022 Last updated : 09/28/2023 # Shared responsibility in the cloud
-As you consider and evaluate public cloud services, it’s critical to understand the shared responsibility model and which security tasks are handled by the cloud provider and which tasks are handled by you. The workload responsibilities vary depending on whether the workload is hosted on Software as a Service (SaaS), Platform as a Service (PaaS), Infrastructure as a Service (IaaS), or in an on-premises datacenter
+As you consider and evaluate public cloud services, it's critical to understand the shared responsibility model and which security tasks the cloud provider handles and which tasks you handle. The workload responsibilities vary depending on whether the workload is hosted on Software as a Service (SaaS), Platform as a Service (PaaS), Infrastructure as a Service (IaaS), or in an on-premises datacenter.
## Division of responsibility

In an on-premises datacenter, you own the whole stack. As you move to the cloud, some responsibilities transfer to Microsoft. The following diagram illustrates the areas of responsibility between you and Microsoft, according to the type of deployment of your stack.

:::image type="content" source="media/shared-responsibility/shared-responsibility.svg" alt-text="Diagram showing responsibility zones." border="false":::
-For all cloud deployment types, you own your data and identities. You are responsible for protecting the security of your data and identities, on-premises resources, and the cloud components you control (which varies by service type).
+For all cloud deployment types, you own your data and identities. You're responsible for protecting the security of your data and identities, on-premises resources, and the cloud components you control. Cloud components you control vary by service type.
-Regardless of the type of deployment, the following responsibilities are always retained by you:
+Regardless of the type of deployment, you always retain the following responsibilities:
- Data
- Endpoints
Regardless of the type of deployment, the following responsibilities are always
## Cloud security advantages

The cloud offers significant advantages for solving long-standing information security challenges. In an on-premises environment, organizations likely have unmet responsibilities and limited resources available to invest in security, which creates an environment where attackers are able to exploit vulnerabilities at all layers.
-The following diagram shows a traditional approach where many security responsibilities are unmet due to limited resources. In the cloud-enabled approach, you are able to shift day to day security responsibilities to your cloud provider and reallocate your resources.
+The following diagram shows a traditional approach where many security responsibilities are unmet due to limited resources. In the cloud-enabled approach, you're able to shift day-to-day security responsibilities to your cloud provider and reallocate your resources.
:::image type="content" source="media/shared-responsibility/cloud-enabled-security.svg" alt-text="Diagram showing security advantages of cloud era." border="false":::
-In the cloud-enabled approach, you are also able to leverage cloud-based security capabilities for more effectiveness and use cloud intelligence to improve your threat detection and response time. By shifting responsibilities to the cloud provider, organizations can get more security coverage, which enables them to reallocate security resources and budget to other business priorities.
+In the cloud-enabled approach, you're also able to apply cloud-based security capabilities for more effectiveness and use cloud intelligence to improve your threat detection and response time. By shifting responsibilities to the cloud provider, organizations can get more security coverage, which enables them to reallocate security resources and budget to other business priorities.
-## Next steps
+## Next step
Learn more about shared responsibility and strategies to improve your security posture in the Well-Architected Framework's [overview of the security pillar](/azure/architecture/framework/security/overview).
-For more information on the division of responsibility between you and Microsoft in a SaaS, PaaS, and IaaS deployment, see [Shared responsibilities for cloud computing](https://azure.microsoft.com/resources/shared-responsibility-for-cloud-computing/).
sentinel Connect Mdti Data Connector https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/connect-mdti-data-connector.md
Bring high fidelity indicators of compromise (IOC) generated by Microsoft Defend
> ## Prerequisites-- In order to install, update and delete standalone content or solutions in content hub, you need the **Template Spec Contributor** role at the resource group level. See [Azure RBAC built in roles](../role-based-access-control/built-in-roles.md#template-spec-contributor) for details on this role.
+- In order to install, update, and delete standalone content or solutions in the content hub, you need the **Microsoft Sentinel Contributor** role at the resource group level.
- To configure this data connector, you must have read and write permissions to the Microsoft Sentinel workspace. ## Install the Threat Intelligence solution in Microsoft Sentinel
sentinel Connect Rest Api Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/connect-rest-api-template.md
It may take up to 20 minutes before your logs start to appear in Log Analytics.
## Next steps
-In this document, you learned how to connect external data sources to the Microsoft Sentinel Data Collector API. To take full advantage of the capabilities built in to these data connectors, select the **Next steps** tab on the data connector page. There you'll find some ready-made sample queries, workbooks, and analytics rule templates so you can get started finding useful information.
+In this document, you learned how to connect external data sources to the Microsoft Sentinel Data Collector API.
To learn more about Microsoft Sentinel, see the following articles:
sentinel Connect Threat Intelligence Taxii https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/connect-threat-intelligence-taxii.md
To import STIX formatted threat indicators to Microsoft Sentinel from a TAXII se
Learn more about [Threat Intelligence](understand-threat-intelligence.md) in Microsoft Sentinel, and specifically about the [TAXII threat intelligence feeds](threat-intelligence-integration.md#taxii-threat-intelligence-feeds) that can be integrated with Microsoft Sentinel. ## Prerequisites -- In order to install, update and delete standalone content or solutions in content hub, you need the **Template Spec Contributor** role at the resource group level. See [Azure RBAC built in roles](../role-based-access-control/built-in-roles.md#template-spec-contributor) for details on this role.
+- In order to install, update, and delete standalone content or solutions in the content hub, you need the **Microsoft Sentinel Contributor** role at the resource group level.
- You must have read and write permissions to the Microsoft Sentinel workspace to store your threat indicators.
- You must have a TAXII 2.0 or TAXII 2.1 **API Root URI** and **Collection ID**.
sentinel Connect Threat Intelligence Tip https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/connect-threat-intelligence-tip.md
Learn more about [Threat Intelligence](understand-threat-intelligence.md) in Mic
## Prerequisites -- In order to install, update and delete standalone content or solutions in content hub, you need the **Template Spec Contributor** role at the resource group level. See [Azure RBAC built in roles](../role-based-access-control/built-in-roles.md#template-spec-contributor) for details on this role.
+- In order to install, update, and delete standalone content or solutions in the content hub, you need the **Microsoft Sentinel Contributor** role at the resource group level.
- You must have either the **Global administrator** or **Security administrator** Azure AD roles in order to grant permissions to your TIP product or to any other custom application that uses direct integration with the Microsoft Graph Security tiIndicators API.
- You must have read and write permissions to the Microsoft Sentinel workspace to store your threat indicators.
sentinel Connect Threat Intelligence Upload Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/connect-threat-intelligence-upload-api.md
Learn more about [Threat Intelligence](understand-threat-intelligence.md) in Mic
**See also**: [Connect Microsoft Sentinel to STIX/TAXII threat intelligence feeds](connect-threat-intelligence-taxii.md) ## Prerequisites -- In order to install, update and delete standalone content or solutions in content hub, you need the **Template Spec Contributor** role at the resource group level. See [Azure RBAC built in roles](../role-based-access-control/built-in-roles.md#template-spec-contributor) for details on this role.
+- In order to install, update, and delete standalone content or solutions in the content hub, you need the **Microsoft Sentinel Contributor** role at the resource group level.
- You must have read and write permissions to the Microsoft Sentinel workspace to store your threat indicators.
- You must be able to register an Azure Active Directory (Azure AD) application.
- The Azure AD application must be granted the Microsoft Sentinel contributor role at the workspace level.
sentinel Quickstart Onboard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/quickstart-onboard.md
Microsoft Sentinel comes with many data connectors for Microsoft products such a
- To enable Microsoft Sentinel, you need **contributor** permissions to the subscription in which the Microsoft Sentinel workspace resides.
- - To use Microsoft Sentinel, you need either **contributor** or **reader** permissions on the resource group that the workspace belongs to.
- - To install or manage solutions in the content hub, you need the **Template Spec Contributor** role on the resource group that the workspace belongs to.
+ - To use Microsoft Sentinel, you need either **Microsoft Sentinel Contributor** or **Microsoft Sentinel Reader** permissions on the resource group that the workspace belongs to.
+ - To install or manage solutions in the content hub, you need the **Microsoft Sentinel Contributor** role on the resource group that the workspace belongs to.
- **Microsoft Sentinel is a paid service**. Review the [pricing options](https://go.microsoft.com/fwlink/?linkid=2104058) and the [Microsoft Sentinel pricing page](https://azure.microsoft.com/pricing/details/azure-sentinel/).
sentinel Roles https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/roles.md
Title: Roles and permissions in Microsoft Sentinel
description: Learn how Microsoft Sentinel assigns permissions to users using Azure role-based access control, and identify the allowed actions for each role. Previously updated : 06/06/2023 Last updated : 09/29/2023
Use Azure RBAC to create and assign roles within your security operations team t
- [**Microsoft Sentinel Responder**](../role-based-access-control/built-in-roles.md#microsoft-sentinel-responder) can, in addition to the above, manage incidents (assign, dismiss, etc.).
-- [**Microsoft Sentinel Contributor**](../role-based-access-control/built-in-roles.md#microsoft-sentinel-contributor) can, in addition to the above, create and edit workbooks, analytics rules, and other Microsoft Sentinel resources.
+- [**Microsoft Sentinel Contributor**](../role-based-access-control/built-in-roles.md#microsoft-sentinel-contributor) can, in addition to the above, install and update solutions from the content hub, and create and edit workbooks, analytics rules, and other Microsoft Sentinel resources.
- [**Microsoft Sentinel Playbook Operator**](../role-based-access-control/built-in-roles.md#microsoft-sentinel-playbook-operator) can list, view, and manually run playbooks.
Users with particular job requirements may need to be assigned other roles or sp
- **Install and manage out-of-the-box content**
- Find packaged solutions for end-to-end products or standalone content from the content hub in Microsoft Sentinel. To install and manage content from the content hub, assign the [**Template Spec Contributor**](../role-based-access-control/built-in-roles.md#template-spec-contributor) role at the resource group level.
+ Find packaged solutions for end-to-end products or standalone content from the content hub in Microsoft Sentinel. To install and manage content from the content hub, assign the **Microsoft Sentinel Contributor** role at the resource group level. For some solutions, the [**Template Spec Contributor**](../role-based-access-control/built-in-roles.md#template-spec-contributor) role is still required.
- **Automate responses to threats with playbooks**
This table summarizes the Microsoft Sentinel roles and their allowed actions in
|||||||--|
| Microsoft Sentinel Reader | -- | -- | --[*](#workbooks) | -- | &#10003; | --|
| Microsoft Sentinel Responder | -- | -- | --[*](#workbooks) | &#10003; | &#10003; | --|
-| Microsoft Sentinel Contributor | -- | -- | &#10003; | &#10003; | &#10003; | --|
+| Microsoft Sentinel Contributor | -- | -- | &#10003; | &#10003; | &#10003; | &#10003;|
| Microsoft Sentinel Playbook Operator | &#10003; | -- | -- | -- | -- | --|
| Logic App Contributor | &#10003; | &#10003; | -- | -- | -- |-- |
-| Template Spec Contributor | -- | -- | -- | -- | -- |&#10003; |
+| Template Spec Contributor | -- | -- | -- | -- | -- |&#10003;[**](#content-hub) |
<a name=workbooks></a>* Users with these roles can create and delete workbooks with the [Workbook Contributor](../role-based-access-control/built-in-roles.md#workbook-contributor) role. Learn about [Other roles and permissions](#other-roles-and-permissions).
+<a name=content-hub></a>** For some edge cases, the Template Spec Contributor role is still required, in addition to Microsoft Sentinel Contributor, to install and manage content from the content hub.
+ Review the [role recommendations](#role-and-permissions-recommendations) for which roles to assign to which users in your SOC.

## Custom roles and advanced Azure RBAC
After understanding how roles and permissions work in Microsoft Sentinel, you ca
| | | | |
| **Security analysts** | [Microsoft Sentinel Responder](../role-based-access-control/built-in-roles.md#microsoft-sentinel-responder) | Microsoft Sentinel's resource group | View data, incidents, workbooks, and other Microsoft Sentinel resources. <br><br>Manage incidents, such as assigning or dismissing incidents. |
| | [Microsoft Sentinel Playbook Operator](../role-based-access-control/built-in-roles.md#microsoft-sentinel-playbook-operator) | Microsoft Sentinel's resource group, or the resource group where your playbooks are stored | Attach playbooks to analytics and automation rules. <br>Run playbooks. |
-|**Security engineers** | [Microsoft Sentinel Contributor](../role-based-access-control/built-in-roles.md#microsoft-sentinel-contributor) |Microsoft Sentinel's resource group | View data, incidents, workbooks, and other Microsoft Sentinel resources. <br><br>Manage incidents, such as assigning or dismissing incidents. <br><br>Create and edit workbooks, analytics rules, and other Microsoft Sentinel resources. |
+|**Security engineers** | [Microsoft Sentinel Contributor](../role-based-access-control/built-in-roles.md#microsoft-sentinel-contributor) |Microsoft Sentinel's resource group | View data, incidents, workbooks, and other Microsoft Sentinel resources. <br><br>Manage incidents, such as assigning or dismissing incidents. <br><br>Create and edit workbooks, analytics rules, and other Microsoft Sentinel resources.<br><br>Install and update solutions from the content hub. |
| | [Logic Apps Contributor](../role-based-access-control/built-in-roles.md#logic-app-contributor) | Microsoft Sentinel's resource group, or the resource group where your playbooks are stored | Attach playbooks to analytics and automation rules. <br>Run and modify playbooks. |
| | [Template Spec Contributor](../role-based-access-control/built-in-roles.md#template-spec-contributor) | Microsoft Sentinel's resource group | Install and manage content from the content hub. |
| **Service Principal** | [Microsoft Sentinel Contributor](../role-based-access-control/built-in-roles.md#microsoft-sentinel-contributor) | Microsoft Sentinel's resource group | Automated configuration for management tasks |
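To illustrate the recommendations in the table above, the two roles suggested for a security analyst could be granted as follows. This is a sketch, not part of the source article; the analyst UPN and resource group name are hypothetical:

```powershell
# Grant a security analyst the two roles recommended in the table above.
# The UPN and resource group name are placeholders.
$analyst = "analyst@contoso.com"
$rgName  = "sentinel-rg"

foreach ($role in "Microsoft Sentinel Responder", "Microsoft Sentinel Playbook Operator") {
    New-AzRoleAssignment -SignInName $analyst `
        -RoleDefinitionName $role `
        -ResourceGroupName $rgName
}
```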
sentinel Sentinel Solutions Deploy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sentinel-solutions-deploy.md
Title: Discover and deploy Microsoft Sentinel out-of-the-box content from Conten
description: Learn how to find and deploy Sentinel packaged solutions containing data connectors, analytics rules, hunting queries, workbooks, and other content. Previously updated : 06/22/2023 Last updated : 09/29/2023
If you're a partner who wants to create your own solution, see the [Microsoft Se
## Prerequisites
-In order to install, update and delete standalone content or solutions in content hub, you need the **Template Spec Contributor** role at the resource group level. See [Azure RBAC built in roles](../role-based-access-control/built-in-roles.md#template-spec-contributor) for details on this role.
+To install, update, and delete standalone content or solutions in the content hub, you need the **Microsoft Sentinel Contributor** role at the resource group level. For some edge cases, the **Template Spec Contributor** role is still required. See [Azure RBAC built-in roles](../role-based-access-control/built-in-roles.md#template-spec-contributor) for details on this role.
These requirements are in addition to the Microsoft Sentinel-specific roles. For more information about other roles and permissions supported for Microsoft Sentinel, see [Permissions in Microsoft Sentinel](roles.md).
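One way to confirm your current assignments before deploying a solution is to query Azure RBAC directly. A sketch assuming the Az.Resources module; the resource group name is a placeholder:

```powershell
# Check whether the signed-in user already holds a role that permits
# installing content hub solutions. Resource group name is a placeholder.
$rgName = "sentinel-rg"
$me     = (Get-AzContext).Account.Id

Get-AzRoleAssignment -ResourceGroupName $rgName -SignInName $me |
    Where-Object { $_.RoleDefinitionName -in
        "Microsoft Sentinel Contributor", "Template Spec Contributor" } |
    Select-Object RoleDefinitionName, Scope
```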
service-bus-messaging Monitor Service Bus Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/monitor-service-bus-reference.md
AzureDiagnostics;
"Action": "Accept Connection", "Reason": "IP is accepted by IPAddress filter.", "Count": 1,
- "ResourceId": "/SUBSCRIPTIONS/<AZURE SUBSCRPTION ID>/RESOURCEGROUPS/<RESOURCE GROUP NAME>/PROVIDERS/MICROSOFT.SERVICEBUS/NAMESPACES/<SERVICE BUS NAMESPACE NAME>",
+ "ResourceId": "/SUBSCRIPTIONS/<AZURE SUBSCRIPTION ID>/RESOURCEGROUPS/<RESOURCE GROUP NAME>/PROVIDERS/MICROSOFT.SERVICEBUS/NAMESPACES/<SERVICE BUS NAMESPACE NAME>",
"Category": "ServiceBusVNetConnectionEvent" } ```
Resource specific table entry:
"Action": "Accept Connection", "Message": "IP is accepted by IPAddress filter.", "Count": 1,
- "ResourceId": "/SUBSCRIPTIONS/<AZURE SUBSCRPTION ID>/RESOURCEGROUPS/<RESOURCE GROUP NAME>/PROVIDERS/MICROSOFT.SERVICEBUS/NAMESPACES/<SERVICE BUS NAMESPACE NAME>",
+ "ResourceId": "/SUBSCRIPTIONS/<AZURE SUBSCRIPTION ID>/RESOURCEGROUPS/<RESOURCE GROUP NAME>/PROVIDERS/MICROSOFT.SERVICEBUS/NAMESPACES/<SERVICE BUS NAMESPACE NAME>",
"Provider" : "SERVICEBUS", "Type": "AZMSVNetConnectionEvents" }
site-recovery Site Recovery Runbook Automation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/site-recovery-runbook-automation.md
To deploy sample scripts to your Automation account, select the **Deploy to Azur
[![Deploy to Azure](https://azurecomcdn.azureedge.net/mediahandler/acomblog/media/Default/blog/c4803408-340e-49e3-9a1f-0ed3f689813d.png)](https://aka.ms/asr-automationrunbooks-deploy)
-This video provides another example. It demonstrates how to recover a two-tier WordPress application to Azure:
-
## Next steps

-- Learn about an [Azure Automation Run As account](../automation/manage-runas-account.md)
-- Review [Azure Automation sample scripts](https://gallery.technet.microsoft.com/scriptcenter/site/search?f%5B0%5D.Type=User&f%5B0%5D.Value=SC%20Automation%20Product%20Team&f%5B0%5D.Text=SC%20Automation%20Product%20Team).
+- Learn about:
+ - [Azure Automation Run As account](../automation/manage-runas-account.md).
+ - [Running failovers](site-recovery-failover.md)
+- Review:
+ - [Azure Automation sample scripts](https://gallery.technet.microsoft.com/scriptcenter/site/search?f%5B0%5D.Type=User&f%5B0%5D.Value=SC%20Automation%20Product%20Team&f%5B0%5D.Text=SC%20Automation%20Product%20Team).
+ - [A few tasks you might want to run during an Azure Site Recovery DR](https://github.com/WernerRall147/RallTheory/tree/main/AzureSiteRecoveryDRRunbooks).
-- Also Review [A few tasks you might want to run during an Azure Site Recovery DR](https://github.com/WernerRall147/RallTheory/tree/main/AzureSiteRecoveryDRRunbooks)
-- [Learn more](site-recovery-failover.md) about running failovers.
storage Anonymous Read Access Configure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/anonymous-read-access-configure.md
Title: Configure anonymous public read access for containers and blobs
+ Title: Configure anonymous read access for containers and blobs
-description: Learn how to allow or disallow anonymous access to blob data for the storage account. Set the container public access setting to make containers and blobs available for anonymous access.
+description: Learn how to allow or disallow anonymous access to blob data for the storage account. Set the container's anonymous access setting to make containers and blobs available for anonymous access.
Previously updated : 11/09/2022 Last updated : 09/12/2023 ms.devlang: powershell, azurecli
-# Configure anonymous public read access for containers and blobs
+# Configure anonymous read access for containers and blobs
-Azure Storage supports optional anonymous public read access for containers and blobs. By default, anonymous access to your data is never permitted. Unless you explicitly enable anonymous access, all requests to a container and its blobs must be authorized. When you configure a container's public access level setting to permit anonymous access, clients can read data in that container without authorizing the request.
+Azure Storage supports optional anonymous read access for containers and blobs. By default, anonymous access to your data is never permitted. Unless you explicitly enable anonymous access, all requests to a container and its blobs must be authorized. When you configure a container's access level setting to permit anonymous access, clients can read data in that container without authorizing the request.
> [!WARNING]
-> When a container is configured for public access, any client can read data in that container. Public access presents a potential security risk, so if your scenario does not require it, we recommend that you disallow it for the storage account.
+> When a container is configured for anonymous access, any client can read data in that container. Anonymous access presents a potential security risk, so if your scenario does not require it, we recommend that you remediate anonymous access for the storage account.
-This article describes how to configure anonymous public read access for a container and its blobs. For information about how to remediate anonymous access for optimal security, see one of these articles:
+This article describes how to configure anonymous read access for a container and its blobs. For information about how to remediate anonymous access for optimal security, see one of these articles:
-- [Remediate anonymous public read access to blob data (Azure Resource Manager deployments)](anonymous-read-access-prevent.md)
-- [Remediate anonymous public read access to blob data (classic deployments)](anonymous-read-access-prevent-classic.md)
+- [Remediate anonymous read access to blob data (Azure Resource Manager deployments)](anonymous-read-access-prevent.md)
+- [Remediate anonymous read access to blob data (classic deployments)](anonymous-read-access-prevent-classic.md)
-## About anonymous public read access
+## About anonymous read access
-Public access to your data is always prohibited by default. There are two separate settings that affect public access:
+Anonymous access to your data is always prohibited by default. There are two separate settings that affect anonymous access:
-1. **Allow public access for the storage account.** By default, an Azure Resource Manager storage account allows a user with the appropriate permissions to enable public access to a container. Blob data is not available for public access unless the user takes the additional step to explicitly configure the container's public access setting.
-1. **Configure the container's public access setting.** By default, a container's public access setting is disabled, meaning that authorization is required for every request to the container or its data. A user with the appropriate permissions can modify a container's public access setting to enable anonymous access only if anonymous access is allowed for the storage account.
+1. **Anonymous access setting for the storage account.** An Azure Resource Manager storage account offers a setting to allow or disallow anonymous access for the account. Microsoft recommends disallowing anonymous access for your storage accounts for optimal security.
-The following table summarizes how both settings together affect public access for a container.
+ When anonymous access is permitted at the account level, blob data is not available for anonymous read access unless the user takes the additional step to explicitly configure the container's anonymous access setting.
-| | Public access level for the container is set to Private (default setting) | Public access level for the container is set to Container | Public access level for the container is set to Blob |
+1. **Configure the container's anonymous access setting.** By default, a container's anonymous access setting is disabled, meaning that authorization is required for every request to the container or its data. A user with the appropriate permissions can modify a container's anonymous access setting to enable anonymous access only if anonymous access is allowed for the storage account.
+
+The following table summarizes how the two settings together affect anonymous access for a container.
+
+| | Anonymous access level for the container is set to Private (default setting) | Anonymous access level for the container is set to Container | Anonymous access level for the container is set to Blob |
|--|--|--|--|
-| **Public access is disallowed for the storage account** | No public access to any container in the storage account. | No public access to any container in the storage account. The storage account setting overrides the container setting. | No public access to any container in the storage account. The storage account setting overrides the container setting. |
-| **Public access is allowed for the storage account (default setting)** | No public access to this container (default configuration). | Public access is permitted to this container and its blobs. | Public access is permitted to blobs in this container, but not to the container itself. |
+| **Anonymous access is disallowed for the storage account** | No anonymous access to any container in the storage account. | No anonymous access to any container in the storage account. The storage account setting overrides the container setting. | No anonymous access to any container in the storage account. The storage account setting overrides the container setting. |
+| **Anonymous access is allowed for the storage account** | No anonymous access to this container (default configuration). | Anonymous access is permitted to this container and its blobs. | Anonymous access is permitted to blobs in this container, but not to the container itself. |
-When anonymous public access is permitted for a storage account and configured for a specific container, then a request to read a blob in that container that is passed without an *Authorization* header is accepted by the service, and the blob's data is returned in the response.
+When anonymous access is permitted for a storage account and configured for a specific container, a request to read a blob in that container that is made without an *Authorization* header is accepted by the service, and the blob's data is returned in the response.
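To see this behavior directly, an unauthenticated GET against a blob URL can be issued from any client. The following is a sketch only; the account, container, and blob names in the URL are placeholders:

```powershell
# Anonymous read: no credentials or Authorization header are supplied.
# Replace the placeholder values in brackets with your own values.
$uri = "https://<storage-account>.blob.core.windows.net/<container>/<blob>"

try {
    $response = Invoke-WebRequest -Uri $uri -Method Get
    Write-Output "Anonymous read succeeded with status $($response.StatusCode)."
}
catch {
    # When anonymous access is not permitted, the service rejects the
    # request (for example, with a 404 or 403 response).
    Write-Output "Anonymous read failed: $($_.Exception.Message)"
}
```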
-## Allow or disallow public read access for a storage account
+## Allow or disallow anonymous read access for a storage account
-By default, a storage account is configured to allow a user with the appropriate permissions to enable public access to a container. When public access is allowed, a user with the appropriate permissions can modify a container's public access setting to enable anonymous public access to the data in that container. Blob data is never available for public access unless the user takes the additional step to explicitly configure the container's public access setting.
+When anonymous access is allowed for a storage account, a user with the appropriate permissions can modify a container's anonymous access setting to enable anonymous access to the data in that container. Blob data is never available for anonymous access unless the user takes the additional step to explicitly configure the container's anonymous access setting.
-Keep in mind that public access to a container is always turned off by default and must be explicitly configured to permit anonymous requests. Regardless of the setting on the storage account, your data will never be available for public access unless a user with appropriate permissions takes this additional step to enable public access on the container.
+Keep in mind that anonymous access to a container is always turned off by default and must be explicitly configured to permit anonymous requests. Regardless of the setting on the storage account, your data will never be available for anonymous access unless a user with appropriate permissions takes this additional step to enable anonymous access on the container.
-Disallowing public access for the storage account overrides the public access settings for all containers in that storage account, preventing anonymous access to blob data in that account. When public access is disallowed for the account, it is not possible to configure the public access setting for a container to permit anonymous access, and any future anonymous requests to that account will fail. Before changing this setting, be sure to understand the impact on client applications that may be accessing data in your storage account anonymously. For more information, see [Prevent anonymous public read access to containers and blobs](anonymous-read-access-prevent.md).
+Disallowing anonymous access for the storage account overrides the access settings for all containers in that storage account, preventing anonymous access to blob data in that account. When anonymous access is disallowed for the account, it is not possible to configure the access setting for a container to permit anonymous access, and any future anonymous requests to that account will fail. Before changing this setting, be sure to understand the impact on client applications that may be accessing data in your storage account anonymously. For more information, see [Prevent anonymous read access to containers and blobs](anonymous-read-access-prevent.md).
> [!IMPORTANT]
-> After anonymous public access is disallowed for a storage account, clients that use the anonymous bearer challenge will find that Azure Storage returns a 403 error (Forbidden) rather than a 401 error (Unauthorized). We recommend that you make all containers private to mitigate this issue. For more information on modifying the public access setting for containers, see [Set the public access level for a container](anonymous-read-access-configure.md#set-the-public-access-level-for-a-container).
+> After anonymous access is disallowed for a storage account, clients that use the anonymous bearer challenge will find that Azure Storage returns a 403 error (Forbidden) rather than a 401 error (Unauthorized). We recommend that you make all containers private to mitigate this issue. For more information on modifying the anonymous access setting for containers, see [Set the access level for a container](anonymous-read-access-configure.md#set-the-anonymous-access-level-for-a-container).
-Allowing or disallowing blob public access requires version 2019-04-01 or later of the Azure Storage resource provider. For more information, see [Azure Storage Resource Provider REST API](/rest/api/storagerp/).
+Allowing or disallowing anonymous access requires version 2019-04-01 or later of the Azure Storage resource provider. For more information, see [Azure Storage Resource Provider REST API](/rest/api/storagerp/).
-### Permissions for disallowing public access
+### Permissions for disallowing anonymous access
To set the **AllowBlobPublicAccess** property for the storage account, a user must have permissions to create and manage storage accounts. Azure role-based access control (Azure RBAC) roles that provide these permissions include the **Microsoft.Storage/storageAccounts/write** action. Built-in roles with this action include:
To set the **AllowBlobPublicAccess** property for the storage account, a user mu
- The Azure Resource Manager [Contributor](../../role-based-access-control/built-in-roles.md#contributor) role
- The [Storage Account Contributor](../../role-based-access-control/built-in-roles.md#storage-account-contributor) role
-Role assignments must be scoped to the level of the storage account or higher to permit a user to disallow public access for the storage account. For more information about role scope, see [Understand scope for Azure RBAC](../../role-based-access-control/scope-overview.md).
+Role assignments must be scoped to the level of the storage account or higher to permit a user to disallow anonymous access for the storage account. For more information about role scope, see [Understand scope for Azure RBAC](../../role-based-access-control/scope-overview.md).
Be careful to restrict assignment of these roles only to those administrative users who require the ability to create a storage account or update its properties. Use the principle of least privilege to ensure that users have the fewest permissions that they need to accomplish their tasks. For more information about managing access with Azure RBAC, see [Best practices for Azure RBAC](../../role-based-access-control/best-practices.md).
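As an illustration of the scoping guidance above, an assignment limited to a single storage account might look like the following sketch; the UPN and the bracketed scope segments are placeholders:

```powershell
# Assign Storage Account Contributor at the scope of one storage account.
# The UPN and the bracketed scope segments are placeholders.
$upn   = "storage-admin@contoso.com"
$scope = "/subscriptions/<subscription-id>/resourceGroups/<resource-group>" +
         "/providers/Microsoft.Storage/storageAccounts/<storage-account>"

New-AzRoleAssignment -SignInName $upn `
    -RoleDefinitionName "Storage Account Contributor" `
    -Scope $scope
```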
The **Microsoft.Storage/storageAccounts/listkeys/action** itself grants data acc
### Set the storage account's AllowBlobPublicAccess property
-To allow or disallow public access for a storage account, configure the account's **AllowBlobPublicAccess** property. This property is available for all storage accounts that are created with the Azure Resource Manager deployment model. For more information, see [Storage account overview](../common/storage-account-overview.md).
-
-The **AllowBlobPublicAccess** property is not set for a storage account by default and does not return a value until you explicitly set it. The storage account permits public access when the property value is either **null** or **true**.
+To allow or disallow anonymous access for a storage account, set the account's **AllowBlobPublicAccess** property. This property is available for all storage accounts that are created with the Azure Resource Manager deployment model. For more information, see [Storage account overview](../common/storage-account-overview.md).
# [Azure portal](#tab/portal)
-To allow or disallow public access for a storage account in the Azure portal, follow these steps:
+To allow or disallow anonymous access for a storage account in the Azure portal, follow these steps:
1. Navigate to your storage account in the Azure portal.
1. Locate the **Configuration** setting under **Settings**.
-1. Set **Blob public access** to **Enabled** or **Disabled**.
+1. Set **Allow Blob anonymous access** to **Enabled** or **Disabled**.
- :::image type="content" source="media/anonymous-read-access-configure/blob-public-access-portal.png" alt-text="Screenshot showing how to allow or disallow blob public access for account":::
+ :::image type="content" source="media/anonymous-read-access-configure/blob-public-access-portal.png" alt-text="Screenshot showing how to allow or disallow anonymous access for account":::
# [PowerShell](#tab/powershell)
-To allow or disallow public access for a storage account with PowerShell, install [Azure PowerShell version 4.4.0](https://www.powershellgallery.com/packages/Az/4.4.0) or later. Next, configure the **AllowBlobPublicAccess** property for a new or existing storage account.
+To allow or disallow anonymous access for a storage account with PowerShell, install [Azure PowerShell version 4.4.0](https://www.powershellgallery.com/packages/Az/4.4.0) or later. Next, configure the **AllowBlobPublicAccess** property for a new or existing storage account.
The following example creates a storage account and explicitly sets the **AllowBlobPublicAccess** property to **false**. Remember to replace the placeholder values in brackets with your own values:
$rgName = "<resource-group>"
$accountName = "<storage-account>"
$location = "<location>"
-# Create a storage account with AllowBlobPublicAccess set to false.
+# Create a storage account with AllowBlobPublicAccess explicitly set to false.
New-AzStorageAccount -ResourceGroupName $rgName `
    -Name $accountName `
    -Location $location `
New-AzStorageAccount -ResourceGroupName $rgName `
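For an existing account, the same property can be changed without recreating anything. A short sketch reusing the variables above, based on the **-AllowBlobPublicAccess** parameter of the Set-AzStorageAccount cmdlet:

```powershell
# Disallow anonymous access on an existing storage account.
Set-AzStorageAccount -ResourceGroupName $rgName `
    -Name $accountName `
    -AllowBlobPublicAccess $false

# Read the property back to confirm the new setting.
(Get-AzStorageAccount -ResourceGroupName $rgName -Name $accountName).AllowBlobPublicAccess
```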
# [Azure CLI](#tab/azure-cli)
-To allow or disallow public access for a storage account with Azure CLI, install Azure CLI version 2.9.0 or later. For more information, see [Install the Azure CLI](/cli/azure/install-azure-cli). Next, configure the **allowBlobPublicAccess** property for a new or existing storage account.
+To allow or disallow anonymous access for a storage account with Azure CLI, install Azure CLI version 2.9.0 or later. For more information, see [Install the Azure CLI](/cli/azure/install-azure-cli). Next, configure the **allowBlobPublicAccess** property for a new or existing storage account.
The following example creates a storage account and explicitly sets the **allowBlobPublicAccess** property to **false**. Remember to replace the placeholder values in brackets with your own values:
az storage account show \
# [Template](#tab/template)
-To allow or disallow public access for a storage account with a template, create a template with the **AllowBlobPublicAccess** property set to **true** or **false**. The following steps describe how to create a template in the Azure portal.
+To allow or disallow anonymous access for a storage account with a template, create a template with the **AllowBlobPublicAccess** property set to **true** or **false**. The following steps describe how to create a template in the Azure portal.
1. In the Azure portal, choose **Create a resource**.
1. In **Search services and marketplace**, type **template deployment**, and then press **ENTER**.
To allow or disallow public access for a storage account with a template, create
> [!NOTE]
-> Disallowing public access for a storage account does not affect any static websites hosted in that storage account. The **$web** container is always publicly accessible.
+> Disallowing anonymous access for a storage account does not affect any static websites hosted in that storage account. The **$web** container is always publicly accessible.
>
-> After you update the public access setting for the storage account, it may take up to 30 seconds before the change is fully propagated.
+> After you update the anonymous access setting for the storage account, it may take up to 30 seconds before the change is fully propagated.
+
+When a container is configured for anonymous access, requests to read blobs in that container do not need to be authorized. However, any firewall rules that are configured for the storage account remain in effect and will block traffic in line with the configured ACLs.
-When a container is configured for anonymous public access, requests to read blobs in that container do not need to be authorized. However, any firewall rules that are configured for the storage account remain in effect and will block traffic inline with the configured ACLs.
+Allowing or disallowing anonymous access requires version 2019-04-01 or later of the Azure Storage resource provider. For more information, see [Azure Storage Resource Provider REST API](/rest/api/storagerp/).
-Allowing or disallowing blob public access requires version 2019-04-01 or later of the Azure Storage resource provider. For more information, see [Azure Storage Resource Provider REST API](/rest/api/storagerp/).
+The examples in this section showed how to read the **AllowBlobPublicAccess** property for the storage account to determine whether anonymous access is currently allowed or disallowed. To learn how to verify that an account's anonymous access setting is configured to prevent anonymous access, see [Remediate anonymous access for the storage account](anonymous-read-access-prevent.md#remediate-anonymous-access-for-the-storage-account).
-The examples in this section showed how to read the **AllowBlobPublicAccess** property for the storage account to determine if public access is currently allowed or disallowed. To learn more about how to verify that an account's public access setting is configured to prevent anonymous access, see [Remediate anonymous public access for the storage account](anonymous-read-access-prevent.md#remediate-anonymous-public-access-for-the-storage-account).
+## Set the anonymous access level for a container
-## Set the public access level for a container
+To grant anonymous users read access to a container and its blobs, first allow anonymous access for the storage account, then set the container's anonymous access level. If anonymous access is denied for the storage account, you will not be able to configure anonymous access for a container.
-To grant anonymous users read access to a container and its blobs, first allow public access for the storage account, then set the container's public access level. If public access is denied for the storage account, you will not be able to configure public access for a container.
+> [!CAUTION]
+> Microsoft recommends against permitting anonymous access to blob data in your storage account.
-When public access is allowed for a storage account, you can configure a container with the following permissions:
+When anonymous access is allowed for a storage account, you can configure a container with the following permissions:
- **No public read access:** The container and its blobs can be accessed only with an authorized request. This option is the default for all new containers.
- **Public read access for blobs only:** Blobs within the container can be read by anonymous request, but container data is not available anonymously. Anonymous clients cannot enumerate the blobs within the container.
- **Public read access for container and its blobs:** Container and blob data can be read by anonymous request, except for container permission settings and container metadata. Clients can enumerate blobs within the container by anonymous request, but cannot enumerate containers within the storage account.
-You cannot change the public access level for an individual blob. Public access level is set only at the container level. You can set the container's public access level when you create the container, or you can update the setting on an existing container.
+You cannot change the anonymous access level for an individual blob. Anonymous access level is set only at the container level. You can set the container's anonymous access level when you create the container, or you can update the setting on an existing container.
# [Azure portal](#tab/portal)
-To update the public access level for one or more existing containers in the Azure portal, follow these steps:
+To update the anonymous access level for one or more existing containers in the Azure portal, follow these steps:
1. Navigate to your storage account overview in the Azure portal.
-1. Under **Data storage** on the menu blade, select **Blob containers**.
-1. Select the containers for which you want to set the public access level.
-1. Use the **Change access level** button to display the public access settings.
-1. Select the desired public access level from the **Public access level** dropdown and click the OK button to apply the change to the selected containers.
+1. Under **Data storage** on the menu blade, select **Containers**.
+1. Select the containers for which you want to set the anonymous access level.
+1. Use the **Change access level** button to display the anonymous access settings.
+1. Select the desired anonymous access level from the **Anonymous access level** dropdown and click the OK button to apply the change to the selected containers.
- :::image type="content" source="media/anonymous-read-access-configure/configure-public-access-container.png" alt-text="Screenshot showing how to set public access level in the portal." lightbox="media/anonymous-read-access-configure/configure-public-access-container.png":::
+ :::image type="content" source="media/anonymous-read-access-configure/configure-public-access-container.png" alt-text="Screenshot showing how to set anonymous access level in the portal." lightbox="media/anonymous-read-access-configure/configure-public-access-container.png":::
-When public access is disallowed for the storage account, a container's public access level cannot be set. If you attempt to set the container's public access level, you'll see that the setting is disabled because public access is disallowed for the account.
+When anonymous access is disallowed for the storage account, a container's anonymous access level cannot be set. If you attempt to set the container's anonymous access level, you'll see that the setting is disabled because anonymous access is disallowed for the account.
# [PowerShell](#tab/powershell)
-To update the public access level for one or more containers with PowerShell, call the [Set-AzStorageContainerAcl](/powershell/module/az.storage/set-azstoragecontaineracl) command. Authorize this operation by passing in your account key, a connection string, or a shared access signature (SAS). The [Set Container ACL](/rest/api/storageservices/set-container-acl) operation that sets the container's public access level does not support authorization with Azure AD. For more information, see [Permissions for calling blob and queue data operations](/rest/api/storageservices/authorize-with-azure-active-directory#permissions-for-calling-data-operations).
+To update the anonymous access level for one or more containers with PowerShell, call the [Set-AzStorageContainerAcl](/powershell/module/az.storage/set-azstoragecontaineracl) command. Authorize this operation by passing in your account key, a connection string, or a shared access signature (SAS). The [Set Container ACL](/rest/api/storageservices/set-container-acl) operation that sets the container's anonymous access level does not support authorization with Azure AD. For more information, see [Permissions for calling blob and queue data operations](/rest/api/storageservices/authorize-with-azure-active-directory#permissions-for-calling-data-operations).
-The following example creates a container with public access disabled, and then updates the container's public access setting to permit anonymous access to the container and its blobs. Remember to replace the placeholder values in brackets with your own values:
+The following example creates a container with anonymous access disabled, and then updates the container's anonymous access setting to permit anonymous access to the container and its blobs. Remember to replace the placeholder values in brackets with your own values:
```powershell
# Set variables.
$accountName = "<storage-account>"
# Get context object.
$storageAccount = Get-AzStorageAccount -ResourceGroupName $rgName -Name $accountName
$ctx = $storageAccount.Context
-# Create a new container with public access setting set to Off.
+# Create a new container with anonymous access setting set to Off.
$containerName = "<container>"
New-AzStorageContainer -Name $containerName -Permission Off -Context $ctx
-# Read the container's public access setting.
+# Read the container's anonymous access setting.
Get-AzStorageContainerAcl -Container $containerName -Context $ctx
-# Update the container's public access setting to Container.
+# Update the container's anonymous access setting to Container.
Set-AzStorageContainerAcl -Container $containerName -Permission Container -Context $ctx
-# Read the container's public access setting.
+# Read the container's anonymous access setting.
Get-AzStorageContainerAcl -Container $containerName -Context $ctx
```
-When public access is disallowed for the storage account, a container's public access level cannot be set. If you attempt to set the container's public access level, Azure Storage returns error indicating that public access is not permitted on the storage account.
+When anonymous access is disallowed for the storage account, a container's anonymous access level cannot be set. If you attempt to set the container's anonymous access level, Azure Storage returns an error indicating that anonymous access is not permitted on the storage account.
# [Azure CLI](#tab/azure-cli)
-To update the public access level for one or more containers with Azure CLI, call the [az storage container set permission](/cli/azure/storage/container#az-storage-container-set-permission) command. Authorize this operation by passing in your account key, a connection string, or a shared access signature (SAS). The [Set Container ACL](/rest/api/storageservices/set-container-acl) operation that sets the container's public access level does not support authorization with Azure AD. For more information, see [Permissions for calling blob and queue data operations](/rest/api/storageservices/authorize-with-azure-active-directory#permissions-for-calling-data-operations).
+To update the anonymous access level for one or more containers with Azure CLI, call the [az storage container set permission](/cli/azure/storage/container#az-storage-container-set-permission) command. Authorize this operation by passing in your account key, a connection string, or a shared access signature (SAS). The [Set Container ACL](/rest/api/storageservices/set-container-acl) operation that sets the container's anonymous access level does not support authorization with Azure AD. For more information, see [Permissions for calling blob and queue data operations](/rest/api/storageservices/authorize-with-azure-active-directory#permissions-for-calling-data-operations).
-The following example creates a container with public access disabled, and then updates the container's public access setting to permit anonymous access to the container and its blobs. Remember to replace the placeholder values in brackets with your own values:
+The following example creates a container with anonymous access disabled, and then updates the container's anonymous access setting to permit anonymous access to the container and its blobs. Remember to replace the placeholder values in brackets with your own values:
```azurecli-interactive
az storage container create \
az storage container show-permission \
--auth-mode key
```
-When public access is disallowed for the storage account, a container's public access level cannot be set. If you attempt to set the container's public access level, Azure Storage returns error indicating that public access is not permitted on the storage account.
+When anonymous access is disallowed for the storage account, a container's anonymous access level cannot be set. If you attempt to set the container's anonymous access level, Azure Storage returns an error indicating that anonymous access is not permitted on the storage account.
# [Template](#tab/template)
N/A.
-## Check the public access setting for a set of containers
+## Check the anonymous access setting for a set of containers
-It is possible to check which containers in one or more storage accounts are configured for public access by listing the containers and checking the public access setting. This approach is a practical option when a storage account does not contain a large number of containers, or when you are checking the setting across a small number of storage accounts. However, performance may suffer if you attempt to enumerate a large number of containers.
+It is possible to check which containers in one or more storage accounts are configured for anonymous access by listing the containers and checking the anonymous access setting. This approach is a practical option when a storage account does not contain a large number of containers, or when you are checking the setting across a small number of storage accounts. However, performance may suffer if you attempt to enumerate a large number of containers.
-The following example uses PowerShell to get the public access setting for all containers in a storage account. Remember to replace the placeholder values in brackets with your own values:
+The following example uses PowerShell to get the anonymous access setting for all containers in a storage account. Remember to replace the placeholder values in brackets with your own values:
```powershell
$rgName = "<resource-group>"
Get-AzStorageContainer -Context $ctx | Select Name, PublicAccess
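To run the same check across every storage account visible to your credentials, a loop along these lines may help. This is a sketch, not from the source article; it assumes the Az.Storage module and flags only containers whose access level is not Private:

```powershell
# Scan all storage accounts in the current subscription and report
# containers that permit anonymous access. Enumerating many containers
# can be slow, as noted above.
Get-AzStorageAccount | ForEach-Object {
    $ctx = $_.Context
    Get-AzStorageContainer -Context $ctx |
        Where-Object { $_.PublicAccess -and "$($_.PublicAccess)" -ne "Off" } |
        Select-Object @{ n = "Account"; e = { $ctx.StorageAccountName } }, Name, PublicAccess
}
```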
## Next steps

-- [Prevent anonymous public read access to containers and blobs](anonymous-read-access-prevent.md)
+- [Prevent anonymous read access to containers and blobs](anonymous-read-access-prevent.md)
- [Access public containers and blobs anonymously with .NET](anonymous-read-access-client.md)
- [Authorizing access to Azure Storage](../common/authorize-data-access.md)
storage Anonymous Read Access Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/anonymous-read-access-overview.md
Title: Overview of remediating anonymous public read access for blob data
+ Title: Overview of remediating anonymous read access for blob data
-description: Learn how to remediate anonymous public read access to blob data for both Azure Resource Manager and classic storage accounts.
+description: Learn how to remediate anonymous read access to blob data for both Azure Resource Manager and classic storage accounts.
Previously updated : 11/09/2022 Last updated : 09/12/2023
-# Overview: Remediating anonymous public read access for blob data
+# Overview: Remediating anonymous read access for blob data
-Azure Storage supports optional anonymous public read access for containers and blobs. By default, anonymous access to your data is never permitted. Unless you explicitly enable anonymous access, all requests to a container and its blobs must be authorized. We recommend that you disable anonymous public access for all of your storage accounts.
+Azure Storage supports optional anonymous read access for containers and blobs. By default, anonymous access to your data is never permitted. Unless you explicitly enable anonymous access, all requests to a container and its blobs must be authorized. We recommend that you disable anonymous access for all of your storage accounts.
-This article provides an overview of how to remediate anonymous public access for your storage accounts.
+This article provides an overview of how to remediate anonymous access for your storage accounts.
> [!WARNING]
-> Anonymous public access presents a security risk. We recommend that you take the actions described in the following section to remediate public access for all of your storage accounts, unless your scenario specifically requires anonymous access.
+> Anonymous access presents a security risk. We recommend that you take the actions described in the following section to remediate anonymous access for all of your storage accounts, unless your scenario specifically requires anonymous access.
-## Recommendations for remediating anonymous public access
+## Recommendations for remediating anonymous access
-To remediate anonymous public access, first determine whether your storage account uses the Azure Resource Manager deployment model or the classic deployment model. For more information, see [Resource Manager and classic deployment](../../azure-resource-manager/management/deployment-models.md).
+To remediate anonymous access, first determine whether your storage account uses the Azure Resource Manager deployment model or the classic deployment model. For more information, see [Resource Manager and classic deployment](../../azure-resource-manager/management/deployment-models.md).
### Azure Resource Manager accounts
-If your storage account is using the Azure Resource Manager deployment model, then you can remediate public access by setting the account's **AllowBlobPublicAccess** property to **False**. After you set the **AllowBlobPublicAccess** property to **False**, all requests for blob data to that storage account will require authorization, regardless of the public access setting for any individual container.
+If your storage account is using the Azure Resource Manager deployment model, then you can remediate anonymous access for an account at any time by setting the account's **AllowBlobPublicAccess** property to **False**. After you set the **AllowBlobPublicAccess** property to **False**, all requests for blob data to that storage account will require authorization, regardless of the anonymous access setting for any individual container.
-To learn more about how to remediate public access for Azure Resource Manager accounts, see [Remediate anonymous public read access to blob data (Azure Resource Manager deployments)](anonymous-read-access-prevent.md).
+To learn more about how to remediate anonymous access for Azure Resource Manager accounts, see [Remediate anonymous read access to blob data (Azure Resource Manager deployments)](anonymous-read-access-prevent.md).
### Classic accounts
-If your storage account is using the classic deployment model, then you can remediate public access by setting each container's public access property to **Private**. To learn more about how to remediate public access for classic storage accounts, see [Remediate anonymous public read access to blob data (classic deployments)](anonymous-read-access-prevent-classic.md).
+If your storage account is using the classic deployment model, then you can remediate anonymous access by setting each container's access property to **Private**. To learn more about how to remediate anonymous access for classic storage accounts, see [Remediate anonymous read access to blob data (classic deployments)](anonymous-read-access-prevent-classic.md).
### Scenarios requiring anonymous access
-If your scenario requires that certain containers need to be available for public access, then you should move those containers and their blobs into separate storage accounts that are reserved only for public access. You can then disallow public access for any other storage accounts using the recommendations provided in [Recommendations for remediating anonymous public access](#recommendations-for-remediating-anonymous-public-access).
+If your scenario requires that certain containers be available for anonymous access, then you should move those containers and their blobs into separate storage accounts that are reserved only for anonymous access. You can then disallow anonymous access for any other storage accounts using the recommendations provided in [Recommendations for remediating anonymous access](#recommendations-for-remediating-anonymous-access).
-For information on how to configure containers for public access, see [Configure anonymous public read access for containers and blobs](anonymous-read-access-configure.md).
+For information on how to configure containers for anonymous access, see [Configure anonymous read access for containers and blobs](anonymous-read-access-configure.md).
## Next steps

-- [Remediate anonymous public read access to blob data (Azure Resource Manager deployments)](anonymous-read-access-prevent.md)
-- [Remediate anonymous public read access to blob data (classic deployments)](anonymous-read-access-prevent-classic.md)
+- [Remediate anonymous read access to blob data (Azure Resource Manager deployments)](anonymous-read-access-prevent.md)
+- [Remediate anonymous read access to blob data (classic deployments)](anonymous-read-access-prevent-classic.md)
storage Anonymous Read Access Prevent Classic https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/anonymous-read-access-prevent-classic.md
Title: Remediate anonymous public read access to blob data (classic deployments)
+ Title: Remediate anonymous read access to blob data (classic deployments)
-description: Learn how to prevent anonymous requests against a classic storage account by disabling anonymous public access to containers.
+description: Learn how to prevent anonymous requests against a classic storage account by disabling anonymous access to containers.
Previously updated : 11/09/2022 Last updated : 09/12/2023 ms.devlang: powershell, azurecli
-# Remediate anonymous public read access to blob data (classic deployments)
+# Remediate anonymous read access to blob data (classic deployments)
-Azure Blob Storage supports optional anonymous public read access to containers and blobs. However, anonymous access may present a security risk. We recommend that you disable anonymous access for optimal security. Disallowing public access helps to prevent data breaches caused by undesired anonymous access.
+Azure Blob Storage supports optional anonymous read access to containers and blobs. However, anonymous access may present a security risk. We recommend that you disable anonymous access for optimal security. Disallowing anonymous access helps to prevent data breaches caused by undesired anonymous access.
-By default, public access to your blob data is always prohibited. However, the default configuration for a classic storage account permits a user with appropriate permissions to configure public access to containers and blobs in a storage account. To prevent public access to a classic storage account, you must configure each container in the account to block public access.
+By default, anonymous access to your blob data is always prohibited. However, the default configuration for a classic storage account permits a user with appropriate permissions to configure anonymous access to containers and blobs in a storage account. To prevent anonymous access to a classic storage account, you must configure each container in the account to block anonymous access.
-If your storage account is using the classic deployment model, we recommend that you [migrate](../../virtual-machines/migration-classic-resource-manager-overview.md#migration-of-storage-accounts) to the Azure Resource Manager deployment model as soon as possible. After you migrate your account, you can configure it to disallow anonymous public access at the account level. For information about how to disallow anonymous public access for an Azure Resource Manager account, see [Remediate anonymous public read access to blob data (Azure Resource Manager deployments)](anonymous-read-access-prevent.md).
+If your storage account is using the classic deployment model, we recommend that you [migrate](../../virtual-machines/migration-classic-resource-manager-overview.md#migration-of-storage-accounts) to the Azure Resource Manager deployment model as soon as possible. After you migrate your account, you can configure it to disallow anonymous access at the account level. For information about how to disallow anonymous access for an Azure Resource Manager account, see [Remediate anonymous read access to blob data (Azure Resource Manager deployments)](anonymous-read-access-prevent.md).
-If you cannot migrate your classic storage accounts at this time, then you should remediate public access to those accounts now by setting all containers to be private. This article describes how to remediate access to the containers in a classic storage account.
+If you cannot migrate your classic storage accounts at this time, then you should remediate anonymous access to those accounts now by setting all containers to be private. This article describes how to remediate access to the containers in a classic storage account.
Azure Storage accounts that use the classic deployment model will be retired on August 31, 2024. For more information, see [Azure classic storage accounts will be retired on 31 August 2024](https://azure.microsoft.com/updates/classic-azure-storage-accounts-will-be-retired-on-31-august-2024/).

> [!WARNING]
-> Anonymous public access presents a security risk. We recommend that you take the actions described in the following section to remediate public access for all of your classic storage accounts, unless your scenario specifically requires anonymous access.
+> Anonymous access presents a security risk. We recommend that you take the actions described in the following section to remediate anonymous access for all of your classic storage accounts, unless your scenario specifically requires anonymous access.
## Block anonymous access to containers
-To remediate anonymous access for a classic storage account, set the public access level for each container in the account to **Private**.
+To remediate anonymous access for a classic storage account, set the anonymous access level for each container in the account to **Private**.
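For accounts with many containers, the per-container change can be scripted rather than clicked through. The following sketch mirrors the context pattern used in the examples below; placeholder values are in brackets:

```powershell
# Set every container in the account to Private (no anonymous access).
# Replace the placeholder values in brackets with your own values.
$rgName      = "<resource-group>"
$accountName = "<storage-account>"

$storageAccount = Get-AzStorageAccount -ResourceGroupName $rgName -Name $accountName
$ctx = $storageAccount.Context

Get-AzStorageContainer -Context $ctx | ForEach-Object {
    Set-AzStorageContainerAcl -Container $_.Name -Permission Off -Context $ctx
}
```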
# [Azure portal](#tab/portal)
-To remediate public access for one or more containers in the Azure portal, follow these steps:
+To remediate anonymous access for one or more containers in the Azure portal, follow these steps:
1. Navigate to your storage account overview in the Azure portal.
1. Under **Data storage** on the menu blade, select **Blob containers**.
-1. Select the containers for which you want to set the public access level.
-1. Use the **Change access level** button to display the public access settings.
-1. Select **Private (no anonymous access)** from the **Public access level** dropdown and click the OK button to apply the change to the selected containers.
+1. Select the containers for which you want to set the anonymous access level.
+1. Use the **Change access level** button to display the access settings.
+1. Select **Private (no anonymous access)** from the **Anonymous access level** dropdown and click the OK button to apply the change to the selected containers.
- :::image type="content" source="media/anonymous-read-access-prevent-classic/configure-public-access-container.png" alt-text="Screenshot showing how to set public access level in the portal." lightbox="media/anonymous-read-access-prevent-classic/configure-public-access-container.png":::
+ :::image type="content" source="media/anonymous-read-access-prevent-classic/configure-public-access-container.png" alt-text="Screenshot showing how to set anonymous access level in the portal." lightbox="media/anonymous-read-access-prevent-classic/configure-public-access-container.png":::
# [PowerShell](#tab/powershell)
-To remediate anonymous access for one or more containers with PowerShell, call the [Set-AzStorageContainerAcl](/powershell/module/az.storage/set-azstoragecontaineracl) command. Authorize this operation by passing in your account key, a connection string, or a shared access signature (SAS). The [Set Container ACL](/rest/api/storageservices/set-container-acl) operation that sets the container's public access level does not support authorization with Azure AD. For more information, see [Permissions for calling blob and queue data operations](/rest/api/storageservices/authorize-with-azure-active-directory#permissions-for-calling-data-operations).
+To remediate anonymous access for one or more containers with PowerShell, call the [Set-AzStorageContainerAcl](/powershell/module/az.storage/set-azstoragecontaineracl) command. Authorize this operation by passing in your account key, a connection string, or a shared access signature (SAS). The [Set Container ACL](/rest/api/storageservices/set-container-acl) operation that sets the container's anonymous access level does not support authorization with Azure AD. For more information, see [Permissions for calling blob and queue data operations](/rest/api/storageservices/authorize-with-azure-active-directory#permissions-for-calling-data-operations).
The following example updates a container's anonymous access setting to make the container private. Remember to replace the placeholder values in brackets with your own values:
$accountName = "<storage-account>"
$storageAccount = Get-AzStorageAccount -ResourceGroupName $rgName -Name $accountName
$ctx = $storageAccount.Context
-# Read the container's public access setting.
+# Read the container's anonymous access setting.
Get-AzStorageContainerAcl -Container $containerName -Context $ctx
-# Update the container's public access setting to Off.
+# Update the container's anonymous access setting to Off.
Set-AzStorageContainerAcl -Container $containerName -Permission Off -Context $ctx
```

# [Azure CLI](#tab/azure-cli)
-To remediate anonymous access for one or more containers with Azure CLI, call the [az storage container set permission](/cli/azure/storage/container#az-storage-container-set-permission) command. Authorize this operation by passing in your account key, a connection string, or a shared access signature (SAS). The [Set Container ACL](/rest/api/storageservices/set-container-acl) operation that sets the container's public access level does not support authorization with Azure AD. For more information, see [Permissions for calling blob and queue data operations](/rest/api/storageservices/authorize-with-azure-active-directory#permissions-for-calling-data-operations).
+To remediate anonymous access for one or more containers with Azure CLI, call the [az storage container set permission](/cli/azure/storage/container#az-storage-container-set-permission) command. Authorize this operation by passing in your account key, a connection string, or a shared access signature (SAS). The [Set Container ACL](/rest/api/storageservices/set-container-acl) operation that sets the container's anonymous access level does not support authorization with Azure AD. For more information, see [Permissions for calling blob and queue data operations](/rest/api/storageservices/authorize-with-azure-active-directory#permissions-for-calling-data-operations).
The following example updates a container's anonymous access setting to make the container private. Remember to replace the placeholder values in brackets with your own values:

```azurecli-interactive
-# Read the container's public access setting.
+# Read the container's anonymous access setting.
az storage container show-permission \
    --name <container-name> \
    --account-name <account-name> \
    --account-key <account-key> \
    --auth-mode key
-# Update the container's public access setting to Off.
+# Update the container's anonymous access setting to Off.
az storage container set-permission \
    --name <container-name> \
    --account-name <account-name> \
    --account-key <account-key> \
    --public-access off \
    --auth-mode key
```
-## Check the public access setting for a set of containers
+## Check the anonymous access setting for a set of containers
-It is possible to check which containers in one or more storage accounts are configured for public access by listing the containers and checking the public access setting. This approach is a practical option when a storage account does not contain a large number of containers, or when you are checking the setting across a small number of storage accounts. However, performance may suffer if you attempt to enumerate a large number of containers.
+It is possible to check which containers in one or more storage accounts are configured for anonymous access by listing the containers and checking the anonymous access setting. This approach is a practical option when a storage account does not contain a large number of containers, or when you are checking the setting across a small number of storage accounts. However, performance may suffer if you attempt to enumerate a large number of containers.
-The following example uses PowerShell to get the public access setting for all containers in a storage account. Remember to replace the placeholder values in brackets with your own values:
+The following example uses PowerShell to get the anonymous access setting for all containers in a storage account. Remember to replace the placeholder values in brackets with your own values:
```powershell
$rgName = "<resource-group>"
Get-AzStorageContainer -Context $ctx | Select Name, PublicAccess
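If you need to check the setting across more than one account, you can loop over the accounts in a resource group. The following is a minimal sketch that assumes the Az.Storage module and a placeholder resource group name:

```powershell
# Report the anonymous access (PublicAccess) setting for every container
# in each storage account in a resource group.
$rgName = "<resource-group>"

foreach ($account in Get-AzStorageAccount -ResourceGroupName $rgName) {
    Write-Host "Account:" $account.StorageAccountName
    Get-AzStorageContainer -Context $account.Context |
        Select-Object Name, PublicAccess
}
```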
## Sample script for bulk remediation
-The following sample PowerShell script runs against all classic storage accounts in a subscription and sets the public access setting for the containers in those accounts to **Private**.
+The following sample PowerShell script runs against all classic storage accounts in a subscription and sets the anonymous access setting for the containers in those accounts to **Private**.
> [!CAUTION]
-> Running this script against storage accounts with very large numbers of containers may require significant resources and take a long time. If you have a storage account with a very large number of containers, you may wish to devise a different approach for remediating public access.
+> Running this script against storage accounts with very large numbers of containers may require significant resources and take a long time. If you have a storage account with a very large number of containers, you may wish to devise a different approach for remediating anonymous access.
```powershell
# This script runs against all classic storage accounts in a single subscription
write-host "Script complete"
## See also

-- [Overview: Remediating anonymous public read access for blob data](anonymous-read-access-overview.md)
-- [Remediate anonymous public read access to blob data (Azure Resource Manager deployments)](anonymous-read-access-prevent.md)
+- [Overview: Remediating anonymous read access for blob data](anonymous-read-access-overview.md)
+- [Remediate anonymous read access to blob data (Azure Resource Manager deployments)](anonymous-read-access-prevent.md)
storage Anonymous Read Access Prevent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/anonymous-read-access-prevent.md
Title: Remediate anonymous public read access to blob data (Azure Resource Manager deployments)
+ Title: Remediate anonymous read access to blob data (Azure Resource Manager deployments)
-description: Learn how to analyze anonymous requests against a storage account and how to prevent anonymous access for the entire storage account or for an individual container.
+description: Learn how to analyze current anonymous requests against a storage account and how to prevent anonymous access for the entire storage account or for an individual container.
Previously updated : 05/23/2023 Last updated : 09/12/2023 ms.devlang: powershell, azurecli
-# Remediate anonymous public read access to blob data (Azure Resource Manager deployments)
+# Remediate anonymous read access to blob data (Azure Resource Manager deployments)
-Azure Blob Storage supports optional anonymous public read access to containers and blobs. However, anonymous access may present a security risk. We recommend that you disable anonymous access for optimal security. Disallowing public access helps to prevent data breaches caused by undesired anonymous access.
+Azure Blob Storage supports optional anonymous read access to containers and blobs. However, anonymous access may present a security risk. We recommend that you disable anonymous access for optimal security. Disallowing anonymous access helps to prevent data breaches caused by undesired anonymous access.
-By default, public access to your blob data is always prohibited. However, the default configuration for an Azure Resource Manager storage account permits a user with appropriate permissions to configure public access to containers and blobs in a storage account. You can disallow all public access to an Azure Resource Manager storage account, regardless of the public access setting for an individual container, by setting the **AllowBlobPublicAccess** property on the storage account to **False**.
+By default, anonymous access to your blob data is always prohibited. The default configuration for an Azure Resource Manager storage account prohibits users from configuring anonymous access to containers and blobs in a storage account. This default configuration disallows all anonymous access to an Azure Resource Manager storage account, regardless of the access setting for an individual container.
-After you disallow public blob access for the storage account, Azure Storage rejects all anonymous requests to that account. Disallowing public access to a storage account prevents users from subsequently configuring public access for containers in that account. Any containers that have already been configured for public access will no longer accept anonymous requests.
+When anonymous access for the storage account is disallowed, Azure Storage rejects all anonymous read requests against blob data. Users can't later configure anonymous access for containers in that account. Any containers that have already been configured for anonymous access will no longer accept anonymous requests.
> [!WARNING]
-> When a container is configured for public access, any client can read data in that container. Public access presents a potential security risk, so if your scenario does not require it, we recommend that you disallow it for the storage account.
+> When a container is configured for anonymous access, any client can read data in that container. Anonymous access presents a potential security risk, so if your scenario does not require it, we recommend that you disallow it for the storage account.
## Remediation for Azure Resource Manager versus classic storage accounts
-This article describes how to use a DRAG (Detection-Remediation-Audit-Governance) framework to continuously manage public access for storage accounts that are using the Azure Resource Manager deployment model. All general-purpose v2 storage accounts, premium block blob storage accounts, premium file share accounts, and Blob Storage accounts use the Azure Resource Manager deployment model. Some older general-purpose v1 accounts and premium page blob accounts may use the classic deployment model.
+This article describes how to use a DRAG (Detection-Remediation-Audit-Governance) framework to continuously manage anonymous access for storage accounts that are using the Azure Resource Manager deployment model. All general-purpose v2 storage accounts, premium block blob storage accounts, premium file share accounts, and Blob Storage accounts use the Azure Resource Manager deployment model. Some older general-purpose v1 accounts and premium page blob accounts may use the classic deployment model.
If your storage account is using the classic deployment model, we recommend that you migrate to the Azure Resource Manager deployment model as soon as possible. Azure Storage accounts that use the classic deployment model will be retired on August 31, 2024. For more information, see [Azure classic storage accounts will be retired on 31 August 2024](https://azure.microsoft.com/updates/classic-azure-storage-accounts-will-be-retired-on-31-august-2024/).
-If you can't migrate your classic storage accounts at this time, then you should remediate public access to those accounts now. To learn how to remediate public access for classic storage accounts, see [Remediate anonymous public read access to blob data (classic deployments)](anonymous-read-access-prevent-classic.md). For more information about Azure deployment models, see [Resource Manager and classic deployment](../../azure-resource-manager/management/deployment-models.md).
+If you can't migrate your classic storage accounts at this time, then you should remediate anonymous access to those accounts now. To learn how to remediate anonymous access for classic storage accounts, see [Remediate anonymous read access to blob data (classic deployments)](anonymous-read-access-prevent-classic.md). For more information about Azure deployment models, see [Resource Manager and classic deployment](../../azure-resource-manager/management/deployment-models.md).
-## About anonymous public read access
+## About anonymous read access
-Anonymous public access to your data is always prohibited by default. There are two separate settings that affect public access:
+Anonymous access to your data is always prohibited by default. There are two separate settings that affect anonymous access:
-1. **Allow public access for the storage account.** By default, a storage account allows a user with the appropriate permissions to enable public access to a container. Blob data isn't available for public access unless the user takes the additional step to explicitly configure the container's public access setting.
-1. **Configure the container's public access setting.** By default, a container's public access setting is disabled, meaning that authorization is required for every request to the container or its data. A user with the appropriate permissions can modify a container's public access setting to enable anonymous access only if anonymous access is allowed for the storage account.
+1. **Anonymous access setting for the storage account.** An Azure Resource Manager storage account offers a setting to allow or disallow anonymous access for the account. Microsoft recommends disallowing anonymous access for your storage accounts for optimal security.
-The following table summarizes how both settings together affect public access for a container.
+ When anonymous access is permitted at the account level, blob data isn't available for anonymous read access unless the user takes the additional step to explicitly configure the container's anonymous access setting.
-| | Public access level for the container is set to Private (default setting) | Public access level for the container is set to Container | Public access level for the container is set to Blob |
+1. **Configure the container's anonymous access setting.** By default, a container's anonymous access setting is disabled, meaning that authorization is required for every request to the container or its data. A user with the appropriate permissions can modify a container's anonymous access setting to enable anonymous access only if anonymous access is allowed for the storage account.
+
+The following table summarizes how the two settings together affect anonymous access for a container.
+
+| | Anonymous access level for the container is set to Private (default setting) | Anonymous access level for the container is set to Container | Anonymous access level for the container is set to Blob |
|--|--|--|--|
-| **Public access is disallowed for the storage account** | **Recommended.** No public access to any container in the storage account. | No public access to any container in the storage account. The storage account setting overrides the container setting. | No public access to any container in the storage account. The storage account setting overrides the container setting. |
-| **Public access is allowed for the storage account (default setting)** | No public access to this container (default configuration). | **Not recommended.** Public access is permitted to this container and its blobs. | **Not recommended.** Public access is permitted to blobs in this container, but not to the container itself. |
+| **Anonymous access is disallowed for the storage account** | No anonymous access to any container in the storage account. | No anonymous access to any container in the storage account. The storage account setting overrides the container setting. | No anonymous access to any container in the storage account. The storage account setting overrides the container setting. |
+| **Anonymous access is allowed for the storage account** | No anonymous access to this container (default configuration). | Anonymous access is permitted to this container and its blobs. | Anonymous access is permitted to blobs in this container, but not to the container itself. |
-When anonymous public access is permitted for a storage account and configured for a specific container, then a request to read a blob in that container that is passed without an *Authorization* header is accepted by the service, and the blob's data is returned in the response.
+When anonymous access is permitted for a storage account and configured for a specific container, then a request to read a blob in that container that is passed without an *Authorization* header is accepted by the service, and the blob's data is returned in the response.
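As an illustration, an anonymous read is just an HTTPS GET with no credentials attached. A minimal sketch with placeholder names:

```powershell
# Attempt an anonymous blob read. No Authorization header is sent, so the
# request succeeds only if anonymous access is allowed at both the account
# and container level.
$url = "https://<storage-account>.blob.core.windows.net/<container>/<blob>"
Invoke-WebRequest -Uri $url -OutFile "<file-path-for-download>"
```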
## Detect anonymous requests from client applications
-When you disallow public read access for a storage account, you risk rejecting requests to containers and blobs that are currently configured for public access. Disallowing public access for a storage account overrides the public access settings for individual containers in that storage account. When public access is disallowed for the storage account, any future anonymous requests to that account will fail.
+When you disallow anonymous read access for a storage account, you risk rejecting requests to containers and blobs that are currently configured for anonymous access. Disallowing anonymous access for a storage account overrides the access settings for individual containers in that storage account. When anonymous access is disallowed for the storage account, any future anonymous requests to that account will fail.
-To understand how disallowing public access may affect client applications, we recommend that you enable logging and metrics for that account and analyze patterns of anonymous requests over an interval of time. Use metrics to determine the number of anonymous requests to the storage account, and use logs to determine which containers are being accessed anonymously.
+To understand how disallowing anonymous access may affect client applications, we recommend that you enable logging and metrics for that account and analyze patterns of anonymous requests over an interval of time. Use metrics to determine the number of anonymous requests to the storage account, and use logs to determine which containers are being accessed anonymously.
### Monitor anonymous requests with Metrics Explorer
You can also configure an alert rule based on this query to notify you about anonymous requests.
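The log query itself isn't shown in this excerpt. As a sketch of what such a query could look like, assuming resource logs are routed to a Log Analytics workspace and the Az.OperationalInsights module is installed (the workspace ID is a placeholder):

```powershell
# Find anonymous blob requests over the past seven days, grouped by URI.
$workspaceId = "<workspace-id>"
$query = @"
StorageBlobLogs
| where TimeGenerated > ago(7d)
| where AuthenticationType == 'Anonymous'
| summarize RequestCount = count() by Uri
"@
(Invoke-AzOperationalInsightsQuery -WorkspaceId $workspaceId -Query $query).Results
```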
When Blob Storage receives an anonymous request, that request will succeed if all of the following conditions are true:

-- Anonymous public access is allowed for the storage account.
-- The container is configured to allow anonymous public access.
+- Anonymous access is allowed for the storage account.
+- The targeted container is configured to allow anonymous access.
- The request is for read access.

If any of those conditions are not true, then the request will fail. The response code on failure depends on whether the anonymous request was made with a version of the service that supports the bearer challenge. The bearer challenge is supported with service versions 2019-12-12 and newer:

- If the anonymous request was made with a service version that supports the bearer challenge, then the service returns error code 401 (Unauthorized).
-- If the anonymous request was made with a service version that does not support the bearer challenge and anonymous public access is disallowed for the storage account, then the service returns error code 409 (Conflict).
-- If the anonymous request was made with a service version that does not support the bearer challenge and anonymous public access is allowed for the storage account, then the service returns error code 404 (Not Found).
+- If the anonymous request was made with a service version that does not support the bearer challenge and anonymous access is disallowed for the storage account, then the service returns error code 409 (Conflict).
+- If the anonymous request was made with a service version that does not support the bearer challenge and anonymous access is allowed for the storage account, then the service returns error code 404 (Not Found).
For more information about the bearer challenge, see [Bearer challenge](/rest/api/storageservices/authorize-with-azure-active-directory#bearer-challenge).
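To see which of these codes a given account returns, you can catch the failure from an anonymous request attempt. A minimal sketch with placeholder names:

```powershell
# Attempt an anonymous read and report the HTTP status code on failure
# (401, 404, or 409, depending on the conditions described above).
try {
    $url = "https://<storage-account>.blob.core.windows.net/<container>/<blob>"
    Invoke-WebRequest -Uri $url -ErrorAction Stop
}
catch {
    Write-Host "Status code:" $_.Exception.Response.StatusCode.value__
}
```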
-## Remediate anonymous public access for the storage account
+## Remediate anonymous access for the storage account
-After you have evaluated anonymous requests to containers and blobs in your storage account, you can take action to remediate public access for the whole account by setting the account's **AllowBlobPublicAccess** property to **False**.
+After you have evaluated anonymous requests to containers and blobs in your storage account, you can take action to remediate anonymous access for the whole account by setting the account's **AllowBlobPublicAccess** property to **False**.
-The public access setting for a storage account overrides the individual settings for containers in that account. When you disallow public access for a storage account, any containers that are configured to permit public access are no longer accessible anonymously. If you've disallowed public access for the account, you don't also need to disable public access for individual containers.
+The anonymous access setting for a storage account overrides the individual settings for containers in that account. When you disallow anonymous access for a storage account, any containers that are configured to permit anonymous access are no longer accessible anonymously. If you've disallowed anonymous access for the account, you don't also need to disable anonymous access for individual containers.
-If your scenario requires that certain containers need to be available for public access, then you should move those containers and their blobs into separate storage accounts that are reserved for public access. You can then disallow public access for any other storage accounts.
+If your scenario requires that certain containers need to be available for anonymous access, then you should move those containers and their blobs into separate storage accounts that are reserved for anonymous access. You can then disallow anonymous access for any other storage accounts.
-Remediating blob public access requires version 2019-04-01 or later of the Azure Storage resource provider. For more information, see [Azure Storage Resource Provider REST API](/rest/api/storagerp/).
+Remediating anonymous access requires version 2019-04-01 or later of the Azure Storage resource provider. For more information, see [Azure Storage Resource Provider REST API](/rest/api/storagerp/).
-### Permissions for disallowing public access
+### Permissions for disallowing anonymous access
To set the **AllowBlobPublicAccess** property for the storage account, a user must have permissions to create and manage storage accounts. Azure role-based access control (Azure RBAC) roles that provide these permissions include the **Microsoft.Storage/storageAccounts/write** action. Built-in roles with this action include:
- The Azure Resource Manager [Contributor](../../role-based-access-control/built-in-roles.md#contributor) role
- The [Storage Account Contributor](../../role-based-access-control/built-in-roles.md#storage-account-contributor) role
-Role assignments must be scoped to the level of the storage account or higher to permit a user to disallow public access for the storage account. For more information about role scope, see [Understand scope for Azure RBAC](../../role-based-access-control/scope-overview.md).
+Role assignments must be scoped to the level of the storage account or higher to permit a user to disallow anonymous access for the storage account. For more information about role scope, see [Understand scope for Azure RBAC](../../role-based-access-control/scope-overview.md).
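For example, a role assignment scoped to a single storage account could be created as follows. This is a sketch with placeholder names, assuming the Az.Resources module:

```powershell
# Assign the Storage Account Contributor role at storage-account scope.
New-AzRoleAssignment -SignInName "<user@example.com>" `
    -RoleDefinitionName "Storage Account Contributor" `
    -Scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Storage/storageAccounts/<storage-account>"
```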
Be careful to restrict assignment of these roles only to those administrative users who require the ability to create a storage account or update its properties. Use the principle of least privilege to ensure that users have the fewest permissions that they need to accomplish their tasks. For more information about managing access with Azure RBAC, see [Best practices for Azure RBAC](../../role-based-access-control/best-practices.md).
The **Microsoft.Storage/storageAccounts/listkeys/action** itself grants data access.
### Set the storage account's AllowBlobPublicAccess property to False
-To disallow public access for a storage account, set the account's **AllowBlobPublicAccess** property to **False**. This property is available for all storage accounts that are created with the Azure Resource Manager deployment model. For more information, see [Storage account overview](../common/storage-account-overview.md).
-
-The **AllowBlobPublicAccess** property isn't set for a storage account by default and doesn't return a value until you explicitly set it. The storage account permits public access when the property value is either **null** or **true**.
+To disallow anonymous access for a storage account, set the account's **AllowBlobPublicAccess** property to **False**. This property is available for all storage accounts that are created with the Azure Resource Manager deployment model. For more information, see [Storage account overview](../common/storage-account-overview.md).
> [!IMPORTANT]
-> Disallowing public access for a storage account overrides the public access settings for all containers in that storage account. When public access is disallowed for the storage account, any future anonymous requests to that account will fail. Before changing this setting, be sure to understand the impact on client applications that may be accessing data in your storage account anonymously by following the steps outlined in [Detect anonymous requests from client applications](#detect-anonymous-requests-from-client-applications).
+> Disallowing anonymous access for a storage account overrides the access settings for all containers in that storage account. When anonymous access is disallowed for the storage account, any future anonymous requests to that account will fail. Before changing this setting, be sure to understand the impact on client applications that may be accessing data in your storage account anonymously by following the steps outlined in [Detect anonymous requests from client applications](#detect-anonymous-requests-from-client-applications).
# [Azure portal](#tab/portal)
-To disallow public access for a storage account in the Azure portal, follow these steps:
+To disallow anonymous access for a storage account in the Azure portal, follow these steps:
1. Navigate to your storage account in the Azure portal.
1. Locate the **Configuration** setting under **Settings**.
-1. Set **Blob public access** to **Disabled**.
+1. Set **Allow Blob anonymous access** to **Disabled**.
- :::image type="content" source="media/anonymous-read-access-prevent/blob-public-access-portal.png" alt-text="Screenshot showing how to disallow blob public access for account":::
+ :::image type="content" source="media/anonymous-read-access-prevent/blob-public-access-portal.png" alt-text="Screenshot showing how to disallow anonymous access for a storage account":::
# [PowerShell](#tab/powershell)
-To disallow public access for a storage account with PowerShell, install [Azure PowerShell version 4.4.0](https://www.powershellgallery.com/packages/Az/4.4.0) or later. Next, configure the **AllowBlobPublicAccess** property for a new or existing storage account.
+To disallow anonymous access for a storage account with PowerShell, install [Azure PowerShell version 4.4.0](https://www.powershellgallery.com/packages/Az/4.4.0) or later. Next, configure the **AllowBlobPublicAccess** property for a new or existing storage account.
The following example creates a storage account and explicitly sets the **AllowBlobPublicAccess** property to **false**. Remember to replace the placeholder values in brackets with your own values:
New-AzStorageAccount -ResourceGroupName $rgName `
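For an existing account, the same property can be set with [Set-AzStorageAccount](/powershell/module/az.storage/set-azstorageaccount). A minimal sketch with placeholder names:

```powershell
# Disallow anonymous access on an existing storage account.
Set-AzStorageAccount -ResourceGroupName "<resource-group>" `
    -Name "<storage-account>" `
    -AllowBlobPublicAccess $false
```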
# [Azure CLI](#tab/azure-cli)
-To disallow public access for a storage account with Azure CLI, install Azure CLI version 2.9.0 or later. For more information, see [Install the Azure CLI](/cli/azure/install-azure-cli). Next, configure the **allowBlobPublicAccess** property for a new or existing storage account.
+To disallow anonymous access for a storage account with Azure CLI, install Azure CLI version 2.9.0 or later. For more information, see [Install the Azure CLI](/cli/azure/install-azure-cli). Next, configure the **allowBlobPublicAccess** property for a new or existing storage account.
The following example creates a storage account and explicitly sets the **allowBlobPublicAccess** property to **false**. Remember to replace the placeholder values in brackets with your own values:
az storage account show \
# [Template](#tab/template)
-To disallow public access for a storage account with a template, create a template with the **AllowBlobPublicAccess** property set to **false**. The following steps describe how to create a template in the Azure portal.
+To disallow anonymous access for a storage account with a template, create a template with the **AllowBlobPublicAccess** property set to **false**. The following steps describe how to create a template in the Azure portal.
1. In the Azure portal, choose **Create a resource**.
1. In **Search the Marketplace**, type **template deployment**, and then press **ENTER**.
> [!NOTE]
-> Disallowing public access for a storage account does not affect any static websites hosted in that storage account. The **$web** container is always publicly accessible.
+> Disallowing anonymous access for a storage account does not affect any static websites hosted in that storage account. The **$web** container is always publicly accessible.
>
-> After you update the public access setting for the storage account, it may take up to 30 seconds before the change is fully propagated.
+> After you update the anonymous access setting for the storage account, it may take up to 30 seconds before the change is fully propagated.
## Sample script for bulk remediation
process {
{
    if($account.AllowBlobPublicAccess -eq $null -or $account.AllowBlobPublicAccess -eq $true)
    {
- Write-host "Account:" $account.StorageAccountName " is not disallowing public access."
+ Write-Host "Account:" $account.StorageAccountName " isn't disallowing anonymous access."
if ( ! $ReadOnly.IsPresent )
{
    if(!$BypassConfirmation)
end {
## Verify that anonymous access has been remediated
-To verify that you've remediated anonymous access for a storage account, you can test that anonymous access to a blob isn't permitted, that modifying a container's public access setting isn't permitted, and that it's not possible to create a container with anonymous access enabled.
+To verify that you've remediated anonymous access for a storage account, you can test that anonymous access to a blob isn't permitted, that modifying a container's access setting isn't permitted, and that it's not possible to create a container with anonymous access enabled.
-### Verify that public access to a blob isn't permitted
+### Verify that anonymous access to a blob isn't permitted
-To verify that public access to a specific blob is disallowed, you can attempt to download the blob via its URL. If the download succeeds, then the blob is still publicly available. If the blob isn't publicly accessible because public access has been disallowed for the storage account, then you'll see an error message indicating that public access isn't permitted on this storage account.
+To verify that anonymous access to a specific blob is disallowed, you can attempt to download the blob via its URL. If the download succeeds, then the blob is still publicly available. If the blob isn't publicly accessible because anonymous access has been disallowed for the storage account, then you'll see an error message indicating that anonymous access isn't permitted on this storage account.
The following example shows how to use PowerShell to attempt to download a blob via its URL. Remember to replace the placeholder values in brackets with your own values:
$downloadTo = "<file-path-for-download>"
Invoke-WebRequest -Uri $url -OutFile $downloadTo -ErrorAction Stop
```
-### Verify that modifying the container's public access setting isn't permitted
+### Verify that modifying the container's access setting isn't permitted
-To verify that a container's public access setting can't be modified after public access is disallowed for the storage account, you can attempt to modify the setting. Changing the container's public access setting fails if public access is disallowed for the storage account.
+To verify that a container's access setting can't be modified after anonymous access is disallowed for the storage account, you can attempt to modify the setting. Changing the container's access setting fails if anonymous access is disallowed for the storage account.
-The following example shows how to use PowerShell to attempt to change a container's public access setting. Remember to replace the placeholder values in brackets with your own values:
+The following example shows how to use PowerShell to attempt to change a container's access setting. Remember to replace the placeholder values in brackets with your own values:
```powershell
$rgName = "<resource-group>"
$ctx = $storageAccount.Context
Set-AzStorageContainerAcl -Context $ctx -Container $containerName -Permission Blob
```
-### Verify that creating a container with public access enabled isn't permitted
+### Verify that a container can't be created with anonymous access enabled
-If public access is disallowed for the storage account, then you won't be able to create a new container with public access enabled. To verify, you can attempt to create a container with public access enabled.
+If anonymous access is disallowed for the storage account, then you won't be able to create a new container with anonymous access enabled. To verify, you can attempt to create a container with anonymous access enabled.
-The following example shows how to use PowerShell to attempt to create a container with public access enabled. Remember to replace the placeholder values in brackets with your own values:
+The following example shows how to use PowerShell to attempt to create a container with anonymous access enabled. Remember to replace the placeholder values in brackets with your own values:
```powershell
$rgName = "<resource-group>"
$ctx = $storageAccount.Context
New-AzStorageContainer -Name $containerName -Permission Blob -Context $ctx
```
-### Check the public access setting for multiple accounts
-
-To check the public access setting across a set of storage accounts with optimal performance, you can use the Azure Resource Graph Explorer in the Azure portal. To learn more about using the Resource Graph Explorer, see [Quickstart: Run your first Resource Graph query using Azure Resource Graph Explorer](../../governance/resource-graph/first-query-portal.md).
+### Check the anonymous access setting for multiple accounts
-The **AllowBlobPublicAccess** property isn't set for a storage account by default and doesn't return a value until you explicitly set it. The storage account permits public access when the property value is either **null** or **true**.
+To check the anonymous access setting across a set of storage accounts with optimal performance, you can use the Azure Resource Graph Explorer in the Azure portal. To learn more about using the Resource Graph Explorer, see [Quickstart: Run your first Resource Graph query using Azure Resource Graph Explorer](../../governance/resource-graph/first-query-portal.md).
-Running the following query in the Resource Graph Explorer returns a list of storage accounts and displays public access setting for each account:
+Running the following query in the Resource Graph Explorer returns a list of storage accounts and displays the anonymous access setting for each account:
```kusto
resources
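The query can also be run from PowerShell. A minimal sketch, assuming the Az.ResourceGraph module; the exact projection is an illustration rather than the article's query:

```powershell
# List storage accounts with their allowBlobPublicAccess property.
$query = @"
resources
| where type =~ 'Microsoft.Storage/storageAccounts'
| project name, allowBlobPublicAccess = properties.allowBlobPublicAccess
"@
Search-AzGraph -Query $query
```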
The following image shows the results of a query across a subscription. For storage accounts where the **AllowBlobPublicAccess** property has been explicitly set, it appears in the results as **true** or **false**. If the **AllowBlobPublicAccess** property hasn't been set for a storage account, it appears as blank (or **null**) in the query results.

## Use Azure Policy to audit for compliance
-If you have a large number of storage accounts, you may want to perform an audit to make sure that those accounts are configured to prevent public access. To audit a set of storage accounts for their compliance, use Azure Policy. Azure Policy is a service that you can use to create, assign, and manage policies that apply rules to Azure resources. Azure Policy helps you to keep those resources compliant with your corporate standards and service level agreements. For more information, see [Overview of Azure Policy](../../governance/policy/overview.md).
+If you have a large number of storage accounts, you may want to perform an audit to make sure that those accounts are configured to prevent anonymous access. To audit a set of storage accounts for their compliance, use Azure Policy. Azure Policy is a service that you can use to create, assign, and manage policies that apply rules to Azure resources. Azure Policy helps you to keep those resources compliant with your corporate standards and service level agreements. For more information, see [Overview of Azure Policy](../../governance/policy/overview.md).
### Create a policy with an Audit effect

Azure Policy supports effects that determine what happens when a policy rule is evaluated against a resource. The Audit effect creates a warning when a resource isn't in compliance, but doesn't stop the request. For more information about effects, see [Understand Azure Policy effects](../../governance/policy/concepts/effects.md).
-To create a policy with an Audit effect for the public access setting for a storage account with the Azure portal, follow these steps:
+To create a policy with an Audit effect for the anonymous access setting for a storage account with the Azure portal, follow these steps:
1. In the Azure portal, navigate to the Azure Policy service.
1. Under the **Authoring** section, select **Definitions**.
To view the compliance report in the Azure portal, follow these steps:
1. Filter the results for the name of the policy assignment that you created in the previous step. The report shows how many resources aren't in compliance with the policy.
1. You can drill down into the report for additional details, including a list of storage accounts that aren't in compliance.
- :::image type="content" source="media/anonymous-read-access-prevent/compliance-report-policy-portal.png" alt-text="Screenshot showing compliance report for audit policy for blob public access":::
+ :::image type="content" source="media/anonymous-read-access-prevent/compliance-report-policy-portal.png" alt-text="Screenshot showing compliance report for audit policy for anonymous access":::
## Use Azure Policy to enforce authorized access
-Azure Policy supports cloud governance by ensuring that Azure resources adhere to requirements and standards. To ensure that storage accounts in your organization permit only authorized requests, you can create a policy that prevents the creation of a new storage account with a public access setting that allows anonymous requests. This policy will also prevent all configuration changes to an existing account if the public access setting for that account isn't compliant with the policy.
+Azure Policy supports cloud governance by ensuring that Azure resources adhere to requirements and standards. To ensure that storage accounts in your organization permit only authorized requests, you can create a policy that prevents the creation of a new storage account with an anonymous access setting that allows anonymous requests. This policy will also prevent all configuration changes to an existing account if the anonymous access setting for that account isn't compliant with the policy.
-The enforcement policy uses the Deny effect to prevent a request that would create or modify a storage account to allow public access. For more information about effects, see [Understand Azure Policy effects](../../governance/policy/concepts/effects.md).
+The enforcement policy uses the Deny effect to prevent a request that would create or modify a storage account to allow anonymous access. For more information about effects, see [Understand Azure Policy effects](../../governance/policy/concepts/effects.md).
-To create a policy with a Deny effect for a public access setting that allows anonymous requests, follow the same steps described in [Use Azure Policy to audit for compliance](#use-azure-policy-to-audit-for-compliance), but provide the following JSON in the **policyRule** section of the policy definition:
+To create a policy with a Deny effect for an anonymous access setting that allows anonymous requests, follow the same steps described in [Use Azure Policy to audit for compliance](#use-azure-policy-to-audit-for-compliance), but provide the following JSON in the **policyRule** section of the policy definition:
```json
{
}
```
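The policy rule's body is elided in this excerpt. As a sketch of how such a rule could be created and assigned from PowerShell, using the built-in **Microsoft.Storage/storageAccounts/allowBlobPublicAccess** policy alias (treat the exact JSON as an assumption to verify against the article's definition; swapping `deny` for `audit` yields the audit variant described earlier):

```powershell
# Create and assign a policy that denies creating or updating a storage
# account unless AllowBlobPublicAccess is false.
$rule = @"
{
  "if": {
    "allOf": [
      { "field": "type", "equals": "Microsoft.Storage/storageAccounts" },
      { "not": { "field": "Microsoft.Storage/storageAccounts/allowBlobPublicAccess", "equals": "false" } }
    ]
  },
  "then": { "effect": "deny" }
}
"@
$definition = New-AzPolicyDefinition -Name "deny-anonymous-blob-access" -Policy $rule
New-AzPolicyAssignment -Name "deny-anonymous-blob-access" `
    -Scope "/subscriptions/<subscription-id>" `
    -PolicyDefinition $definition
```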
-After you create the policy with the Deny effect and assign it to a scope, a user can't create a storage account that allows public access. Nor can a user make any configuration changes to an existing storage account that currently allows public access. Attempting to do so results in an error. The public access setting for the storage account must be set to **false** to proceed with account creation or configuration.
+After you create the policy with the Deny effect and assign it to a scope, a user can't create a storage account that allows anonymous access. Nor can a user make any configuration changes to an existing storage account that currently allows anonymous access. Attempting to do so results in an error. The anonymous access setting for the storage account must be set to **false** to proceed with account creation or configuration.
-The following image shows the error that occurs if you try to create a storage account that allows public access (the default for a new account) when a policy with a Deny effect requires that public access is disallowed.
+The following image shows the error that occurs if you try to create a storage account that allows anonymous access (the default for a new account) when a policy with a Deny effect requires that anonymous access is disallowed.
:::image type="content" source="media/anonymous-read-access-prevent/deny-policy-error.png" alt-text="Screenshot showing the error that occurs when creating a storage account in violation of policy"::: ## Next steps -- [Overview: Remediating anonymous public read access for blob data](anonymous-read-access-overview.md)-- [Remediate anonymous public read access to blob data (classic deployments)](anonymous-read-access-prevent-classic.md)
+- [Overview: Remediating anonymous read access for blob data](anonymous-read-access-overview.md)
+- [Remediate anonymous read access to blob data (classic deployments)](anonymous-read-access-prevent-classic.md)
- [Security recommendations for Blob storage](security-recommendations.md)
storage Authorize Data Operations Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/authorize-data-operations-powershell.md
The following example shows how to create a container in a new storage account f
-Name "<storage-account>" ` -SkuName Standard_LRS ` -Location $location `
+ -AllowBlobPublicAccess $false
```

1. Get the storage account context that specifies the new storage account by calling [New-AzStorageContext](/powershell/module/az.storage/new-azstoragecontext). When acting on a storage account, you can reference the context instead of repeatedly passing in the credentials. Include the `-UseConnectedAccount` parameter to call any subsequent data operations using your Azure AD credentials:
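The snippet for this step is elided here; a minimal sketch, assuming the Az.Storage module and a placeholder account name:

```powershell
# Create a context that authorizes subsequent data operations with your
# Azure AD credentials instead of the account key.
$ctx = New-AzStorageContext -StorageAccountName "<storage-account>" -UseConnectedAccount
```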
storage Blob Containers Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/blob-containers-portal.md
Previously updated : 07/18/2022 Last updated : 06/26/2023
To create a container in the [Azure portal](https://portal.azure.com), follow these steps:
1. In the navigation pane for the storage account, scroll to the **Data storage** section and select **Containers**.
1. Within the **Containers** pane, select the **+ Container** button to open the **New container** pane.
1. Within the **New Container** pane, provide a **Name** for your new container. The container name must be lowercase, must start with a letter or number, and can include only letters, numbers, and the dash (-) character. The name must also be between 3 and 63 characters long. For more information about container and blob names, see [Naming and referencing containers, blobs, and metadata](/rest/api/storageservices/naming-and-referencing-containers--blobs--and-metadata).
-1. Set the **Public access level** for the container. The recommended level is **Private (no anonymous access)**. For information about preventing anonymous public access to blob data, see [Overview: Remediating anonymous public read access for blob data](anonymous-read-access-overview.md).
+1. Set the **Anonymous access level** for the container. The recommended level is **Private (no anonymous access)**. For information about preventing anonymous access to blob data, see [Overview: Remediating anonymous read access for blob data](anonymous-read-access-overview.md).
1. Select **Create** to create the container.

   :::image type="content" source="media/blob-containers-portal/create-container-sml.png" alt-text="Screenshot showing how to create a container within the Azure portal." lightbox="media/blob-containers-portal/create-container-lrg.png":::
storage Data Lake Storage Access Control https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/data-lake-storage-access-control.md
When you have the correct OID for the service principal, go to the Storage Explo
No. A container does not have an ACL. However, you can set the ACL of the container's root directory. Every container has a root directory, and it shares the same name as the container. For example, if the container is named `my-container`, then the root directory is named `my-container/`.
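For accounts with a hierarchical namespace, a root-directory ACL can be set with PowerShell. A minimal sketch, assuming the Az.Storage module and an existing storage context `$ctx`; the container name and permissions are placeholders:

```powershell
# Build an ACL with owner, group, and other entries, then apply it to the
# container's root directory (omitting -Path targets the root).
$acl = Set-AzDataLakeGen2ItemAclObject -AccessControlType user -Permission rwx
$acl = Set-AzDataLakeGen2ItemAclObject -AccessControlType group -Permission r-x -InputObject $acl
$acl = Set-AzDataLakeGen2ItemAclObject -AccessControlType other -Permission "---" -InputObject $acl
Update-AzDataLakeGen2Item -Context $ctx -FileSystem "my-container" -Acl $acl
```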
-The Azure Storage REST API does contain an operation named [Set Container ACL](/rest/api/storageservices/set-container-acl), but that operation cannot be used to set the ACL of a container or the root directory of a container. Instead, that operation is used to indicate whether blobs in a container may be accessed with an anonymous request. We recommend requiring authorization for all requests to blob data. For more information, see [Overview: Remediating anonymous public read access for blob data](anonymous-read-access-overview.md).
+The Azure Storage REST API does contain an operation named [Set Container ACL](/rest/api/storageservices/set-container-acl), but that operation cannot be used to set the ACL of a container or the root directory of a container. Instead, that operation is used to indicate whether blobs in a container may be accessed with an anonymous request. We recommend requiring authorization for all requests to blob data. For more information, see [Overview: Remediating anonymous read access for blob data](anonymous-read-access-overview.md).
### Where can I learn more about the POSIX access control model?
storage Object Replication Prevent Cross Tenant Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/object-replication-prevent-cross-tenant-policies.md
$location = "<location>"
New-AzStorageAccount -ResourceGroupName $rgName `
    -Name $accountName `
    -Location $location `
- -SkuName Standard_LRS
+ -SkuName Standard_LRS `
+ -AllowBlobPublicAccess $false `
    -AllowCrossTenantReplication $false

# Read the property for the new storage account
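# A sketch of reading the property back (assuming Az.Storage is installed):
(Get-AzStorageAccount -ResourceGroupName $rgName -Name $accountName).AllowCrossTenantReplication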
az storage account create \
    --name <storage-account> \
    --resource-group <resource-group> \
    --location <location> \
- --sku Standard_LRS
+ --sku Standard_LRS \
+ --allow-blob-public-access false \
    --allow-cross-tenant-replication false

# Read the property for the new storage account
storage Security Recommendations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/security-recommendations.md
Previously updated : 04/06/2023 Last updated : 09/12/2023
Microsoft Defender for Cloud periodically analyzes the security state of your Azure resources.
| Keep in mind the principle of least privilege when assigning permissions to a SAS | When creating a SAS, specify only those permissions that are required by the client to perform its function. Limiting access to resources helps prevent both unintentional and malicious misuse of your data. | - |
| Have a revocation plan in place for any SAS that you issue to clients | If a SAS is compromised, you will want to revoke that SAS as soon as possible. To revoke a user delegation SAS, revoke the user delegation key to quickly invalidate all signatures associated with that key. To revoke a service SAS that is associated with a stored access policy, you can delete the stored access policy, rename the policy, or change its expiry time to a time that is in the past. For more information, see [Grant limited access to Azure Storage resources using shared access signatures (SAS)](../common/storage-sas-overview.md). | - |
| If a service SAS is not associated with a stored access policy, then set the expiry time to one hour or less | A service SAS that is not associated with a stored access policy cannot be revoked. For this reason, limiting the expiry time so that the SAS is valid for one hour or less is recommended. | - |
-| Disable anonymous public read access to containers and blobs | Anonymous public read access to a container and its blobs grants read-only access to those resources to any client. Avoid enabling public read access unless your scenario requires it. To learn how to disable anonymous public access for a storage account, see [Overview: Remediating anonymous public read access for blob data](anonymous-read-access-overview.md). | - |
+| Disable anonymous read access to containers and blobs | Anonymous read access to a container and its blobs grants read-only access to those resources to any client. Avoid enabling anonymous read access unless your scenario requires it. To learn how to disable anonymous access for a storage account, see [Overview: Remediating anonymous read access for blob data](anonymous-read-access-overview.md). | - |
## Networking
storage Static Website Content Delivery Network https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/static-website-content-delivery-network.md
You can enable Azure CDN for your static website directly from your storage account.
If you no longer want to cache an object in Azure CDN, you can take one of the following steps:

-- Make the container private instead of public. For more information, see [Remediate anonymous public read access to blob data (Azure Resource Manager deployments)](anonymous-read-access-prevent.md).
+- Make the container private instead of public. For more information, see [Remediate anonymous read access to blob data (Azure Resource Manager deployments)](anonymous-read-access-prevent.md).
- Disable or delete the CDN endpoint by using the Azure portal.
- Modify your hosted service to no longer respond to requests for the object.
storage Storage Auth Abac Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-auth-abac-cli.md
Previously updated : 03/15/2023 Last updated : 06/26/2023

# Tutorial: Add a role assignment condition to restrict access to blobs using Azure CLI
You can authorize access to Blob storage from the Azure CLI either with Azure AD credentials or with the storage account access key.
1. Use [az storage account](/cli/azure/storage/account) to create a storage account that is compatible with the blob index feature. For more information, see [Manage and find Azure Blob data with blob index tags](storage-manage-find-blobs.md#regional-availability-and-storage-account-support).
-1. Use [az storage container](/cli/azure/storage/container) to create a new blob container within the storage account and set the Public access level to **Private (no anonymous access)**.
+1. Use [az storage container](/cli/azure/storage/container) to create a new blob container within the storage account and set the anonymous access level to **Private (no anonymous access)**.
1. Use [az storage blob upload](/cli/azure/storage/blob#az-storage-blob-upload) to upload a text file to the container.
storage Storage Auth Abac Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-auth-abac-portal.md
Here is what the condition looks like in code:
1. Create a storage account that is compatible with the blob index tags feature. For more information, see [Manage and find Azure Blob data with blob index tags](storage-manage-find-blobs.md#regional-availability-and-storage-account-support).
-1. Create a new container within the storage account and set the Public access level to **Private (no anonymous access)**.
+1. Create a new container within the storage account and set the anonymous access level to **Private (no anonymous access)**.
1. In the container, click **Upload** to open the Upload blob pane.
storage Storage Auth Abac Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-auth-abac-powershell.md
Here is what the condition looks like in code:
1. Use [New-AzStorageAccount](/powershell/module/az.storage/new-azstorageaccount) to create a storage account that is compatible with the blob index feature. For more information, see [Manage and find Azure Blob data with blob index tags](storage-manage-find-blobs.md#regional-availability-and-storage-account-support).
-1. Use [New-AzStorageContainer](/powershell/module/az.storage/new-azstoragecontainer) to create a new blob container within the storage account and set the Public access level to **Private (no anonymous access)**.
+1. Use [New-AzStorageContainer](/powershell/module/az.storage/new-azstoragecontainer) to create a new blob container within the storage account and set the anonymous access level to **Private (no anonymous access)**.
1. Use [Set-AzStorageBlobContent](/powershell/module/az.storage/set-azstorageblobcontent) to upload a text file to the container.
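A minimal sketch of the container and upload steps above, assuming an existing storage context `$ctx` and placeholder names:

```powershell
# Create a private container (Permission Off) and upload a file to it.
New-AzStorageContainer -Name "<container-name>" -Permission Off -Context $ctx
Set-AzStorageBlobContent -File "<local-file>.txt" -Container "<container-name>" -Context $ctx
```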
storage Storage Blob Container Create Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-container-create-javascript.md
The following example creates a container asynchronously from the BlobServiceClient:
```javascript
async function createContainer(blobServiceClient, containerName){
- // public access at container level
+ // anonymous access at container level
const options = { access: 'container' };
storage Storage Blob Event Quickstart Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-event-quickstart-powershell.md
$storageAccount = New-AzStorageAccount -ResourceGroupName $resourceGroup `
    -Location $location `
    -SkuName Standard_LRS `
    -Kind BlobStorage `
- -AccessTier Hot
+ -AccessTier Hot `
+ -AllowBlobPublicAccess $false
$ctx = $storageAccount.Context
```
storage Storage Blob Scalable App Create Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-scalable-app-create-vm.md
$storageAccount = New-AzStorageAccount -ResourceGroupName myResourceGroup `
    -Location EastUS `
    -SkuName Standard_LRS `
    -Kind Storage `
+ -AllowBlobPublicAccess $false
```

## Create a virtual machine
storage Storage Blob Static Website https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-static-website.md
If you set up [redundancy in a secondary region](../common/storage-redundancy.md
## Impact of setting the access level on the web container
-You can modify the public access level of the **$web** container, but making this modification has no impact on the primary static website endpoint because these files are served through anonymous access requests. That means public (read-only) access to all files.
+You can modify the anonymous access level of the **$web** container, but making this modification has no impact on the primary static website endpoint because these files are served through anonymous access requests. That means public (read-only) access to all files.
-While the primary static website endpoint isn't affected, a change to the public access level does impact the primary blob service endpoint.
+While the primary static website endpoint isn't affected, a change to the anonymous access level does impact the primary blob service endpoint.
-For example, if you change the public access level of the **$web** container from **Private (no anonymous access)** to **Blob (anonymous read access for blobs only)**, then the level of public access to the primary static website endpoint `https://contosoblobaccount.z22.web.core.windows.net/index.html` doesn't change.
+For example, if you change the anonymous access level of the **$web** container from **Private (no anonymous access)** to **Blob (anonymous read access for blobs only)**, then the level of anonymous access to the primary static website endpoint `https://contosoblobaccount.z22.web.core.windows.net/index.html` doesn't change.
-However, the public access to the primary blob service endpoint `https://contosoblobaccount.blob.core.windows.net/$web/index.html` does change from private to public. Now users can open that file by using either of these two endpoints.
+However, anonymous access to the primary blob service endpoint `https://contosoblobaccount.blob.core.windows.net/$web/index.html` does change, enabling users to open that file by using either of these two endpoints.
-Disabling public access on a storage account by using the [public access setting](anonymous-read-access-prevent.md#set-the-storage-accounts-allowblobpublicaccess-property-to-false) of the storage account doesn't affect static websites that are hosted in that storage account. For more information, see [Remediate anonymous public read access to blob data (Azure Resource Manager deployments)](anonymous-read-access-prevent.md).
+Disabling anonymous access on a storage account by using the [anonymous access setting](anonymous-read-access-prevent.md#set-the-storage-accounts-allowblobpublicaccess-property-to-false) of the storage account doesn't affect static websites that are hosted in that storage account. For more information, see [Remediate anonymous read access to blob data (Azure Resource Manager deployments)](anonymous-read-access-prevent.md).
## Mapping a custom domain to a static website URL
To enable metrics on your static website pages, see [Enable metrics on static we
## Frequently asked questions (FAQ)
-See [Static website hosting FAQ](storage-blob-faq.yml#static-website-hosting).
+##### Does the Azure Storage firewall work with a static website?
+
+Yes. Storage account [network security rules](../common/storage-network-security.md), including IP-based and VNET firewalls, are supported for the static website endpoint, and can be used to protect your website.
+
+##### Do static websites support Azure Active Directory (Azure AD)?
+
+No. A static website only supports anonymous read access for files in the **$web** container.
+
+##### How do I use a custom domain with a static website?
+
+You can configure a [custom domain](./static-website-content-delivery-network.md) with a static website by using [Azure Content Delivery Network (Azure CDN)](./storage-custom-domain-name.md#map-a-custom-domain-with-https-enabled). Azure CDN provides consistent low latencies to your website from anywhere in the world.
+
+##### How do I use a custom Secure Sockets Layer (SSL) certificate with a static website?
+
+You can configure a [custom SSL](./static-website-content-delivery-network.md) certificate with a static website by using [Azure CDN](./storage-custom-domain-name.md#map-a-custom-domain-with-https-enabled). Azure CDN provides consistent low latencies to your website from anywhere in the world.
+
+##### How do I add custom headers and rules with a static website?
+
+You can configure the host header for a static website by using [Azure CDN - Verizon Premium](../../cdn/cdn-verizon-premium-rules-engine.md). We'd be interested in hearing your feedback [here](https://feedback.azure.com/d365community/idea/694b08ef-3525-ec11-b6e6-000d3a4f0f84).
+
+##### Why am I getting an HTTP 404 error from a static website?
+
+A 404 error can happen if you refer to a file name by using an incorrect case. For example: `Index.html` instead of `index.html`. File names and extensions in the URL of a static website are case-sensitive even though they're served over HTTP. This can also happen if your Azure CDN endpoint isn't yet provisioned. Wait up to 90 minutes after you provision a new Azure CDN endpoint for the propagation to complete.
+
+##### Why isn't the root directory of the website redirecting to the default index page?
+
+In the Azure portal, open the static website configuration page of your account and locate the name and extension that is set in the **Index document name** field. Ensure that this name is exactly the same as the name of the file located in the **$web** container of the storage account. File names and extensions in the URL of a static website are case-sensitive even though they're served over HTTP.
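If you manage the account with PowerShell rather than the portal, a sketch like the following sets the index and error documents; the cmdlet is from the Az.Storage module, and the resource names are placeholders. The document names must match the blobs in the **$web** container exactly, including case.

```powershell
$account = Get-AzStorageAccount -ResourceGroupName "example-rg" -Name "examplestorageacct"

# Document names are case-sensitive and must match the blobs in $web exactly.
Enable-AzStorageStaticWebsite -Context $account.Context `
    -IndexDocument "index.html" `
    -ErrorDocument404Path "404.html"
```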
## Next steps
storage Storage Feature Support In Storage Accounts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-feature-support-in-storage-accounts.md
The following table describes whether a feature is supported in a standard gener
| [Metrics in Azure Monitor](./monitor-blob-storage.md?toc=/azure/storage/blobs/toc.json) | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
| [Object replication for block blobs](object-replication-overview.md) | &#x2705; | &nbsp;&#x2B24; | &nbsp;&#x2B24; | &nbsp;&#x2B24; |
| [Point-in-time restore for block blobs](point-in-time-restore-overview.md) | &#x2705; | &nbsp;&#x2B24; | &nbsp;&#x2B24; | &nbsp;&#x2B24; |
-| [Prevent anonymous public access](anonymous-read-access-prevent.md) | &#x2705; | &#x2705; | &#x2705;| &#x2705; |
+| [Prevent anonymous read access](anonymous-read-access-prevent.md) | &#x2705; | &#x2705; | &#x2705;| &#x2705; |
| [Soft delete for blobs](./soft-delete-blob-overview.md) | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
| [Soft delete for containers](soft-delete-container-overview.md) | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
| [Static websites](storage-blob-static-website.md) | &#x2705; | &#x2705; | &#x1F7E6; | &#x2705; |
The following table describes whether a feature is supported in a premium block
| [Metrics in Azure Monitor](./monitor-blob-storage.md?toc=/azure/storage/blobs/toc.json) | &#x2705; | &#x1F7E6; | &#x1F7E6; | &#x1F7E6; |
| [Object replication for block blobs](object-replication-overview.md) | &#x2705; | &nbsp;&#x2B24; | &nbsp;&#x2B24; | &nbsp;&#x2B24; |
| [Point-in-time restore for block blobs](point-in-time-restore-overview.md) | &nbsp;&#x2B24; | &nbsp;&#x2B24; | &nbsp;&#x2B24; | &nbsp;&#x2B24; |
-| [Prevent anonymous public access](anonymous-read-access-prevent.md) | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
+| [Prevent anonymous read access](anonymous-read-access-prevent.md) | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
| [Soft delete for blobs](./soft-delete-blob-overview.md) | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
| [Soft delete for containers](soft-delete-container-overview.md) | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
| [Static websites](storage-blob-static-website.md) | &#x2705; | &#x2705; | &#x1F7E6; | &#x2705; |
storage Storage Quickstart Blobs Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-quickstart-blobs-portal.md
Previously updated : 01/13/2023 Last updated : 06/26/2023
To create a container in the Azure portal, follow these steps:
1. In the left menu for the storage account, scroll to the **Data storage** section, then select **Containers**.
1. Select the **+ Container** button.
1. Type a name for your new container. The container name must be lowercase, must start with a letter or number, and can include only letters, numbers, and the dash (-) character. For more information about container and blob names, see [Naming and referencing containers, blobs, and metadata](/rest/api/storageservices/naming-and-referencing-containers--blobs--and-metadata).
-1. Set the level of public access to the container. The default level is **Private (no anonymous access)**.
+1. Set the level of anonymous access to the container. The default level is **Private (no anonymous access)**.
1. Select **Create** to create the container. :::image type="content" source="media/storage-quickstart-blobs-portal/create-container-sml.png" alt-text="Screenshot showing how to create a container in the Azure portal" lightbox="media/storage-quickstart-blobs-portal/create-container-lrg.png":::
storage Storage Quickstart Blobs Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-quickstart-blobs-powershell.md
Previously updated : 03/31/2022 Last updated : 06/26/2023
This quickstart requires the Azure PowerShell module Az version 0.7 or later. Ru
Blobs are always uploaded into a container. You can organize groups of blobs the way you organize files in folders on your computer.
-Set the container name, then create the container by using [New-AzStorageContainer](/powershell/module/az.storage/new-azstoragecontainer). Set the permissions to `blob` to allow public access of the files. The container name in this example is *quickstartblobs*.
+Set the container name, then create the container by using [New-AzStorageContainer](/powershell/module/az.storage/new-azstoragecontainer). The container name in this example is *quickstartblobs*.
```azurepowershell-interactive $ContainerName = 'quickstartblobs'
-New-AzStorageContainer -Name $ContainerName -Context $Context -Permission Blob
+New-AzStorageContainer -Name $ContainerName -Context $Context
``` ## Upload blobs to the container
storage Authorize Data Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/authorize-data-access.md
Each time you access data in your storage account, your client application makes
The following table describes the options that Azure Storage offers for authorizing access to data:
-| Azure artifact | Shared Key (storage account key) | Shared access signature (SAS) | Azure Active Directory (Azure AD) | On-premises Active Directory Domain Services | Anonymous public read access | Storage Local Users |
+| Azure artifact | Shared Key (storage account key) | Shared access signature (SAS) | Azure Active Directory (Azure AD) | On-premises Active Directory Domain Services | Anonymous read access | Storage Local Users |
|--|--|--|--|--|--|--|
| Azure Blobs | [Supported](/rest/api/storageservices/authorize-with-shared-key/) | [Supported](storage-sas-overview.md) | [Supported](../blobs/authorize-access-azure-active-directory.md) | Not supported | [Supported but not recommended](../blobs/anonymous-read-access-overview.md) | [Supported, only for SFTP](../blobs/secure-file-transfer-protocol-support-how-to.md) |
| Azure Files (SMB) | [Supported](/rest/api/storageservices/authorize-with-shared-key/) | Not supported | Supported, only with [Azure AD Domain Services](../files/storage-files-identity-auth-active-directory-domain-service-enable.md) for cloud-only or [Azure AD Kerberos](../files/storage-files-identity-auth-azure-active-directory-enable.md) for hybrid identities | [Supported, credentials must be synced to Azure AD](../files/storage-files-active-directory-overview.md) | Not supported | Not supported |
Each authorization option is briefly described below:
- **On-premises Active Directory Domain Services (AD DS, or on-premises AD DS) authentication** for Azure Files. Azure Files supports identity-based authorization over SMB through AD DS. Your AD DS environment can be hosted in on-premises machines or in Azure VMs. SMB access to Files is supported using AD DS credentials from domain joined machines, either on-premises or in Azure. You can use a combination of Azure RBAC for share level access control and NTFS DACLs for directory/file level permission enforcement. For more information about Azure Files authentication using domain services, see the [overview](../files/storage-files-active-directory-overview.md). -- **Anonymous public read access** for blob data is supported, but not recommended. When anonymous access is configured, clients can read blob data without authorization. We recommend that you disable anonymous access for all of your storage accounts. For more information, see [Overview: Remediating anonymous public read access for blob data](../blobs/anonymous-read-access-overview.md).
+- **Anonymous read access** for blob data is supported, but not recommended. When anonymous access is configured, clients can read blob data without authorization. We recommend that you disable anonymous access for all of your storage accounts. For more information, see [Overview: Remediating anonymous read access for blob data](../blobs/anonymous-read-access-overview.md).
- **Storage Local Users** can be used to access blobs with SFTP or files with SMB. Storage Local Users support container level permissions for authorization. See [Connect to Azure Blob Storage by using the SSH File Transfer Protocol (SFTP)](../blobs/secure-file-transfer-protocol-support-how-to.md) for more information on how Storage Local Users can be used with SFTP.
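As a concrete illustration of one option from the table, the following sketch issues a short-lived, read-only service SAS for a single blob; all resource names are placeholders.

```powershell
$ctx = (Get-AzStorageAccount -ResourceGroupName "example-rg" -Name "examplestorageacct").Context

# Read-only (r) permission, valid for one hour; -FullUri returns the blob URL with the token appended.
$sasUri = New-AzStorageBlobSASToken -Container "example-container" `
    -Blob "example.txt" `
    -Permission r `
    -ExpiryTime (Get-Date).AddHours(1) `
    -FullUri `
    -Context $ctx
```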
storage Customer Managed Keys Configure Cross Tenant New Account https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/customer-managed-keys-configure-cross-tenant-new-account.md
New-AzStorageAccount -ResourceGroupName $rgName `
-Kind StorageV2 ` -SkuName Standard_LRS ` -Location $location `
+ -AllowBlobPublicAccess $false `
-UserAssignedIdentityId $userIdentity.Id ` -IdentityType SystemAssignedUserAssigned ` -KeyName $keyName `
az storage account create \
--location $isvLocation \ --sku Standard_LRS \ --kind StorageV2 \
+ --allow-blob-public-access false \
--identity-type SystemAssigned,UserAssigned \ --user-identity-id $identityResourceId \ --encryption-key-vault $kvUri \
storage Customer Managed Keys Configure New Account https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/customer-managed-keys-configure-new-account.md
New-AzStorageAccount -ResourceGroupName $rgName `
-Kind StorageV2 ` -SkuName Standard_LRS ` -Location $location `
+ -AllowBlobPublicAccess $false `
-IdentityType SystemAssignedUserAssigned ` -UserAssignedIdentityId $userIdentity.Id ` -KeyVaultUri $keyVault.VaultUri `
New-AzStorageAccount -ResourceGroupName $rgName `
-Kind StorageV2 ` -SkuName Standard_LRS ` -Location $location `
+ -AllowBlobPublicAccess $false `
-IdentityType SystemAssignedUserAssigned ` -UserAssignedIdentityId $userIdentity.Id ` -KeyVaultUri $keyVault.VaultUri `
az storage account create \
--location $location \ --sku Standard_LRS \ --kind StorageV2 \
+ --allow-blob-public-access false \
--identity-type SystemAssigned,UserAssigned \ --user-identity-id $identityResourceId \ --encryption-key-vault $keyVaultUri \
storage Infrastructure Encryption Enable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/infrastructure-encryption-enable.md
New-AzStorageAccount -ResourceGroupName <resource_group> `
-Location <location> ` -SkuName "Standard_RAGRS" ` -Kind StorageV2 `
+ -AllowBlobPublicAccess $false `
-RequireInfrastructureEncryption ```
az storage account create \
--location <location> \ --sku Standard_RAGRS \ --kind StorageV2 \
+ --allow-blob-public-access false \
--require-infrastructure-encryption ```
storage Security Restrict Copy Operations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/security-restrict-copy-operations.md
az storage account update \
## Next steps - [Require secure transfer to ensure secure connections](storage-require-secure-transfer.md)-- [Remediate anonymous public read access to blob data (Azure Resource Manager deployments)](../blobs/anonymous-read-access-prevent.md)
+- [Remediate anonymous read access to blob data (Azure Resource Manager deployments)](../blobs/anonymous-read-access-prevent.md)
- [Prevent Shared Key authorization for an Azure Storage account](shared-key-authorization-prevent.md)
storage Shared Key Authorization Prevent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/shared-key-authorization-prevent.md
az storage container create \
``` > [!NOTE]
-> Anonymous requests are not authorized and will proceed if you have configured the storage account and container for anonymous public read access. For more information, see [Configure anonymous public read access for containers and blobs](../blobs/anonymous-read-access-configure.md).
+> Anonymous requests are not authorized and will proceed if you have configured the storage account and container for anonymous read access. For more information, see [Configure anonymous read access for containers and blobs](../blobs/anonymous-read-access-configure.md).
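To confirm where an account stands, a quick check such as the following can help; the resource names are placeholders.

```powershell
# $false means Shared Key authorization is disallowed; $null means the default ($true) is in effect.
(Get-AzStorageAccount -ResourceGroupName "example-rg" -Name "examplestorageacct").AllowSharedKeyAccess
```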
## Monitor the Azure Policy for compliance
storage Storage Account Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-account-create.md
Previously updated : 08/18/2023 Last updated : 09/12/2023 # Create a storage account
The following table describes the fields on the **Advanced** tab.
| Section | Field | Required or optional | Description |
|--|--|--|--|
| Security | Require secure transfer for REST API operations | Optional | Require secure transfer to ensure that incoming requests to this storage account are made only via HTTPS (default). Recommended for optimal security. For more information, see [Require secure transfer to ensure secure connections](storage-require-secure-transfer.md). |
-| Security | Allow enabling public access on containers | Optional | When enabled, this setting allows a user with the appropriate permissions to enable anonymous public access to a container in the storage account (default). Disabling this setting prevents all anonymous public access to the storage account. For more information, see [Prevent anonymous public read access to containers and blobs](../blobs/anonymous-read-access-prevent.md).<br> <br> Enabling blob public access does not make blob data available for public access unless the user takes the additional step to explicitly configure the container's public access setting. |
+| Security | Allow enabling anonymous access on individual containers | Optional | When enabled, this setting allows a user with the appropriate permissions to enable anonymous access to a container in the storage account (default). Disabling this setting prevents all anonymous access to the storage account. Microsoft recommends disabling this setting for optimal security.<br/> <br/> For more information, see [Prevent anonymous read access to containers and blobs](../blobs/anonymous-read-access-prevent.md).<br/> <br/> Enabling anonymous access does not make blob data available for anonymous access unless the user takes the additional step to explicitly configure the container's anonymous access setting. |
| Security | Enable storage account key access | Optional | When enabled, this setting allows clients to authorize requests to the storage account using either the account access keys or an Azure Active Directory (Azure AD) account (default). Disabling this setting prevents authorization with the account access keys. For more information, see [Prevent Shared Key authorization for an Azure Storage account](shared-key-authorization-prevent.md). |
| Security | Default to Azure Active Directory authorization in the Azure portal | Optional | When enabled, the Azure portal authorizes data operations with the user's Azure AD credentials by default. If the user does not have the appropriate permissions assigned via Azure role-based access control (Azure RBAC) to perform data operations, then the portal will use the account access keys for data access instead. The user can also choose to switch to using the account access keys. For more information, see [Default to Azure AD authorization in the Azure portal](../blobs/authorize-data-operations-portal.md#default-to-azure-ad-authorization-in-the-azure-portal). |
| Security | Minimum TLS version | Required | Select the minimum version of Transport Layer Security (TLS) for incoming requests to the storage account. The default value is TLS version 1.2. When set to the default value, incoming requests made using TLS 1.0 or TLS 1.1 are rejected. For more information, see [Enforce a minimum required version of Transport Layer Security (TLS) for requests to a storage account](transport-layer-security-configure-minimum-version.md). |
New-AzStorageAccount -ResourceGroupName $resourceGroup `
-Name <account-name> ` -Location $location ` -SkuName Standard_RAGRS `
- -Kind StorageV2
+ -Kind StorageV2 `
+ -AllowBlobPublicAccess $false
``` To create an account with Azure DNS zone endpoints (preview), follow these steps:
$account = New-AzStorageAccount -ResourceGroupName $rgName `
-SkuName Standard_RAGRS ` -Location <location> ` -Kind StorageV2 `
+ -AllowBlobPublicAccess $false `
-DnsEndpointType AzureDnsZone $account.PrimaryEndpoints
az storage account create \
--resource-group storage-resource-group \ --location eastus \ --sku Standard_RAGRS \
- --kind StorageV2
+ --kind StorageV2 \
+ --allow-blob-public-access false
``` To create an account with Azure DNS zone endpoints (preview), first register for the preview as described in [Azure DNS zone endpoints (preview)](storage-account-overview.md#azure-dns-zone-endpoints-preview). Next, install the preview extension for the Azure CLI if it's not already installed:
storage Storage Explorer Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-explorer-security.md
If you must use keys to access your storage resources, we recommend the followin
> [!NOTE] > If you believe a storage account key has been shared or distributed by mistake, you can generate new keys for your storage account from the Azure portal.
-### Public access to blob containers
+### Anonymous access to blob containers
Storage Explorer allows you to modify the access level of your Azure Blob Storage containers. Non-private blob containers allow anyone to read the data in those containers anonymously.
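Before changing access levels, it can help to audit which containers are already non-private. A sketch, assuming the Az.Storage module and placeholder names:

```powershell
$ctx = (Get-AzStorageAccount -ResourceGroupName "example-rg" -Name "examplestorageacct").Context

# List containers whose access level isn't private (Off).
Get-AzStorageContainer -Context $ctx |
    Where-Object { $_.PublicAccess -and $_.PublicAccess -ne "Off" } |
    Select-Object Name, PublicAccess
```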
-When enabling public access for a blob container, we recommend the following guidelines:
+When enabling anonymous access for a blob container, we recommend the following guidelines:
-- **Don't enable public access to a blob container that may contain any potentially sensitive data.** Make sure your blob container is free of all private data.
+- **Don't enable anonymous access to a blob container that may contain any potentially sensitive data.** Make sure your blob container is free of all private data.
- **Don't upload any potentially sensitive data to a blob container with Blob or Container access.** ## Next steps
storage Storage Network Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-network-security.md
# Configure Azure Storage firewalls and virtual networks
-Azure Storage provides a layered security model. This model enables you to control the level of access to your storage accounts that your applications and enterprise environments require. In this article, you will learn how to configure the Azure Storage firewall to protect the data in your storage account at the network layer.
+Azure Storage provides a layered security model. This model enables you to control the level of access to your storage accounts that your applications and enterprise environments demand, based on the type and subset of networks or resources that you use.
+
+When you configure network rules, only applications that request data over the specified set of networks or through the specified set of Azure resources can access a storage account. You can limit access to your storage account to requests that come from specified IP addresses, IP ranges, subnets in an Azure virtual network, or resource instances of some Azure services.
+
+Storage accounts have a public endpoint that's accessible through the internet. You can also create [private endpoints for your storage account](storage-private-endpoints.md). Creating private endpoints assigns a private IP address from your virtual network to the storage account. It helps secure traffic between your virtual network and the storage account over a private link.
+
+The Azure Storage firewall provides access control for the public endpoint of your storage account. You can also use the firewall to block all access through the public endpoint when you're using private endpoints. Your firewall configuration also enables trusted Azure platform services to access the storage account.
+
+An application that accesses a storage account when network rules are in effect still requires proper authorization for the request. Authorization is supported with Azure Active Directory (Azure AD) credentials for blobs and queues, with a valid account access key, or with a shared access signature (SAS) token. When you configure a blob container for anonymous access, requests to read data in that container don't need to be authorized. The firewall rules remain in effect and will block anonymous traffic.
+
+Turning on firewall rules for your storage account blocks incoming requests for data by default, unless the requests originate from a service that operates within an Azure virtual network or from allowed public IP addresses. Requests that are blocked include those from other Azure services, from the Azure portal, and from logging and metrics services.
+
+You can grant access to Azure services that operate from within a virtual network by allowing traffic from the subnet that hosts the service instance. You can also enable a limited number of scenarios through the exceptions mechanism that this article describes. To access data from the storage account through the Azure portal, you need to be on a machine within the trusted boundary (either IP or virtual network) that you set up.
+## Scenarios
+
+To secure your storage account, you should first configure a rule to deny access to traffic from all networks (including internet traffic) on the public endpoint, by default. Then, you should configure rules that grant access to traffic from specific virtual networks. You can also configure rules to grant access to traffic from selected public internet IP address ranges, enabling connections from specific internet or on-premises clients. This configuration helps you build a secure network boundary for your applications.
+
+You can combine firewall rules that allow access from specific virtual networks and from public IP address ranges on the same storage account. You can apply storage firewall rules to existing storage accounts or when you create new storage accounts.
+
+Storage firewall rules apply to the public endpoint of a storage account. You don't need any firewall access rules to allow traffic for private endpoints of a storage account. The process of approving the creation of a private endpoint grants implicit access to traffic from the subnet that hosts the private endpoint.
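A PowerShell sketch of that deny-by-default pattern follows; the resource names, IP range, and subnet are placeholders, the Az.Storage and Az.Network modules are assumed, and the subnet needs the Microsoft.Storage service endpoint enabled.

```powershell
# Deny all public-endpoint traffic by default.
Update-AzStorageAccountNetworkRuleSet -ResourceGroupName "example-rg" `
    -Name "examplestorageacct" `
    -DefaultAction Deny

# Allow a specific public IP range.
Add-AzStorageAccountNetworkRule -ResourceGroupName "example-rg" `
    -AccountName "examplestorageacct" `
    -IPAddressOrRange "203.0.113.0/24"

# Allow a subnet in a virtual network.
$subnet = Get-AzVirtualNetwork -ResourceGroupName "example-rg" -Name "example-vnet" |
    Get-AzVirtualNetworkSubnetConfig -Name "example-subnet"
Add-AzStorageAccountNetworkRule -ResourceGroupName "example-rg" `
    -AccountName "examplestorageacct" `
    -VirtualNetworkResourceId $subnet.Id
```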
> [!IMPORTANT] > Azure Storage firewall rules only apply to [data plane](../../azure-resource-manager/management/control-plane-and-data-plane.md#data-plane) operations. [Control plane](../../azure-resource-manager/management/control-plane-and-data-plane.md#control-plane) operations are not subject to the restrictions specified in firewall rules.
storage Storage Use Azcopy Authorize Azure Active Directory https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-use-azcopy-authorize-azure-active-directory.md
description: You can provide authorization credentials for AzCopy operations by
Previously updated : 09/05/2023 Last updated : 09/29/2023
Start by verifying your role assignments. Then, choose what type of *security pr
A user identity is any user that has an identity in Azure AD. It's the easiest security principal to authorize. Managed identities and service principals are great options if you plan to use AzCopy inside of a script that runs without user interaction. A managed identity is better suited for scripts that run from an Azure Virtual Machine (VM), and a service principal is better suited for scripts that run on-premises.
+To authorize access, you'll set in-memory environment variables. Then run any AzCopy command. AzCopy will retrieve the OAuth token required to complete the operation. After the operation completes, the token disappears from memory.
+ For more information about AzCopy, [Get started with AzCopy](storage-use-azcopy-v10.md). ## Verify role assignments
You don't need to have one of these roles assigned to your security principal if
To learn more, see [Access control model in Azure Data Lake Storage Gen2](../blobs/data-lake-storage-access-control-model.md).
-## Authorize a user identity
-
-After you've verified that your user identity has been given the necessary authorization level, open a command prompt, type the following command, and then press the ENTER key.
+<a id="authorize-without-a-secret-store"></a>
-```azcopy
-azcopy login
-```
+### Authorize a user identity
-If you receive an error, try including the tenant ID of the organization to which the storage account belongs.
+After you've verified that your user identity has been given the necessary authorization level, type the following command, and then press the ENTER key.
-```azcopy
-azcopy login --tenant-id=<tenant-id>
+```bash
+export AZCOPY_AUTO_LOGIN_TYPE=DEVICE
```
-Replace the `<tenant-id>` placeholder with the tenant ID of the organization to which the storage account belongs. To find the tenant ID, select **Azure Active Directory > Properties > Directory ID** in the Azure portal.
+Then, run any azcopy command (for example: `azcopy list https://contoso.blob.core.windows.net`).
This command returns an authentication code and the URL of a website. Open the website, provide the code, and then choose the **Next** button. ![Create a container](media/storage-use-azcopy-v10/azcopy-login.png)
-A sign-in window will appear. In that window, sign into your Azure account by using your Azure account credentials. After you've successfully signed in, you can close the browser window and begin using AzCopy.
+A sign-in window will appear. In that window, sign into your Azure account by using your Azure account credentials. After you've successfully signed in, the operation can complete.
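On Windows, the same device-login flow works from PowerShell; a minimal sketch, with a placeholder account URL:

```powershell
# Set the variable for the current session only, then run any AzCopy command.
$env:AZCOPY_AUTO_LOGIN_TYPE = "DEVICE"
azcopy list "https://contoso.blob.core.windows.net"
```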
<a id="managed-identity"></a>
You can sign into your account by using a system-wide managed identity that you'
To learn more about how to enable a system-wide managed identity or create a user-assigned managed identity, see [Configure managed identities for Azure resources on a VM using the Azure portal](../../active-directory/managed-identities-azure-resources/qs-configure-portal-windows-vm.md#enable-system-assigned-managed-identity-on-an-existing-vm).
-#### Authorize by using a system-wide managed identity
+### Authorize by using a system-wide managed identity
First, make sure that you've enabled a system-wide managed identity on your VM. See [System-assigned managed identity](../../active-directory/managed-identities-azure-resources/qs-configure-portal-windows-vm.md#system-assigned-managed-identity).
-Then, in your command console, type the following command, and then press the ENTER key.
+Type the following command, and then press the ENTER key.
-```azcopy
-azcopy login --identity
+```bash
+export AZCOPY_AUTO_LOGIN_TYPE=MSI
```
-#### Authorize by using a user-assigned managed identity
+Then, run any azcopy command (for example: `azcopy list https://contoso.blob.core.windows.net`).
+
+### Authorize by using a user-assigned managed identity
First, make sure that you've enabled a user-assigned managed identity on your VM. See [User-assigned managed identity](../../active-directory/managed-identities-azure-resources/qs-configure-portal-windows-vm.md#user-assigned-managed-identity).
-Then, in your command console, type any of the following commands, and then press the ENTER key.
+Type the following command, and then press the ENTER key.
-```azcopy
-azcopy login --identity --identity-client-id "<client-id>"
+```bash
+export AZCOPY_AUTO_LOGIN_TYPE=MSI
+```
+
+Then, type any of the following commands, and then press the ENTER key.
+
+```bash
+export AZCOPY_MSI_CLIENT_ID=<client-id>
``` Replace the `<client-id>` placeholder with the client ID of the user-assigned managed identity.
-```azcopy
-azcopy login --identity --identity-object-id "<object-id>"
+```bash
+export AZCOPY_MSI_OBJECT_ID=<object-id>
``` Replace the `<object-id>` placeholder with the object ID of the user-assigned managed identity.
-```azcopy
-azcopy login --identity --identity-resource-id "<resource-id>"
+```bash
+export AZCOPY_MSI_RESOURCE_STRING=<resource-id>
``` Replace the `<resource-id>` placeholder with the resource ID of the user-assigned managed identity.
+After you set these variables, you can run any azcopy command (for example: `azcopy list https://contoso.blob.core.windows.net`).
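The PowerShell equivalent of these exports, using the client ID variant as an example:

```powershell
# Session-only variables; replace <client-id> with the managed identity's client ID.
$env:AZCOPY_AUTO_LOGIN_TYPE = "MSI"
$env:AZCOPY_MSI_CLIENT_ID = "<client-id>"
azcopy list "https://contoso.blob.core.windows.net"
```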
+ <a id="service-principal"></a>
-## Authorize a service principal
+### Authorize a service principal
This is a great option if you plan to use AzCopy inside of a script that runs without user interaction, particularly when running on-premises. If you plan to run AzCopy on VMs that run in Azure, a managed service identity is easier to administer. To learn more, see the [Authorize a managed identity](#authorize-a-managed-identity) section of this article.
-Before you run a script, you have to sign in interactively at least one time so that you can provide AzCopy with the credentials of your service principal. Those credentials are stored in a secured and encrypted file so that your script doesn't have to provide that sensitive information.
You can sign into your account by using a client secret or by using the password of a certificate that is associated with your service principal's app registration. To learn more about creating a service principal, see [How to: Use the portal to create an Azure AD application and service principal that can access resources](../../active-directory/develop/howto-create-service-principal-portal.md).
To learn more about service principals in general, see [Application and service
#### Authorize a service principal by using a client secret
-Start by setting the `AZCOPY_SPA_CLIENT_SECRET` environment variable to the client secret of your service principal's app registration.
-
-> [!NOTE]
-> Make sure to set this value from your command prompt, and not in the environment variable settings of your operating system. That way, the value is available only to the current session.
-
-This example shows how you could do this in PowerShell.
+Type the following command, and then press the ENTER key.
-```azcopy
-$env:AZCOPY_SPA_CLIENT_SECRET="$(Read-Host -prompt "Enter key")"
+```bash
+export AZCOPY_AUTO_LOGIN_TYPE=SPN
+export AZCOPY_SPA_APPLICATION_ID=<application-id>
+export AZCOPY_SPA_CLIENT_SECRET=<client-secret>
+export AZCOPY_TENANT_ID=<tenant-id>
```
-> [!NOTE]
-> Consider using a prompt as shown in this example. That way, your password won't appear in your console's command history.
-
-Next, type the following command, and then press the ENTER key.
+Replace the `<application-id>` placeholder with the application ID of your service principal's app registration. Replace the `<client-secret>` placeholder with the client secret. Replace the `<tenant-id>` placeholder with the tenant ID of the organization to which the storage account belongs. To find the tenant ID, select **Azure Active Directory > Properties > Directory ID** in the Azure portal.
-```azcopy
-azcopy login --service-principal --application-id application-id --tenant-id=tenant-id
-```
+> [!NOTE]
+> Consider using a prompt to collect the password from the user. That way, your password won't appear in your command history.
-Replace the `<application-id>` placeholder with the application ID of your service principal's app registration. Replace the `<tenant-id>` placeholder with the tenant ID of the organization to which the storage account belongs. To find the tenant ID, select **Azure Active Directory > Properties > Directory ID** in the Azure portal.
+Then, run any azcopy command (for example: `azcopy list https://contoso.blob.core.windows.net`).
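In PowerShell, the same variables can be set for the current session, with the secret collected by a prompt as the note above suggests:

```powershell
$env:AZCOPY_AUTO_LOGIN_TYPE = "SPN"
$env:AZCOPY_SPA_APPLICATION_ID = "<application-id>"
# Prompting keeps the secret out of the console's command history.
$env:AZCOPY_SPA_CLIENT_SECRET = Read-Host -Prompt "Enter client secret"
$env:AZCOPY_TENANT_ID = "<tenant-id>"
azcopy list "https://contoso.blob.core.windows.net"
```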
#### Authorize a service principal by using a certificate
If you prefer to use your own credentials for authorization, you can upload a ce
In addition to uploading your certificate to your app registration, you'll also need to have a copy of the certificate saved to the machine or VM where AzCopy will be running. This copy of the certificate should be in .PFX or .PEM format, and must include the private key. The private key should be password-protected. If you're using Windows, and your certificate exists only in a certificate store, make sure to export that certificate to a PFX file (including the private key). For guidance, see [Export-PfxCertificate](/powershell/module/pki/export-pfxcertificate)
-Next, set the `AZCOPY_SPA_CERT_PASSWORD` environment variable to the certificate password.
+Type the following command, and then press the ENTER key.
-> [!NOTE]
-> Make sure to set this value from your command prompt, and not in the environment variable settings of your operating system. That way, the value is available only to the current session.
+```bash
+export AZCOPY_AUTO_LOGIN_TYPE=SPN
+export AZCOPY_SPA_APPLICATION_ID=<application-id>
+export AZCOPY_SPA_CERT_PATH=<path-to-certificate-file>
+export AZCOPY_SPA_CERT_PASSWORD=<certificate-password>
+export AZCOPY_TENANT_ID=<tenant-id>
+```
-This example shows how you could do this task in PowerShell.
+Replace the `<application-id>` placeholder with the application ID of your service principal's app registration. Replace the `<path-to-certificate-file>` placeholder with the relative or fully qualified path to the certificate file. AzCopy saves the path to this certificate but it doesn't save a copy of the certificate, so make sure to keep that certificate in place. Replace the `<certificate-password>` placeholder with the password of the certificate. Replace the `<tenant-id>` placeholder with the tenant ID of the organization to which the storage account belongs. To find the tenant ID, select **Azure Active Directory > Properties > Directory ID** in the Azure portal.
-```azcopy
-$env:AZCOPY_SPA_CERT_PASSWORD="$(Read-Host -prompt "Enter key")"
-```
+> [!NOTE]
+> Consider using a prompt to collect the password from the user. That way, your password won't appear in your command history.
-Next, type the following command, and then press the ENTER key.
+Then, run any azcopy command (for example: `azcopy list https://contoso.blob.core.windows.net`).
-```azcopy
-azcopy login --service-principal --application-id application-id --certificate-path <path-to-certificate-file> --tenant-id=<tenant-id>
-```
+## Authorize by using the AzCopy login command
-Replace the `<application-id>` placeholder with the application ID of your service principal's app registration. Replace the `<path-to-certificate-file>` placeholder with the relative or fully qualified path to the certificate file. AzCopy saves the path to this certificate but it doesn't save a copy of the certificate, so make sure to keep that certificate in place. Replace the `<tenant-id>` placeholder with the tenant ID of the organization to which the storage account belongs. To find the tenant ID, select **Azure Active Directory > Properties > Directory ID** in the Azure portal.
+As an alternative to using in-memory variables, you can authorize access by using the `azcopy login` command. However, this approach isn't recommended because the `azcopy login` command will soon be deprecated.
-> [!NOTE]
-> Consider using a prompt as shown in this example. That way, your password won't appear in your console's command history.
+The `azcopy login` command retrieves an OAuth token and then places that token into a secret store on your system. If your operating system doesn't have a secret store such as a Linux *keyring*, the `azcopy login` command won't work because there is nowhere to place the token.
-## Authorize without a secret store
+> [!IMPORTANT]
+> The `azcopy login` command will soon be deprecated.
-The `azcopy login` command retrieves an OAuth token and then places that token into a secret store on your system. If your operating system doesn't have a secret store such as a Linux *keyring*, the `azcopy login` command won't work because there is nowhere to place the token.
+### Authorize a user identity (azcopy login command)
-Instead of using the `azcopy login` command, you can set in-memory environment variables. Then run any AzCopy command. AzCopy will retrieve the Auth token required to complete the operation. After the operation completes, the token disappears from memory.
+After you've verified that your user identity has been given the necessary authorization level, open a command prompt, type the following command, and then press the ENTER key.
-### Authorize a user identity
+```azcopy
+azcopy login
+```
-After you've verified that your user identity has been given the necessary authorization level, type the following command, and then press the ENTER key.
+If you receive an error, try including the tenant ID of the organization to which the storage account belongs.
-```bash
-export AZCOPY_AUTO_LOGIN_TYPE=DEVICE
+```azcopy
+azcopy login --tenant-id=<tenant-id>
```
-Then, run any azcopy command (For example: `azcopy list https://contoso.blob.core.windows.net`).
+Replace the `<tenant-id>` placeholder with the tenant ID of the organization to which the storage account belongs. To find the tenant ID, select **Azure Active Directory > Properties > Directory ID** in the Azure portal.
This command returns an authentication code and the URL of a website. Open the website, provide the code, and then choose the **Next** button. ![Create a container](media/storage-use-azcopy-v10/azcopy-login.png)
-A sign-in window will appear. In that window, sign into your Azure account by using your Azure account credentials. After you've successfully signed in, the operation can complete.
+A sign-in window will appear. In that window, sign into your Azure account by using your Azure account credentials. After you've successfully signed in, you can close the browser window and begin using AzCopy.
-### Authorize by using a system-wide managed identity
+### Authorize by using a system-wide managed identity (azcopy login command)
First, make sure that you've enabled a system-wide managed identity on your VM. See [System-assigned managed identity](../../active-directory/managed-identities-azure-resources/qs-configure-portal-windows-vm.md#system-assigned-managed-identity).
-Type the following command, and then press the ENTER key.
+Then, in your command console, type the following command, and then press the ENTER key.
-```bash
-export AZCOPY_AUTO_LOGIN_TYPE=MSI
+```azcopy
+azcopy login --identity
```
-Then, run any azcopy command (For example: `azcopy list https://contoso.blob.core.windows.net`).
-
-### Authorize by using a user-assigned managed identity
+### Authorize by using a user-assigned managed identity (azcopy login command)
First, make sure that you've enabled a user-assigned managed identity on your VM. See [User-assigned managed identity](../../active-directory/managed-identities-azure-resources/qs-configure-portal-windows-vm.md#user-assigned-managed-identity).
-Type the following command, and then press the ENTER key.
-
-```bash
-export AZCOPY_AUTO_LOGIN_TYPE=MSI
-```
-
-Then, type any of the following commands, and then press the ENTER key.
+Then, in your command console, type any of the following commands, and then press the ENTER key.
-```bash
-export AZCOPY_MSI_CLIENT_ID=<client-id>
+```azcopy
+azcopy login --identity --identity-client-id "<client-id>"
``` Replace the `<client-id>` placeholder with the client ID of the user-assigned managed identity.
-```bash
-export AZCOPY_MSI_OBJECT_ID=<object-id>
+```azcopy
+azcopy login --identity --identity-object-id "<object-id>"
``` Replace the `<object-id>` placeholder with the object ID of the user-assigned managed identity.
-```bash
-export AZCOPY_MSI_RESOURCE_STRING=<resource-id>
+```azcopy
+azcopy login --identity --identity-resource-id "<resource-id>"
``` Replace the `<resource-id>` placeholder with the resource ID of the user-assigned managed identity.
-After you set these variables, you can run any azcopy command (For example: `azcopy list https://contoso.blob.core.windows.net`).
-### Authorize a service principal
+### Authorize a service principal (azcopy login command)
+
+Before you run a script, you have to sign in interactively at least one time so that you can provide AzCopy with the credentials of your service principal. Those credentials are stored in a secured and encrypted file so that your script doesn't have to provide that sensitive information.
You can sign into your account by using a client secret or by using the password of a certificate that is associated with your service principal's app registration.
-#### Authorize a service principal by using a client secret
+To learn more about creating a service principal, see [How to: Use the portal to create an Azure AD application and service principal that can access resources](../../active-directory/develop/howto-create-service-principal-portal.md).
-Type the following command, and then press the ENTER key.
+#### Authorize a service principal by using a client secret (azcopy login command)
-```bash
-export AZCOPY_AUTO_LOGIN_TYPE=SPN
-export AZCOPY_SPA_APPLICATION_ID=<application-id>
-export AZCOPY_SPA_CLIENT_SECRET=<client-secret>
-export AZCOPY_TENANT_ID=<tenant-id>
-```
+Start by setting the `AZCOPY_SPA_CLIENT_SECRET` environment variable to the client secret of your service principal's app registration.
-Replace the `<application-id>` placeholder with the application ID of your service principal's app registration. Replace the `<client-secret>` placeholder with the client secret. Replace the `<tenant-id>` placeholder with the tenant ID of the organization to which the storage account belongs. To find the tenant ID, select **Azure Active Directory > Properties > Directory ID** in the Azure portal.
+> [!NOTE]
+> Make sure to set this value from your command prompt, and not in the environment variable settings of your operating system. That way, the value is available only to the current session.
+
+This example shows how you could do this in PowerShell.
+
+```azcopy
+$env:AZCOPY_SPA_CLIENT_SECRET="$(Read-Host -prompt "Enter key")"
+```
> [!NOTE]
-> Consider using a prompt to collect the password from the user. That way, your password won't appear in your command history.
+> Consider using a prompt as shown in this example. That way, your password won't appear in your console's command history.
-Then, run any azcopy command (For example: `azcopy list https://contoso.blob.core.windows.net`).
+Next, type the following command, and then press the ENTER key.
-#### Authorize a service principal by using a certificate
+```azcopy
+azcopy login --service-principal --application-id application-id --tenant-id=tenant-id
+```
+
+Replace the `<application-id>` placeholder with the application ID of your service principal's app registration. Replace the `<tenant-id>` placeholder with the tenant ID of the organization to which the storage account belongs. To find the tenant ID, select **Azure Active Directory > Properties > Directory ID** in the Azure portal.
+
+#### Authorize a service principal by using a certificate (azcopy login command)
If you prefer to use your own credentials for authorization, you can upload a certificate to your app registration, and then use that certificate to log in.
-In addition to uploading your certificate to your app registration, you'll also need to have a copy of the certificate saved to the machine or VM where AzCopy will be running. This copy of the certificate should be in .PFX or .PEM format, and must include the private key. The private key should be password-protected.
+In addition to uploading your certificate to your app registration, you'll also need to have a copy of the certificate saved to the machine or VM where AzCopy will be running. This copy of the certificate should be in .PFX or .PEM format, and must include the private key. The private key should be password-protected. If you're using Windows, and your certificate exists only in a certificate store, make sure to export that certificate to a PFX file (including the private key). For guidance, see [Export-PfxCertificate](/powershell/module/pki/export-pfxcertificate)
-Type the following command, and then press the ENTER key.
+Next, set the `AZCOPY_SPA_CERT_PASSWORD` environment variable to the certificate password.
-```bash
-export AZCOPY_AUTO_LOGIN_TYPE=SPN
-export AZCOPY_SPA_APPLICATION_ID=<application-id>
-export AZCOPY_SPA_CERT_PATH=<path-to-certificate-file>
-export AZCOPY_SPA_CERT_PASSWORD=<certificate-password>
-export AZCOPY_TENANT_ID=<tenant-id>
+> [!NOTE]
+> Make sure to set this value from your command prompt, and not in the environment variable settings of your operating system. That way, the value is available only to the current session.
+
+This example shows how you could do this task in PowerShell.
+
+```azcopy
+$env:AZCOPY_SPA_CERT_PASSWORD="$(Read-Host -prompt "Enter key")"
```
-Replace the `<application-id>` placeholder with the application ID of your service principal's app registration. Replace the `<path-to-certificate-file>` placeholder with the relative or fully qualified path to the certificate file. AzCopy saves the path to this certificate but it doesn't save a copy of the certificate, so make sure to keep that certificate in place. Replace the `<certificate-password>` placeholder with the password of the certificate. Replace the `<tenant-id>` placeholder with the tenant ID of the organization to which the storage account belongs. To find the tenant ID, select **Azure Active Directory > Properties > Directory ID** in the Azure portal.
+Next, type the following command, and then press the ENTER key.
-> [!NOTE]
-> Consider using a prompt to collect the password from the user. That way, your password won't appear in your command history.
+```azcopy
+azcopy login --service-principal --application-id application-id --certificate-path <path-to-certificate-file> --tenant-id=<tenant-id>
+```
-Then, run any azcopy command (For example: `azcopy list https://contoso.blob.core.windows.net`).
+Replace the `<application-id>` placeholder with the application ID of your service principal's app registration. Replace the `<path-to-certificate-file>` placeholder with the relative or fully qualified path to the certificate file. AzCopy saves the path to this certificate but it doesn't save a copy of the certificate, so make sure to keep that certificate in place. Replace the `<tenant-id>` placeholder with the tenant ID of the organization to which the storage account belongs. To find the tenant ID, select **Azure Active Directory > Properties > Directory ID** in the Azure portal.
+
+> [!NOTE]
+> Consider using a prompt as shown in this example. That way, your password won't appear in your console's command history.
## Next steps
storage Transport Layer Security Configure Minimum Version https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/transport-layer-security-configure-minimum-version.md
New-AzStorageAccount -ResourceGroupName $rgName `
-Name $accountName ` -Location $location ` -SkuName Standard_GRS `
+ -AllowBlobPublicAccess $false `
-MinimumTlsVersion TLS1_1 # Read the MinimumTlsVersion property.
az storage account create \
--resource-group <resource-group> \ --kind StorageV2 \ --location <location> \
+ --allow-blob-public-access false \
--min-tls-version TLS1_1 az storage account show \
storage File Sync Deployment Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/file-sync/file-sync-deployment-guide.md
if ($osver.Equals([System.Version]::new(10, 0, 20348, 0))) {
-Uri https://aka.ms/afs/agent/Server2012R2 ` -OutFile "StorageSyncAgent.msi" } else {
- throw [System.PlatformNotSupportedException]::new("Azure File Sync is only supported on Windows Server 2012 R2, Windows Server 2016, and Windows Server 2019")
+ throw [System.PlatformNotSupportedException]::new("Azure File Sync is only supported on Windows Server 2012 R2, Windows Server 2016, Windows Server 2019, and Windows Server 2022")
} # Install the MSI. Start-Process is used to PowerShell blocks until the operation is complete.
storage Files Smb Protocol https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/files-smb-protocol.md
description: Learn about file shares hosted in Azure Files using the Server Mess
Previously updated : 03/31/2023 Last updated : 09/29/2023
Azure Files exposes the following settings:
- **Kerberos ticket encryption**: Which encryption algorithms are allowed. Supported encryption algorithms are AES-256 (recommended) and RC4-HMAC. - **SMB channel encryption**: Which SMB channel encryption algorithms are allowed. Supported encryption algorithms are AES-256-GCM, AES-128-GCM, and AES-128-CCM.
-The SMB security settings can be viewed and changed using the Azure portal, PowerShell, or CLI. Please select the desired tab to see the steps on how to get and set the SMB security settings.
+You can view and change the SMB security settings using the Azure portal, PowerShell, or CLI. Select the desired tab to see the steps on how to get and set the SMB security settings.
# [Portal](#tab/azure-portal) To view or change the SMB security settings using the Azure portal, follow these steps:
To view or change the SMB security settings using the Azure portal, follow these
After you've entered the desired security settings, select **Save**. # [PowerShell](#tab/azure-powershell)
-To get the SMB protocol settings, use the `Get-AzStorageFileServiceProperty` cmdlet. Remember to replace `<resource-group>` and `<storage-account>` with the appropriate values for your environment before running these PowerShell commands.
+To get the SMB protocol settings, use the `Get-AzStorageFileServiceProperty` cmdlet. Remember to replace `<resource-group>` and `<storage-account>` with the appropriate values for your environment. If you've deliberately set any of your SMB security settings to null, for example by disabling SMB channel encryption, see the instructions in the script about commenting out certain lines.
```PowerShell $resourceGroupName = "<resource-group>"
$storageAccount = Get-AzStorageAccount `
# If you've never changed any SMB security settings, the values for the SMB security # settings returned by Azure Files will be null. Null returned values should be interpreted # as "default settings are in effect". To make this more user-friendly, the following
-# PowerShell commands replace null values with the human-readable default values.
+# PowerShell commands replace null values with the human-readable default values.
+# If you've deliberately set any of your SMB security settings to null, for example by
+# disabling SMB channel encryption, comment out the following four lines to avoid
+# changing the security settings back to defaults.
$smbProtocolVersions = "SMB2.1", "SMB3.0", "SMB3.1.1" $smbAuthenticationMethods = "NTLMv2", "Kerberos" $smbKerberosTicketEncryption = "RC4-HMAC", "AES-256"
Update-AzStorageFileServiceProperty `
``` # [Azure CLI](#tab/azure-cli)
-To get the status of the SMB security settings, use the `az storage account file-service-properties show` command. Remember to replace `<resource-group>` and `<storage-account>` with the appropriate values for your environment before running these Bash commands.
+To get the status of the SMB security settings, use the `az storage account file-service-properties show` command. Remember to replace `<resource-group>` and `<storage-account>` with the appropriate values for your environment before running these Bash commands. If you've deliberately set any of your SMB security settings to null, for example by disabling SMB channel encryption, see the instructions in the script about commenting out certain lines.
```bash RESOURCE_GROUP_NAME="<resource-group>"
STORAGE_ACCOUNT_NAME="<storage-account>"
# If you've never changed any SMB security settings, the values for the SMB security
# settings returned by Azure Files will be null. Null returned values should be interpreted
-# as "default settings are in effect". To make this more user-friendly, the following
-# PowerShell commands replace null values with the human-readable default values.
+# as "default settings are in effect". To make this more user-friendly, the commands in the
+# following two sections replace null values with the human-readable default values.
+# If you've deliberately set any of your SMB security settings to null, for example by
+# disabling SMB channel encryption, comment out the following two sections before
+# running the script to avoid changing the security settings back to defaults.
# Values to be replaced
REPLACESMBPROTOCOLVERSION="\"smbProtocolVersions\": null"
PROTOCOLSETTINGS="${protocolSettings/$REPLACESMBKERBEROSTICKETENCRYPTION/$DEFAUL
echo $PROTOCOLSETTINGS ```
-Depending on your organizations security, performance, and compatibility requirements, you may wish to modify the SMB protocol settings. The following Azure CLI command restricts your SMB file shares to only the most secure options.
+Depending on your organization's security, performance, and compatibility requirements, you might wish to modify the SMB protocol settings. The following Azure CLI command restricts your SMB file shares to only the most secure options.
> [!Important]
-> Restricting SMB Azure file shares to only the most secure options may result in some clients not being able to connect if they do not meet the requirements. For example, AES-256-GCM was introduced as an option for SMB channel encryption starting in Windows Server 2022 and Windows 11. This means that older clients that do not support AES-256-GCM will not be able to connect.
+> Restricting SMB Azure file shares to only the most secure options might result in some clients not being able to connect if they don't meet the requirements. For example, AES-256-GCM was introduced as an option for SMB channel encryption starting in Windows Server 2022 and Windows 11. This means that older clients that don't support AES-256-GCM won't be able to connect.
```azurecli
az storage account file-service-properties update \
az storage account file-service-properties update \
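```

For reference, a complete form of this command might look like the following sketch, which allows only SMB 3.1.1, Kerberos authentication, AES-256 Kerberos ticket encryption, and AES-256-GCM channel encryption. The parameter values shown are illustrative; adjust them to your own requirements, and reuse the shell variables defined earlier.

```azurecli
# A sketch: restrict SMB file shares to the most secure options only.
az storage account file-service-properties update \
    --resource-group $RESOURCE_GROUP_NAME \
    --account-name $STORAGE_ACCOUNT_NAME \
    --versions "SMB3.1.1" \
    --auth-methods "Kerberos" \
    --kerb-ticket-encryption "AES-256" \
    --channel-encryption "AES-256-GCM"
```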
## Limitations
-SMB file shares in Azure Files support a subset of features supported by SMB protocol and the NTFS file system. Although most use cases and applications do not require these features, some applications may not work properly with Azure Files if they rely on unsupported features. The following features are not supported:
+SMB file shares in Azure Files support a subset of features supported by the SMB protocol and the NTFS file system. Although most use cases and applications don't require these features, some applications might not work properly with Azure Files if they rely on unsupported features. The following features aren't supported:
- [SMB Direct](/windows-server/storage/file-server/smb-direct)
- SMB directory leasing
stream-analytics Kafka Output https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/kafka-output.md
Previously updated : 09/26/2023 Last updated : 09/29/2023

# Kafka output from Azure Stream Analytics (Preview)
You can use four types of security protocols to connect to your Kafka clusters:
|-|--|
|mTLS |encryption and authentication |
|SASL_SSL |It combines two different security mechanisms - SASL (Simple Authentication and Security Layer) and SSL (Secure Sockets Layer) - to ensure both authentication and encryption are in place for data transmission. |
-|Kafka topic |A unit of your Kafka cluster you want to write events to. |
|SASL_PLAINTEXT |standard authentication with username and password without encryption |
|None |The serialization format (JSON, CSV, Avro, Parquet) of the incoming data stream. |
stream-analytics Stream Analytics Define Kafka Input https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/stream-analytics-define-kafka-input.md
Previously updated : 09/26/2023 Last updated : 09/29/2023

# Stream data from Kafka into Azure Stream Analytics (Preview)
You can use four types of security protocols to connect to your Kafka clusters:
|-|--|
|mTLS |encryption and authentication |
|SASL_SSL |It combines two different security mechanisms - SASL (Simple Authentication and Security Layer) and SSL (Secure Sockets Layer) - to ensure both authentication and encryption are in place for data transmission. |
-|Kafka topic |A unit of your Kafka cluster you want to write events to. |
|SASL_PLAINTEXT |standard authentication with username and password without encryption |
|None |The serialization format (JSON, CSV, Avro, Parquet) of the incoming data stream. |
synapse-analytics Setup Environment Cognitive Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/machine-learning/setup-environment-cognitive-services.md
To get started on Azure Kubernetes Service, follow these steps:
1. [Deploy an Azure Kubernetes Service (AKS) cluster using the Azure portal](../../aks/learn/quick-kubernetes-deploy-portal.md)
-1. [Install the Apache Spark 2.4.0 helm chart](https://hub.helm.sh/charts/microsoft/spark)
+1. [Install the Apache Spark 2.4.0 helm chart](https://hub.helm.sh/charts/microsoft/spark) - warning: [Spark 2.4](../spark/apache-spark-24-runtime.md) is retired and no longer supported.
1. [Install an Azure AI container using Helm](../../ai-services/computer-vision/deploy-computer-vision-on-premises.md)
synapse-analytics Apache Spark 24 Runtime https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/apache-spark-24-runtime.md
Title: Azure Synapse Runtime for Apache Spark 2.4 (EOLA)
-description: Supported versions of Spark, Scala, Python, and .NET for Apache Spark 2.4.
-
+ Title: Azure Synapse Runtime for Apache Spark 2.4 (unsupported)
+description: Versions of Spark, Scala, Python, and .NET for Apache Spark 2.4.
+
-# Azure Synapse Runtime for Apache Spark 2.4 (EOLA)
+# Azure Synapse Runtime for Apache Spark 2.4 (unsupported)
Azure Synapse Analytics supports multiple runtimes for Apache Spark. This document will cover the runtime components and versions for the Azure Synapse Runtime for Apache Spark 2.4.
-> [!IMPORTANT]
-> * End of life announced (EOLA) for Azure Synapse Runtime for Apache Spark 2.4 has been announced July 29, 2022.
-> * In accordance with the Synapse runtime for Apache Spark lifecycle policy, Azure Synapse runtime for Apache Spark 2.4 will be retired and disabled as of September 29, 2023. After the EOL date, the retired runtimes are unavailable for new Spark pools and existing workflows can't execute. Metadata will temporarily remain in the Synapse workspace.
-> * We recommend that you upgrade your Apache Spark 2.4 workloads to version 3.3 at your earliest convenience.
+> [!WARNING]
+> End of Support Notification for Azure Synapse Runtime for Apache Spark 2.4
+> * Effective September 29, 2023, Azure Synapse will discontinue official support for Spark 2.4 runtimes.
+> * After September 29, 2023, we won't address any support tickets related to Spark 2.4. There will be no release pipeline in place for bug or security fixes for Spark 2.4. Using Spark 2.4 after the support cutoff date is at your own risk, and we strongly discourage its continued use due to potential security and functionality concerns.
+> * Recognizing that certain customers might need additional time to transition to a higher runtime version, we're temporarily extending the usage option for Spark 2.4, but we won't provide any official support for it.
+> * We strongly advise that you proactively upgrade your workloads to a more recent version of the runtime (for example, [Azure Synapse Runtime for Apache Spark 3.3 (GA)](./apache-spark-33-runtime.md)).
+
+## Component versions
+
+| Component | Version |
synapse-analytics Apache Spark External Metastore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/apache-spark-external-metastore.md
Azure Synapse Analytics allows Apache Spark pools in the same workspace to share
## Supported Hive Metastore versions
-The feature works with both Spark 2.4 and Spark 3.1. The following table shows the supported Hive Metastore versions for each Spark version.
+The feature works with Spark 3.1. The following table shows the supported Hive Metastore versions for each Spark version.
|Spark Version|HMS 0.13.X|HMS 1.2.X|HMS 2.1.X|HMS 2.3.x|HMS 3.1.X|
|--|--|--|--|--|--|
If the underlying data of your Hive tables are stored in Azure Blob storage acco
3. Provide **Name** of the linked service. Record the name of the linked service; this info will be used in the Spark configuration shortly.
4. Select the Azure Blob Storage account. Make sure the Authentication method is **Account key**. Currently, the Spark pool can only access a Blob Storage account via the account key.
5. **Test connection** and click **Create**.
-6. After creating the linked service to Blob Storage account, when you run Spark queries, make sure you run below Spark code in the notebook to get access to the the Blob Storage account for the Spark session. Learn more about why you need to do this [here](./apache-spark-secure-credentials-with-tokenlibrary.md).
+6. After creating the linked service to Blob Storage account, when you run Spark queries, make sure you run below Spark code in the notebook to get access to the Blob Storage account for the Spark session. Learn more about why you need to do this [here](./apache-spark-secure-credentials-with-tokenlibrary.md).
```python
%%pyspark
After setting up storage connections, you can query the existing tables in the H
No credentials found for account xxxxx.blob.core.windows.net in the configuration, and its container xxxxx is not accessible using anonymous credentials. Please check if the container exists first. If it is not publicly available, you have to provide account credentials.
```
-When use key authentication to your storage account via linked service, you need to take an extra step to get the token for Spark session. Run below code to configure your Spark session before running the query. Learn more about why you need to do this here.
+When using key authentication to your storage account via linked service, you need to take an extra step to get the token for the Spark session. Run the following code to configure your Spark session before running the query. Learn more about why you need to do this here.
```python
%%pyspark
You can easily fix this issue by appending `/usr/hdp/current/hadoop-client/*` to
```text Eg: spark.sql.hive.metastore.jars":"/opt/hive-metastore/lib-2.3/*:/usr/hdp/current/hadoop-client/lib/*:/usr/hdp/current/hadoop-client/*
-```
+```
synapse-analytics Apache Spark Intelligent Cache Concept https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/apache-spark-intelligent-cache-concept.md
You won't see the benefit of this feature if:
* Your workload requires large amounts of shuffle. In that case, disabling the Intelligent Cache frees up available space to prevent your job from failing due to insufficient storage space (see the sketch after this list).
-* You're using a Spark 2.4 pool, you'll need to upgrade your pool to the latest version of Spark.
+* You're using a Spark 3.1 pool; you'll need to upgrade your pool to the latest version of Spark.
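
If you do need to disable the Intelligent Cache for a shuffle-heavy session, a minimal sketch follows. It assumes the `spark.synapse.vegas.useCache` setting controls the feature; check your pool's configuration reference before relying on it.

```python
%%pyspark
# Disable the Intelligent Cache for the current Spark session.
# ("spark.synapse.vegas.useCache" is assumed to be the controlling setting.)
spark.conf.set("spark.synapse.vegas.useCache", "false")
```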
## Learn more
synapse-analytics Apache Spark Performance Hyperspace https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/apache-spark-performance-hyperspace.md
This document is also available in notebook form, for [Python](https://github.co
## Setup

>[!Note]
-> Hyperspace is supported in Azure Synapse Runtime for Apache Spark 2.4 (EOLA), Azure Synapse Runtime for Apache Spark 3.1 (EOLA), and Azure Synapse Runtime for Apache Spark 3.2 (EOLA). However, it should be noted that Hyperspace is not supported in Azure Synapse Runtime for Apache Spark 3.3 (GA).
+> Hyperspace is supported in Azure Synapse Runtime for Apache Spark 3.1 (EOLA) and Azure Synapse Runtime for Apache Spark 3.2 (EOLA). However, Hyperspace is not supported in Azure Synapse Runtime for Apache Spark 3.3 (GA).
To begin with, start a new Spark session. Since this document is a tutorial merely to illustrate what Hyperspace can offer, you will make a configuration change that allows us to highlight what Hyperspace is doing on small datasets.
synapse-analytics Apache Spark Version Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/apache-spark-version-support.md
Title: Apache Spark version support
description: Supported versions of Spark, Scala, Python, .NET
# Azure Synapse runtimes
-Apache Spark pools in Azure Synapse use runtimes to tie together essential component versions such as Azure Synapse optimizations, packages, and connectors with a specific Apache Spark version. Each runtime will be upgraded periodically to include new improvements, features, and patches.
-
-When you create a serverless Apache Spark pool, you will have the option to select the corresponding Apache Spark version. Based on this, the pool will come pre-installed with the associated runtime components and packages. The runtimes have the following advantages:
-
+Apache Spark pools in Azure Synapse use runtimes to tie together essential component versions such as Azure Synapse optimizations, packages, and connectors with a specific Apache Spark version. Each runtime will be upgraded periodically to include new improvements, features, and patches. When you create a serverless Apache Spark pool, you will have the option to select the corresponding Apache Spark version. Based on this, the pool will come pre-installed with the associated runtime components and packages. The runtimes have the following advantages:
- Faster session startup times
- Tested compatibility with specific Apache Spark versions
- Access to popular, compatible connectors and open-source packages
+
+## Supported Azure Synapse runtime releases
+
+> [!WARNING]
+> End of Support Notification for Azure Synapse Runtime for Apache Spark 2.4
+> * Effective September 29, 2023, Azure Synapse will discontinue official support for Spark 2.4 runtimes.
+> * After September 29, 2023, we won't address any support tickets related to Spark 2.4. There will be no release pipeline in place for bug or security fixes for Spark 2.4. Using Spark 2.4 after the support cutoff date is at your own risk, and we strongly discourage its continued use due to potential security and functionality concerns.
+> * Recognizing that certain customers might need additional time to transition to a higher runtime version, we're temporarily extending the usage option for Spark 2.4, but we won't provide any official support for it.
+> * We strongly advise that you proactively upgrade your workloads to a more recent version of the runtime (for example, [Azure Synapse Runtime for Apache Spark 3.3 (GA)](./apache-spark-33-runtime.md)).
+
+The following table lists the runtime name, Apache Spark version, and release date for supported Azure Synapse Runtime releases.
+
+| Runtime name | Release date | Release stage | End of life announcement date | End of life effective date |
+|-|-|-|-|-|
+| [Azure Synapse Runtime for Apache Spark 3.3](./apache-spark-33-runtime.md) | Nov 17, 2022 | GA (as of Feb 23, 2023) | Nov 17, 2023 | Nov 17, 2024 |
+| [Azure Synapse Runtime for Apache Spark 3.2](./apache-spark-32-runtime.md) | July 8, 2022 | __End of Life Announced (EOLA)__ | July 8, 2023 | July 8, 2024 |
+| [Azure Synapse Runtime for Apache Spark 3.1](./apache-spark-3-runtime.md) | May 26, 2021 | __End of Life Announced (EOLA)__ | January 26, 2023 | January 26, 2024 |
+| [Azure Synapse Runtime for Apache Spark 2.4](./apache-spark-24-runtime.md) | December 15, 2020 | __End of Life (EOL)__ | __July 29, 2022__ | __September 29, 2023__ |
+
+## Runtime release stages
+
+For the complete runtime for Apache Spark lifecycle and support policies, refer to [Synapse runtime for Apache Spark lifecycle and supportability](./runtime-for-apache-spark-lifecycle-and-supportability.md).
+
+## Runtime patching
+
+Azure Synapse runtime for Apache Spark patches are rolled out monthly, containing bug, feature, and security fixes to the Apache Spark core engine, language environments, connectors, and libraries.
+
> [!NOTE]
> - Maintenance updates will be automatically applied to new sessions for a given serverless Apache Spark pool.
> - You should test and validate that your applications run properly when using new runtime versions.
When you create a serverless Apache Spark pool, you will have the option to sele
> * ```org/apache/log4j/chainsaw/*```
>
> While the above classes were not used in the default Log4j configurations in Synapse, it is possible that some user application could still depend on them. If your application needs to use these classes, use Library Management to add a secure version of Log4j to the Spark Pool. __Do not use Log4j version 1.2.17__, as it would reintroduce the vulnerabilities.
->
-
-## Supported Azure Synapse runtime releases
-The following table lists the runtime name, Apache Spark version, and release date for supported Azure Synapse Runtime releases.
-
-| Runtime name | Release date | Release stage | End of life announcement date | End of life effective date |
-|-|-|-|-|-|
-| [Azure Synapse Runtime for Apache Spark 3.3](./apache-spark-33-runtime.md) | Nov 17, 2022 | GA (as of Feb 23, 2023) | Nov 17, 2023 | Nov 17, 2024 |
-| [Azure Synapse Runtime for Apache Spark 3.2](./apache-spark-32-runtime.md) | July 8, 2022 | __End of Life Announced (EOLA)__ | July 8, 2023 | July 8, 2024 |
-| [Azure Synapse Runtime for Apache Spark 3.1](./apache-spark-3-runtime.md) | May 26, 2021 | __End of Life Announced (EOLA)__ | January 26, 2023 | January 26, 2024 |
-| [Azure Synapse Runtime for Apache Spark 2.4](./apache-spark-24-runtime.md) | December 15, 2020 | __End of Life Announced (EOLA)__ | __July 29, 2022__ | __September 29, 2023__ |
-
-## Runtime release stages
-
-For the complete runtime for Apache Spark lifecycle and support policies, refer to [Synapse runtime for Apache Spark lifecycle and supportability](./runtime-for-apache-spark-lifecycle-and-supportability.md).
-
-## Runtime patching
-
-Azure Synapse runtime for Apache Spark patches are rolled out monthly containing bug, feature and security fixes to the Apache Spark core engine, language environments, connectors and libraries.
The patch policy differs based on the [runtime lifecycle stage](./runtime-for-apache-spark-lifecycle-and-supportability.md):

1. Generally Available (GA) runtime: Receives no upgrades on major versions (that is, 3.x -> 4.x), but is upgraded to a minor version (that is, 3.x -> 3.y) as long as there are no deprecation or regression impacts.
synapse-analytics Apache Spark Cdm Connector https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/data-sources/apache-spark-cdm-connector.md
For information on defining Common Data Model documents by using Common Data Mod
At a high level, the connector supports:
-* Spark 2.4, 3.1, and 3.2.
+* Spark 3.1, 3.2, and 3.3.
* Reading data from an entity in a Common Data Model folder into a Spark DataFrame. * Writing from a Spark DataFrame to an entity in a Common Data Model folder based on a Common Data Model entity definition. * Writing from a Spark DataFrame to an entity in a Common Data Model folder based on the DataFrame schema.
synapse-analytics Apache Spark Kusto Connector https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/data-sources/apache-spark-kusto-connector.md
# Azure Data Explorer (Kusto) connector for Apache Spark
-The Azure Data Explorer (Kusto) connector for Apache Spark is designed to efficiently transfer data between Kusto clusters and Spark. This connector is available in Python, Java, and .NET. It is built in to the Azure Synapse Apache Spark 2.4 runtime (EOLA).
+The Azure Data Explorer (Kusto) connector for Apache Spark is designed to efficiently transfer data between Kusto clusters and Spark. This connector is available in Python, Java, and .NET.
## Authentication

When using Azure Synapse Notebooks or Apache Spark job definitions, the authentication between systems is made seamless with the linked service. The Token Service connects with Azure Active Directory to obtain security tokens for use when accessing the Kusto cluster.
-For Azure Synapse Pipelines, the authentication will use the service principal name. Currently, managed identities are not supported with the Azure Data Explorer connector.
+For Azure Synapse Pipelines, the authentication uses the service principal name. Currently, managed identities aren't supported with the Azure Data Explorer connector.
## Prerequisites
- - [Connect to Azure Data Explorer](../../quickstart-connect-azure-data-explorer.md): You will need to set up a Linked Service to connect to an existing Kusto cluster.
+ - [Connect to Azure Data Explorer](../../quickstart-connect-azure-data-explorer.md): You need to set up a Linked Service to connect to an existing Kusto cluster.
## Limitations
- - The Azure Data Explorer (Kusto) connector is currently only supported on the Azure Synapse Apache Spark 2.4 runtime (EOLA).
- The Azure Data Explorer linked service can only be configured with the Service Principal Name.
- - Within Azure Synapse Notebooks or Apache Spark Job Definitions, the Azure Data Explorer connector will use Azure AD pass-through to connect to the Kusto Cluster.
+ - Within Azure Synapse Notebooks or Apache Spark Job Definitions, the Azure Data Explorer connector uses Azure AD pass-through to connect to the Kusto Cluster.
## Use the Azure Data Explorer (Kusto) connector
synapse-analytics Low Shuffle Merge For Apache Spark https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/low-shuffle-merge-for-apache-spark.md
It's available on Synapse Pools for Apache Spark versions 3.2 and 3.3.
|Version| Availability | Default |
|--|--|--|
-| Delta 0.6 / Spark 2.4 | No | - |
-| Delta 1.2 / Spark 3.2 | Yes | false |
-| Delta 2.2 / Spark 3.3 | Yes | true |
+| Delta 0.6 / [Spark 2.4](./apache-spark-24-runtime.md) | No | - |
+| Delta 1.2 / [Spark 3.2](./apache-spark-32-runtime.md) | Yes | false |
+| Delta 2.2 / [Spark 3.3](./apache-spark-33-runtime.md) | Yes | true |
## Benefits of Low Shuffle Merge
synapse-analytics Spark Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/spark-dotnet.md
The following features are available when you use .NET for Apache Spark in the A
### `DotNetRunner: null` / `Futures timeout` in Synapse Spark Job Definition Run
-Synapse Spark Job Definitions on Spark Pools using Spark 2.4 require `Microsoft.Spark` 1.0.0. Clear your `bin` and `obj` directories, and publish the project using 1.0.0.
+Synapse Spark Job Definitions on Spark Pools using [Spark 2.4](./apache-spark-24-runtime.md) require `Microsoft.Spark` 1.0.0. Clear your `bin` and `obj` directories, and publish the project using 1.0.0.
### OutOfMemoryError: java heap space at org.apache.spark
synapse-analytics Synapse Spark Sql Pool Import Export https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/synapse-spark-sql-pool-import-export.md
This section presents reference code templates to describe how to use and invoke
> [!Note] > Using the Connector in Python-
-> * The connector is supported in Python for Spark 3 only. For Spark 2.4, we can use the Scala connector API to interact with content from a DataFrame in PySpark by using DataFrame.createOrReplaceTempView or DataFrame.createOrReplaceGlobalTempView. See Section - [Using materialized data across cells](#using-materialized-data-across-cells).
+> * The connector is supported in Python for Spark 3 only. For [Spark 2.4 (unsupported)](./apache-spark-24-runtime.md), we can use the Scala connector API to interact with content from a DataFrame in PySpark by using DataFrame.createOrReplaceTempView or DataFrame.createOrReplaceGlobalTempView. See Section - [Using materialized data across cells](#using-materialized-data-across-cells).
> * The call back handle is not available in Python.

### Read from Azure Synapse Dedicated SQL Pool
dfToReadFromTable.show()
> * Table name and query cannot be specified at the same time.
> * Only select queries are allowed. DDL and DML SQLs are not allowed.
> * The select and filter options on dataframe are not pushed down to the SQL dedicated pool when a query is specified.
-> * Read from a query is only available in Spark 3.1 and 3.2. It is not available in Spark 2.4.
+> * Read from a query is only available in Spark 3.1 and 3.2.
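
For PySpark on runtimes where direct reads aren't available, the temp-view workaround mentioned earlier can be sketched as follows. The view name `myTempView` is a hypothetical placeholder, and a preceding Scala cell is assumed to have materialized it.

```python
%%pyspark
# Assumes a previous Scala cell ran:
#   df.createOrReplaceGlobalTempView("myTempView")
# Global temp views live in the global_temp database and are visible across cells.
df = spark.sql("SELECT * FROM global_temp.myTempView")
df.show(10)
```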
##### [Scala](#tab/scala2)
dfToReadFromQueryAsArgument.show()
#### Write Request - `synapsesql` method signature
-The method signature for the Connector version built for Spark 2.4.8 has one less argument, than that applied to the Spark 3.1.2 version. Following are the two method signatures:
+The method signature for the Connector version built for [Spark 2.4.8](./apache-spark-24-runtime.md) has one less argument than that applied to the Spark 3.1.2 version. Following are the two method signatures:
* Spark Pool Version 2.4.8
synapse-analytics Resources Self Help Sql On Demand https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql/resources-self-help-sql-on-demand.md
Finally, make sure the appropriate roles are granted and have not been revoked.
### Unable to create new database as the request will use the old/expired key
-This error is caused by changing workspace customer managed key used for enryption. You can choose to re-encrypt all the data in the workspace with the latest version of the active key. To-re-encrypt, change the key in the Azure portal to a temporary key and then switch back to the key you wish to use for encryption. Learn here how to [manage the workspace keys](../security/workspaces-encryption.md#manage-the-workspace-customer-managed-key).
+This error is caused by changing the workspace customer-managed key used for encryption. You can choose to re-encrypt all the data in the workspace with the latest version of the active key. To re-encrypt, change the key in the Azure portal to a temporary key and then switch back to the key you wish to use for encryption. Learn how to [manage the workspace keys](../security/workspaces-encryption.md#manage-the-workspace-customer-managed-key).
-### Synapse serverless SQL pool is unavailable after transfering a subscription to a different Azure AD tenant
+### Synapse serverless SQL pool is unavailable after transferring a subscription to a different Azure AD tenant
-If you moved a subscription to another Azure AD tenant, you might experience some issues with serverless SQL pool. Create a support ticket and Azure suport will contact you to resolve the issue.
+If you moved a subscription to another Azure AD tenant, you might experience some issues with serverless SQL pool. Create a support ticket and Azure support will contact you to resolve the issue.
## Storage access
If your query returns NULL values instead of partitioning columns or can't find
The error `Inserting value to batch for column type DATETIME2 failed` indicates that the serverless pool can't read the date values from the underlying files. The datetime value stored in the Parquet or Delta Lake file can't be represented as a `DATETIME2` column.
-Inspect the minimum value in the file by using Spark, and check that some dates are less than 0001-01-03. If you stored the files by using the Spark 2.4 version or with the higher Spark version that still uses legacy datetime storage format, the datetime values before are written by using the Julian calendar that isn't aligned with the proleptic Gregorian calendar used in serverless SQL pools.
+Inspect the minimum value in the file by using Spark, and check whether some dates are earlier than 0001-01-03. If you stored the files by using [Spark 2.4 (unsupported runtime version)](../spark/apache-spark-24-runtime.md) or a later Spark version that still uses the legacy datetime storage format, the earlier datetime values are written by using the Julian calendar, which isn't aligned with the proleptic Gregorian calendar used in serverless SQL pools.
There might be a two-day difference between the Julian calendar used to write the values in Parquet (in some Spark versions) and the proleptic Gregorian calendar used in serverless SQL pool. This difference might cause conversion to a negative date value, which is invalid.
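
As a sketch of that inspection, assuming Parquet files at a placeholder path and a date column named `date_col`:

```python
%%pyspark
from pyspark.sql import functions as F

# Read the Parquet files and find the earliest datetime value.
# Dates earlier than 0001-01-03 point to the Julian/proleptic Gregorian
# calendar mismatch described above.
df = spark.read.parquet("abfss://<container>@<account>.dfs.core.windows.net/<path>")
df.select(F.min("date_col").alias("min_date")).show()
```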
If you are exporting your [Dataverse table to Azure Data Lake storage](/power-ap
### Delta tables in Lake databases are not available in serverless SQL pool
-Make sure that your workspace Managed Identity has read access on the ADLS storage that contains Delta folder. The serverless SQL pool reads the Delta Lake table schema from the Delta log that are placed in ADLS and use the workspace Managed Identity to access the Delta transaction logs.
+Make sure that your workspace Managed Identity has read access on the ADLS storage that contains the Delta folder. The serverless SQL pool reads the Delta Lake table schema from the Delta logs that are placed in ADLS and uses the workspace Managed Identity to access the Delta transaction logs.
Try to set up a data source in some SQL Database that references your Azure Data Lake storage using Managed Identity credential, and try to [create external table on top of data source with Managed Identity](develop-storage-files-storage-access-control.md?tabs=managed-identity#access-a-data-source-using-credentials) to confirm that a table with the Managed Identity can access your storage.
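
As a sketch of that verification, assuming a database master key already exists; the credential name, data source name, and storage URL are placeholders:

```sql
-- Credential that tells the pool to use the workspace Managed Identity.
CREATE DATABASE SCOPED CREDENTIAL WorkspaceIdentity
WITH IDENTITY = 'Managed Identity';

-- External data source that references your ADLS storage with that credential.
CREATE EXTERNAL DATA SOURCE MyDataSource
WITH (
    LOCATION = 'https://<account>.dfs.core.windows.net/<container>',
    CREDENTIAL = WorkspaceIdentity
);
```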
virtual-desktop Onedrive Remoteapp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/onedrive-remoteapp.md
+
+ Title: Use Microsoft OneDrive with a RemoteApp - Azure Virtual Desktop
+description: Learn how to use Microsoft OneDrive with a RemoteApp in Azure Virtual Desktop.
+Last updated : 09/27/2023
+# Use Microsoft OneDrive with a RemoteApp in Azure Virtual Desktop
+
+You can use Microsoft OneDrive alongside a RemoteApp in Azure Virtual Desktop, allowing users to access and synchronize their files while using a RemoteApp. When a user connects to a RemoteApp, OneDrive can automatically launch as a companion to the RemoteApp. This article describes how to configure OneDrive to automatically launch alongside a RemoteApp in Azure Virtual Desktop.
+
+> [!IMPORTANT]
+> - You should only use OneDrive with a RemoteApp for testing purposes, as it requires an Insider Preview build of Windows 11 for your session hosts.
+>
+> - You can't use the OneDrive setting **Start OneDrive automatically when I sign in to Windows**.
+
+## User experience
+
+Once configured, when a user launches a RemoteApp, the OneDrive icon is integrated in the taskbar of their local Windows device. If a user launches another RemoteApp from the same host pool on the same session host, the same instance of OneDrive is used and another doesn't start.
+
+If your session hosts are joined to Microsoft Entra ID, you can [silently configure user accounts](/sharepoint/use-silent-account-configuration) so users are automatically signed in to OneDrive and start synchronizing straight away. Otherwise, users need to sign in to OneDrive on first use.
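+
+As a sketch of what silent account configuration involves: the linked article describes a `SilentAccountConfig` policy registry value, and setting it might look like the following, run as administrator on each session host.
+
+```powershell
+# Enable silent account configuration for the OneDrive sync app.
+# ("SilentAccountConfig" is the policy value described in the linked article.)
+New-Item -Path "HKLM:\SOFTWARE\Policies\Microsoft\OneDrive" -Force | Out-Null
+New-ItemProperty -Path "HKLM:\SOFTWARE\Policies\Microsoft\OneDrive" `
+    -Name "SilentAccountConfig" -PropertyType DWord -Value 1 -Force
+```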
+
+The icon for the instance of OneDrive accompanying the RemoteApp in the system tray looks the same as if OneDrive were installed on the local device. You can tell that the OneDrive icon is from the remote session by hovering over the icon; the tooltip includes the word **Remote**.
+
+When a user closes or disconnects from the last RemoteApp they're using on the session host, OneDrive exits within a few minutes, unless the user has the OneDrive Action Center window open.
+
+## Prerequisites
+
+Before you can use OneDrive with a RemoteApp in Azure Virtual Desktop, you need:
+
+- A pooled host pool that is configured as a [validation environment](configure-validation-environment.md).
+
+- Session hosts in the host pool that:
+
+ - Are running Windows 11 Insider Preview Enterprise multi-session, version 22H2, build 25905 or later. To get Insider Preview builds for multi-session, you need to start with a non-Insider build, join session hosts to the Windows Insider Program, then install the preview build. For more information on the Windows Insider Program, see [Getting started with the Windows Insider Program](https://www.microsoft.com/windowsinsider/getting-started).
+
+ - Have the latest version of FSLogix installed. For more information, see [Install FSLogix applications](/fslogix/how-to-install-fslogix).
+
+## Configure OneDrive to launch with a RemoteApp
+
+To configure OneDrive to launch with a RemoteApp in Azure Virtual Desktop, follow these steps:
+
+1. Download and install the latest version of the [OneDrive sync app](https://www.microsoft.com/microsoft-365/onedrive/download) per-machine on your session hosts. For more information, see [Install the sync app per-machine](/sharepoint/per-machine-installation).
+
+1. If your session hosts are joined to Microsoft Entra ID, [silently configure user accounts](/sharepoint/use-silent-account-configuration) for OneDrive on your session hosts, so users are automatically signed in to OneDrive.
+
+1. On your session hosts, set the following registry value:
+
+ - **Key**: `HKLM\Software\Microsoft\Windows\CurrentVersion\Run`
+ - **Type**: `REG_SZ`
+ - **Name**: `OneDrive`
+ - **Data**: `"C:\Program Files\Microsoft OneDrive\OneDrive.exe" /background`
+
+ You can configure the registry using an enterprise deployment tool such as Intune, Configuration Manager, or Group Policy. Alternatively, to set this registry value using PowerShell, open PowerShell as an administrator and run the following command:
+
+ ```powershell
+ New-ItemProperty -Path "HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\Run" -Name OneDrive -PropertyType String -Value '"C:\Program Files\Microsoft OneDrive\OneDrive.exe" /background' -Force
+ ```
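+
+ To confirm the value was written, a quick check with the same key and value name (a sketch; `Get-ItemProperty` simply reads the entry back):
+
+ ```powershell
+ # Returns the OneDrive autorun entry if it was created successfully.
+ Get-ItemProperty -Path "HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\Run" -Name OneDrive
+ ```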
+
+## Test OneDrive with a RemoteApp
+
+To test OneDrive with a RemoteApp, follow these steps:
+
+1. Connect to a RemoteApp from the host pool and check that the OneDrive icon can be seen on the task bar of your local Windows device.
+
+1. Check that OneDrive is synchronizing files by opening the OneDrive Action Center. Sign in to OneDrive if you weren't automatically signed in.
+
+1. From the RemoteApp, check that you can access your files from OneDrive.
+
+1. Finally, close the RemoteApp and any others from the same session host, and within a few minutes OneDrive should exit.
+
+## OneDrive recommendations
+
+When using OneDrive with a RemoteApp in Azure Virtual Desktop, we recommend that you configure the following settings using the OneDrive administrative template. For more information, see [Manage OneDrive using Group Policy](/sharepoint/use-group-policy#manage-onedrive-using-group-policy) and [Use administrative templates in Intune](/sharepoint/configure-sync-intune).
+
+- [Allow syncing OneDrive accounts for only specific organizations](/sharepoint/use-group-policy#allow-syncing-onedrive-accounts-for-only-specific-organizations).
+- [Use OneDrive files On-Demand](/sharepoint/use-group-policy#use-onedrive-files-on-demand).
+- [Silently move Windows known folders to OneDrive](/sharepoint/use-group-policy#silently-move-windows-known-folders-to-onedrive).
+- [Silently sign-in users to the OneDrive sync app with their Windows credentials](/sharepoint/use-group-policy#silently-sign-in-users-to-the-onedrive-sync-app-with-their-windows-credentials).
virtual-desktop Classic Retirement https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/virtual-desktop-fall-2019/classic-retirement.md
+
+ Title: Azure Virtual Desktop (classic) retirement - Azure
+description: Information about the retirement of Azure Virtual Desktop (classic).
+Last updated : 09/27/2023
+# Azure Virtual Desktop (classic) retirement
+
+> [!IMPORTANT]
+> This content applies to Azure Virtual Desktop (classic), which doesn't support Azure Resource Manager Azure Virtual Desktop objects.
+
+Azure Virtual Desktop (classic) will retire on **September 30, 2026**. You should transition to Azure Virtual Desktop before that date.
+
+[Azure Virtual Desktop](../overview.md) replaces Azure Virtual Desktop (classic). Here are some of the benefits of using Azure Virtual Desktop instead of Azure Virtual Desktop (classic):
+
+- Deployments via Azure Resource Manager (ARM)
+- Unified resource management
+- Improved networking and security
+- Scaling and automation features
+- Feature availability and updates
+
+## Retirement timeline
+
+Beginning **September 30, 2023**, you'll no longer be able to create new Azure Virtual Desktop (classic) tenants. Existing Azure Virtual Desktop (classic) resources can still be managed and migrated, and are supported through **September 30, 2026**.
+
+> [!IMPORTANT]
+> If you have more than 500 application groups or manage multi-tenant environments, you can [request an exemption](#exemption-process).
+
+## Required action
+
+To avoid service disruptions, migrate to Azure Virtual Desktop before September 30, 2026. Here are some articles to help you migrate:
+
+- [Migrate manually from Azure Virtual Desktop (classic)](../manual-migration.md)
+- [Migrate automatically from Azure Virtual Desktop (classic)](../automatic-migration.md)
+
+## Exemption process
+
+To continue creating tenants in Azure Virtual Desktop (classic), you need to request an exemption. An exemption is available if you have more than 500 application groups or manage multitenant environments. To request an exemption:
+
+1. Browse to [New support request](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/newsupportrequest) in the Azure portal.
+
+1. On the **Problem description** tab, complete the following information:
+
+ | Parameter | Value/Description |
+ |--|--|
+ | Issue type | Select **Technical** from the drop-down list |
+ | Subscription | Select a subscription containing Azure Virtual Desktop (classic) resources from the drop-down list. |
+ | Service | Select **My services**. |
+ | Service type | Select **Azure Virtual Desktop** from the drop-down list |
+ | Resource | Select an Azure Virtual Desktop (classic) resource from the drop-down list. |
+ | Summary | Enter a description of your issue. |
+ | Problem type | Select **Issues configuring Azure Virtual Desktop (classic)** from the drop-down list. |
+ | Problem subtype | Select **Tenant creation exemption request** from the drop-down list. |
+
+1. Complete the remaining tabs and select **Create**.
+
+## Help and support
+
+If you have a support plan and you need technical help, see [Azure Virtual Desktop (classic) troubleshooting overview, feedback, and support](troubleshoot-set-up-overview-2019.md#create-a-support-request) for information on how to create a support request. You can also ask community experts questions at [Azure Virtual Desktop - Microsoft Q&A](/answers/tags/221/azure-virtual-desktop).
virtual-desktop Environment Setup 2019 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/virtual-desktop-fall-2019/environment-setup-2019.md
Title: Azure Virtual Desktop (classic) environment - Azure
-description: The basic elements of a Azure Virtual Desktop (classic) environment.
+ Title: Azure Virtual Desktop (classic) terminology - Azure
+description: The terminology used for basic elements of an Azure Virtual Desktop (classic) environment.
Last updated 03/30/2020
-# Azure Virtual Desktop (classic) environment
+# Azure Virtual Desktop (classic) terminology
>[!IMPORTANT]
>This content applies to Azure Virtual Desktop (classic), which doesn't support Azure Resource Manager Azure Virtual Desktop objects. If you're trying to manage Azure Resource Manager Azure Virtual Desktop objects, see [this article](../environment-setup.md).
virtual-desktop Tenant Setup Azure Active Directory https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/virtual-desktop-fall-2019/tenant-setup-azure-active-directory.md
# Tutorial: Create a tenant in Azure Virtual Desktop (classic)
->[!IMPORTANT]
->This content applies to Azure Virtual Desktop (classic), which doesn't support Azure Resource Manager Azure Virtual Desktop objects.
-
->[!IMPORTANT]
->You can find more information about how to migrate from Azure Virtual Desktop (classic) to Azure Virtual Desktop at [Migrate automatically from Azure Virtual Desktop (classic)](../automatic-migration.md).
+> [!IMPORTANT]
+> - This content applies to Azure Virtual Desktop (classic), which doesn't support Azure Resource Manager Azure Virtual Desktop objects.
>
->Try Azure Virtual Desktop by following our [Tutorial: Create and connect to a Windows 11 desktop with Azure Virtual Desktop](../tutorial-create-connect-personal-desktop.md).
+> - Beginning **September 30, 2023**, you will no longer be able to create new Azure Virtual Desktop (classic) tenants. Azure Virtual Desktop (classic) will retire on **September 30, 2026**. You should transition to [Azure Virtual Desktop](../index.yml) before that date. For more information, see [Azure Virtual Desktop (classic) retirement](classic-retirement.md).
Creating a tenant in Azure Virtual Desktop is the first step toward building your desktop virtualization solution. A tenant is a group of one or more host pools. Each host pool consists of multiple session hosts, running as virtual machines in Azure and registered to the Azure Virtual Desktop service. Each host pool also consists of one or more application groups that are used to publish desktop and application resources to users. With a tenant, you can build host pools, create application groups, assign users, and make connections through the service.
virtual-desktop Troubleshoot Set Up Overview 2019 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/virtual-desktop-fall-2019/troubleshoot-set-up-overview-2019.md
This article provides an overview of the issues you may encounter when setting u
Visit the [Azure Virtual Desktop Tech Community](https://techcommunity.microsoft.com/t5/Windows-Virtual-Desktop/bd-p/WindowsVirtualDesktop) to discuss the Azure Virtual Desktop service with the product team and active community members.
-## Escalation tracks
+## Create a support request
-Use the following table to identify and resolve issues you may encounter when setting up a tenant environment using Remote Desktop client. Once your tenant's set up, you can use our new [Diagnostics service](diagnostics-role-service-2019.md) to identify issues for common scenarios.
+To create a support request for Azure Virtual Desktop (classic):
->[!NOTE]
+1. Browse to [New support request](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/newsupportrequest) in the Azure portal.
+
+1. On the **Problem description** tab, complete the following information. Some parameters are only shown based on other selections.
+
+ | Parameter | Value/Description |
+ |--|--|
+ | Issue type | Select **Technical** from the drop-down list |
+ | Subscription | Select a subscription containing Azure Virtual Desktop (classic) resources from the drop-down list. |
+ | Service | Select **My services**. |
+ | Service type | Select **Azure Virtual Desktop** from the drop-down list |
+ | Resource | Select the Azure Virtual Desktop (classic) resource you're having an issue with from the drop-down list. |
+ | Summary | Enter a description of your issue. |
+ | Problem type | Select **Issues configuring Azure Virtual Desktop (classic)** from the drop-down list. |
+ | Problem subtype | Select the item which most describes your issue from the drop-down list. |
+
+1. Complete the remaining tabs and select **Create**.
+
+## Common issues and suggested solutions
+
+Use the following table to identify and resolve issues you may encounter when setting up a tenant environment using the Remote Desktop client. You can also use our [Diagnostics service](diagnostics-role-service-2019.md) to identify issues for common scenarios.
+
+> [!NOTE]
> We have a Tech Community forum which you can visit to discuss your issues with the product team and active community members. Visit the [Azure Virtual Desktop Tech Community](https://techcommunity.microsoft.com/t5/Windows-Virtual-Desktop/bd-p/WindowsVirtualDesktop) to start a discussion.

| **Issue** | **Suggested Solution** |
virtual-machine-scale-sets Spot Priority Mix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/spot-priority-mix.md
az vmss create -n myScaleSet \
--regular-priority-percentage 50 \ --orchestration-mode flexible \ --instance-count 4 \
- --image Centos \
+ --image CentOS85Gen2 \
--priority Spot \ --eviction-policy Deallocate \ --single-placement-group False \
virtual-machine-scale-sets Tutorial Install Apps Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/tutorial-install-apps-cli.md
Now create a Virtual Machine Scale Set with [az vmss create](/cli/azure/vmss#az-
az vmss create \ --resource-group myResourceGroup \ --name myScaleSet \
- --image UbuntuLTS \
+ --image Ubuntu2204 \
--orchestration-mode Flexible \ --admin-username azureuser \ --generate-ssh-keys
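
This command (like several that follow) replaces the deprecated `UbuntuLTS` image alias with `Ubuntu2204`. To check which image aliases are currently valid, a quick sketch, assuming the Azure CLI is installed:

```azurecli
# Lists the offline-cached set of popular images and their aliases;
# add --all to query the full marketplace (slower).
az vm image list --output table
```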
virtual-machine-scale-sets Tutorial Modify Scale Sets Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/tutorial-modify-scale-sets-cli.md
az vmss create \
--resource-group myResourceGroup \ --name myScaleSet \ --orchestration-mode flexible \
- --image RHEL \
+ --image RHELRaw8LVMGen2 \
--admin-username azureuser \ --generate-ssh-keys \ --upgrade-policy Rolling \
Running [az vm show](/cli/azure/vm#az-vm-show) again, we now will see that the V
There are times where you might want to add a new VM to your scale set but want different configuration options than those listed in the scale set model. VMs can be added to a scale set during creation by using the [az vm create](/cli/azure/vm#az-vm-create) command and specifying the scale set name you want the instance added to.

```azurecli-interactive
-az vm create --name myNewInstance --resource-group myResourceGroup --vmss myScaleSet --image RHEL
+az vm create --name myNewInstance --resource-group myResourceGroup --vmss myScaleSet --image RHELRaw8LVMGen2
``` ```output
virtual-machine-scale-sets Tutorial Modify Scale Sets Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/tutorial-modify-scale-sets-powershell.md
Update-AzVM -ResourceGroupName "myResourceGroup" -VM $VirtualMachine
There are times where you might want to add a new VM to your scale set but want different configuration options than those listed in the scale set model. VMs can be added to a scale set during creation by using the [New-AzVM](/powershell/module/az.compute/new-azvm) command and specifying the scale set name you want the instance added to.

```azurepowershell-interactive
-New-AzVM -Name myNewInstance -ResourceGroupName myResourceGroup -image UbuntuLTS -VmssId /subscriptions/resourceGroups/myResourceGroup/providers/Microsoft.Compute/virtualMachineScaleSets/myScaleSet
+New-AzVM -Name myNewInstance -ResourceGroupName myResourceGroup -image Ubuntu2204 -VmssId /subscriptions/resourceGroups/myResourceGroup/providers/Microsoft.Compute/virtualMachineScaleSets/myScaleSet
``` ```output
virtual-machine-scale-sets Tutorial Use Disks Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/tutorial-use-disks-cli.md
Create a Virtual Machine Scale Set with the [az vmss create](/cli/azure/vmss#az-
az vmss create \ --resource-group myResourceGroup \ --name myScaleSet \
- --image UbuntuLTS \
+ --image Ubuntu2204 \
--orchestration-mode Flexible \ --admin-username azureuser \ --generate-ssh-keys \
virtual-machine-scale-sets Use Spot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/use-spot.md
The process to create a scale set with Azure Spot Virtual Machines is the same a
az vmss create \ --resource-group myResourceGroup \ --name myScaleSet \
- --image UbuntuLTS \
+ --image Ubuntu2204 \
--upgrade-policy-mode automatic \ --single-placement-group false \ --admin-username azureuser \
virtual-machine-scale-sets Virtual Machine Scale Sets Automatic Instance Repairs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/virtual-machine-scale-sets-automatic-instance-repairs.md
az group create --name <myResourceGroup> --location <VMSSLocation>
az vmss create \ --resource-group <myResourceGroup> \ --name <myVMScaleSet> \
- --image RHEL \
+ --image RHELRaw8LVMGen2 \
--admin-username <azureuser> \ --generate-ssh-keys \ --load-balancer <existingLoadBalancer> \
virtual-machine-scale-sets Virtual Machine Scale Sets Deploy App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/virtual-machine-scale-sets-deploy-app.md
To create a scale set and use a cloud-init file, add the `--custom-data` paramet
az vmss create \ --resource-group myResourceGroup \ --name myScaleSet \
- --image UbuntuLTS \
+ --image Ubuntu2204 \
--upgrade-policy-mode automatic \ --custom-data cloud-init.txt \ --admin-username azureuser \
virtual-machine-scale-sets Virtual Machine Scale Sets Manage Fault Domains https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/virtual-machine-scale-sets-manage-fault-domains.md
az vmss create \
--resource-group myResourceGroup \ --name myScaleSet \ --orchestration-mode Flexible \
- --image UbuntuLTS \
+ --image Ubuntu2204 \
--upgrade-policy-mode automatic \ --admin-username azureuser \ --platform-fault-domain-count 3\
virtual-machine-scale-sets Virtual Machine Scale Sets Placement Groups https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/virtual-machine-scale-sets-placement-groups.md
You can create a large Virtual Machine Scale Set using the [Azure CLI](https://g
```azurecli az group create -l southcentralus -n biginfra
-az vmss create -g biginfra -n bigvmss --image ubuntults --instance-count 1000
+az vmss create -g biginfra -n bigvmss --image Ubuntu2204 --instance-count 1000
``` The _vmss create_ command defaults certain configuration values if you do not specify them. To see the available options that you can override, try:
virtual-machine-scale-sets Virtual Machine Scale Sets Scale In Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/virtual-machine-scale-sets-scale-in-policy.md
az vmss create \
--resource-group <myResourceGroup> \ --name <myVMScaleSet> \ --orchestration-mode flexible \
- --image UbuntuLTS \
+ --image Ubuntu2204 \
--admin-username <azureuser> \ --generate-ssh-keys \ --scale-in-policy OldestVM
virtual-machine-scale-sets Virtual Machine Scale Sets Terminate Notification https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/virtual-machine-scale-sets-terminate-notification.md
az group create --name <myResourceGroup> --location <VMSSLocation>
az vmss create \ --resource-group <myResourceGroup> \ --name <myVMScaleSet> \
- --image UbuntuLTS \
+ --image Ubuntu2204 \
--admin-username <azureuser> \ --generate-ssh-keys \ --terminate-notification-time 10
virtual-machine-scale-sets Virtual Machine Scale Sets Upgrade Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/virtual-machine-scale-sets-upgrade-policy.md
az network lb rule create --resource-group MyResourceGroup --lb-name myLoadBalan
az vmss create \ --resource-group myResourceGroup \ --name myScaleSet \
- --image UbuntuLTS \
+ --image Ubuntu2204 \
--lb myLoadBalancer \ --health-probe myProbe \ --upgrade-policy-mode Rolling \
virtual-machines Capacity Reservation Associate Virtual Machine Scale Set https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/capacity-reservation-associate-virtual-machine-scale-set.md
az vmss create
--name myVMSS --location eastus --vm-sku Standard_Ds1_v2 image UbuntuLTS
+--image Ubuntu2204
--capacity-reservation-group /subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Compute/capacityReservationGroups/{capacityReservationGroupName} ```
virtual-machines Capacity Reservation Associate Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/capacity-reservation-associate-vm.md
az vm create
--name myVM --location eastus --size Standard_D2s_v3 image UbuntuLTS
+--image Ubuntu2204
--capacity-reservation-group /subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Compute/capacityReservationGroups/{capacityReservationGroupName} ```
virtual-machines Disks Deploy Zrs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/disks-deploy-zrs.md
dataDiskSku=Premium_ZRS
az vmss create -g $rgName \ -n $vmssName \ --encryption-at-host \image UbuntuLTS \
+--image Ubuntu2204 \
--upgrade-policy automatic \ --generate-ssh-keys \ --data-disk-sizes-gb 128 \
virtual-machines Edv5 Edsv5 Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/edv5-edsv5-series.md
Edv5-series virtual machines support Standard SSD and Standard HDD disk types. T
[Ephemeral OS Disks](ephemeral-os-disks.md): Supported <br> [Nested Virtualization](/virtualization/hyper-v-on-windows/user-guide/nested-virtualization): Supported <br><br>
-| Size | vCPU | Memory: GiB | Temp storage (SSD) GiB | Max data disks | Max temp storage throughput: IOPS/MBps<sup>*</sup> | Max NICs|Max network bandwidth (Mbps) |
+| Size | vCPU | Memory: GiB | Temp storage (SSD) GiB | Max data disks | Max temp storage throughput: IOPS/MBps<sup>*</sup> | Max NICs|Max network egress bandwidth (Mbps) |
|---|---|---|---|---|---|---|---|
| Standard_E2d_v5 | 2 | 16 | 75 | 4 | 9000/125 | 2 | 12500 |
| Standard_E4d_v5 | 4 | 32 | 150 | 8 | 19000/250 | 2 | 12500 |
Edsv5-series virtual machines support Standard SSD and Standard HDD disk types.
[Nested Virtualization](/virtualization/hyper-v-on-windows/user-guide/nested-virtualization): Supported <br> <br>
-| Size | vCPU | Memory: GiB | Temp storage (SSD) GiB | Max data disks | Max temp storage throughput: IOPS/MBps<sup>*</sup> | Max uncached disk throughput: IOPS/MBps | Max burst uncached disk throughput: IOPS/MBps | Max NICs | Max network bandwidth (Mbps) |
+| Size | vCPU | Memory: GiB | Temp storage (SSD) GiB | Max data disks | Max temp storage throughput: IOPS/MBps<sup>*</sup> | Max uncached disk throughput: IOPS/MBps | Max burst uncached disk throughput: IOPS/MBps | Max NICs | Max network egress bandwidth (Mbps) |
|---|---|---|---|---|---|---|---|---|---|
| Standard_E2ds_v5 | 2 | 16 | 75 | 4 | 9000/125 | 3750/85 | 10000/1200 | 2 | 12500 |
| Standard_E4ds_v5 | 4 | 32 | 150 | 8 | 19000/250 | 6400/145 | 20000/1200 | 2 | 12500 |
virtual-machines Create Cli Availability Zone https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/create-cli-availability-zone.md
Create a virtual machine with the [az vm create](/cli/azure/vm) command.
When creating a virtual machine, several options are available such as operating system image, disk sizing, and administrative credentials. In this example, a virtual machine is created with a name of *myVM* running Ubuntu Server. The VM is created in availability zone *1*. By default, the VM is created in the *Standard_DS1_v2* size. ```azurecli-interactive
-az vm create --resource-group myResourceGroupVM --name myVM --location eastus2 --image UbuntuLTS --generate-ssh-keys --zone 1
+az vm create --resource-group myResourceGroupVM --name myVM --location eastus2 --image Ubuntu2204 --generate-ssh-keys --zone 1
``` It may take a few minutes to create the VM. Once the VM has been created, the Azure CLI outputs information about the VM. Take note of the `zones` value, which indicates the availability zone in which the VM is running.
virtual-machines Create Cli Complete https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/create-cli-complete.md
az vm create \
--location eastus \ --availability-set myAvailabilitySet \ --nics myNic \
- --image UbuntuLTS \
+ --image Ubuntu2204 \
--admin-username azureuser \ --generate-ssh-keys ```
virtual-machines Mac Create Ssh Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/mac-create-ssh-keys.md
ssh-keygen -m PEM -t rsa -b 4096
If you use the [Azure CLI](/cli/azure) to create your VM with the [az vm create](/cli/azure/vm#az-vm-create) command, you can optionally generate SSH public and private key files using the `--generate-ssh-keys` option. The key files are stored in the ~/.ssh directory unless specified otherwise with the `--ssh-dest-key-path` option. If an ssh key pair already exists and the `--generate-ssh-keys` option is used, a new key pair won't be generated but instead the existing key pair will be used. In the following command, replace *VMname*, *RGname*, and *Ubuntu2204* with your own values:
-az vm create --name VMname --resource-group RGname --image UbuntuLTS --generate-ssh-keys
+az vm create --name VMname --resource-group RGname --image Ubuntu2204 --generate-ssh-keys
``` ## Provide an SSH public key when deploying a VM
The public key that you place on your Linux VM in Azure is by default stored in
az vm create \ --resource-group myResourceGroup \ --name myVM \
- --image UbuntuLTS \
+ --image Ubuntu2204 \
--admin-username azureuser \ --ssh-key-values mysshkey.pub ```
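Once the VM is running, you can connect with the private key that matches the public key you supplied. A sketch, assuming the example names above and that the private key sits next to *mysshkey.pub*:

```azurecli-interactive
# Look up the VM's public IP, then connect with the matching private key (names assumed).
IP=$(az vm show --show-details --resource-group myResourceGroup --name myVM --query publicIps --output tsv)
ssh -i ~/.ssh/mysshkey azureuser@$IP
```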
virtual-machines Multiple Nics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/multiple-nics.md
Create a VM with [az vm create](/cli/azure/vm). The following example creates a
az vm create \ --resource-group myResourceGroup \ --name myVM \
- --image UbuntuLTS \
+ --image Ubuntu2204 \
--size Standard_DS3_v2 \ --admin-username azureuser \ --generate-ssh-keys \
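# After creation, you can list the NICs attached to the VM as a quick check
# (a sketch; resource names assumed from this example):
az vm nic list --resource-group myResourceGroup --vm-name myVM --output table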
virtual-machines Proximity Placement Groups https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/proximity-placement-groups.md
Create a VM within the proximity placement group using [new az vm](/cli/azure/vm
az vm create \ -n myVM \ -g myPPGGroup \
- --image UbuntuLTS \
+ --image Ubuntu2204 \
--ppg myPPG \ --generate-ssh-keys \ --size Standard_E64s_v4 \
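# To verify placement afterwards, you can inspect the proximity placement group
# (a sketch; names assumed from this example):
az ppg show --name myPPG --resource-group myPPGGroup --query "virtualMachines[].id"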
virtual-machines Quick Create Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/quick-create-powershell.md
New-AzVm `
-ResourceGroupName 'myResourceGroup' ` -Name 'myVM' ` -Location 'East US' `
- -Image Debian `
+ -image Debian11 `
-size Standard_B2s ` -PublicIpAddressName myPubIP ` -OpenPorts 80 `
virtual-machines Spot Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/spot-cli.md
az group create -n mySpotGroup -l eastus
az vm create \ --resource-group mySpotGroup \ --name myVM \
- --image UbuntuLTS \
+ --image Ubuntu2204 \
--admin-username azureuser \ --generate-ssh-keys \ --priority Spot \
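# To test how your workload handles a Spot eviction, you can trigger a simulated
# eviction on the VM (a sketch; names assumed from this example):
az vm simulate-eviction --resource-group mySpotGroup --name myVM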
virtual-machines Ssh From Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/ssh-from-windows.md
Using the Azure CLI, you specify the path and filename for the public key using
az vm create \ --resource-group myResourceGroup \ --name myVM \
- --image UbuntuLTS\
+ --image Ubuntu2204 \
--admin-username azureuser \ --ssh-key-value ~/.ssh/id_rsa.pub ```
virtual-machines Static Dns Name Resolution For Linux On Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/static-dns-name-resolution-for-linux-on-azure.md
az vm create \
--resource-group myResourceGroup \ --name myVM \ --nics myNic \
- --image UbuntuLTS \
+ --image Ubuntu2204 \
--admin-username azureuser \ --ssh-key-value ~/.ssh/id_rsa.pub ```
az vm create \
--resource-group myResourceGroup \ --name myVM \ --nics myNic \
- --image UbuntuLTS \
+ --image Ubuntu2204 \
--admin-username azureuser \ --ssh-key-value ~/.ssh/id_rsa.pub ```
virtual-machines Tutorial Automate Vm Deployment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/tutorial-automate-vm-deployment.md
Now create a VM with [az vm create](/cli/azure/vm#az-vm-create). Use the `--cust
az vm create \ --resource-group myResourceGroupAutomate \ --name myAutomatedVM \
- --image UbuntuLTS \
+ --image Ubuntu2204 \
--admin-username azureuser \ --generate-ssh-keys \ --custom-data cloud-init.txt
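# One way to confirm cloud-init finished applying cloud-init.txt is to run a
# command on the VM (a sketch; names assumed from this example):
az vm run-command invoke \
    --resource-group myResourceGroupAutomate \
    --name myAutomatedVM \
    --command-id RunShellScript \
    --scripts "cloud-init status --wait"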
Now create a VM with [az vm create](/cli/azure/vm#az-vm-create). The certificate
az vm create \ --resource-group myResourceGroupAutomate \ --name myVMWithCerts \
- --image UbuntuLTS \
+ --image Ubuntu2204 \
--admin-username azureuser \ --generate-ssh-keys \ --custom-data cloud-init-secured.txt \
virtual-machines Tutorial Config Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/tutorial-config-management.md
Now create a VM with [az vm create](/cli/azure/vm#az-vm-create). The following e
az vm create \ --resource-group myResourceGroupMonitor \ --name myVM \
- --image UbuntuLTS \
+ --image Ubuntu2204 \
--admin-username azureuser \ --generate-ssh-keys ```
virtual-machines Tutorial Elasticsearch https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/tutorial-elasticsearch.md
The following example creates a VM named *myVM* and creates SSH keys if they do
az vm create \ --resource-group myResourceGroup \ --name myVM \
- --image UbuntuLTS \
+ --image Ubuntu2204 \
--admin-username azureuser \ --generate-ssh-keys ```
virtual-machines Tutorial Lamp Stack https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/tutorial-lamp-stack.md
The following example creates a VM named *myVM* and creates SSH keys if they don
az vm create \ --resource-group myResourceGroup \ --name myVM \
- --image UbuntuLTS \
+ --image Ubuntu2204 \
--admin-username azureuser \ --generate-ssh-keys ```
virtual-machines Tutorial Manage Disks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/tutorial-manage-disks.md
Create a VM using the [az vm create](/cli/azure/vm#az-vm-create) command. The fo
az vm create \ --resource-group myResourceGroupDisk \ --name myVM \
- --image UbuntuLTS \
+ --image Ubuntu2204 \
--size Standard_DS2_v2 \ --admin-username azureuser \ --generate-ssh-keys \
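# A data disk can also be attached after creation; a sketch with an assumed
# disk name and size:
az vm disk attach \
    --resource-group myResourceGroupDisk \
    --vm-name myVM \
    --name myDataDisk \
    --new \
    --size-gb 64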
virtual-machines Tutorial Manage Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/tutorial-manage-vm.md
When you create a virtual machine, several options are available such as operati
az vm create \ --resource-group myResourceGroupVM \ --name myVM \
- --image SLES \
+ --image SuseSles15SP3 \
--public-ip-sku Standard \ --admin-username azureuser \ --generate-ssh-keys
In the previous VM creation example, a size was not provided, which results in a
az vm create \ --resource-group myResourceGroupVM \ --name myVM3 \
- --image SLES \
+ --image SuseSles15SP3 \
--size Standard_D2ds_v4 \ --generate-ssh-keys ```
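If you later need a different size, an existing VM can usually be resized in place (the VM restarts if the new size requires different hardware). A sketch, assuming the names above and an assumed target size:

```azurecli-interactive
# List sizes available to this VM, then resize it (Standard_D4ds_v4 is an assumed target).
az vm list-vm-resize-options --resource-group myResourceGroupVM --name myVM3 --output table
az vm resize --resource-group myResourceGroupVM --name myVM3 --size Standard_D4ds_v4
```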
virtual-machines Tutorial Secure Web Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/tutorial-secure-web-server.md
Now create a VM with [az vm create](/cli/azure/vm). The certificate data is inje
az vm create \ --resource-group myResourceGroupSecureWeb \ --name myVM \
- --image UbuntuLTS \
+ --image Ubuntu2204 \
--admin-username azureuser \ --generate-ssh-keys \ --custom-data cloud-init-web-server.txt \
virtual-machines Tutorial Virtual Network https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/tutorial-virtual-network.md
az vm create \
--subnet myFrontendSubnet \ --nsg myFrontendNSG \ --public-ip-address myPublicIPAddress \
- --image UbuntuLTS \
+ --image Ubuntu2204 \
--generate-ssh-keys ```
az vm create \
--subnet myBackendSubnet \ --public-ip-address "" \ --nsg "" \
- --image UbuntuLTS \
+ --image Ubuntu2204 \
--generate-ssh-keys ```
virtual-machines Unmanaged Disks Deprecation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/unmanaged-disks-deprecation.md
With managed disks, you don't have to worry about managing storage accounts for
## How does this affect me? -- As of September 30, 2023, new customers won't be able to create unmanaged disks.
+- As of January 30, 2024, new customers won't be able to create unmanaged disks.
- On September 30, 2025, customers will no longer be able to start IaaS VMs by using unmanaged disks. Any VMs that are still running or allocated will be stopped and deallocated. ## What is being retired?
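Existing VMs can be migrated ahead of the retirement by converting their unmanaged disks to managed disks. A minimal sketch, using placeholder resource names:

```azurecli-interactive
# Conversion requires the VM to be deallocated first; names here are placeholders.
az vm deallocate --resource-group myResourceGroup --name myVM
az vm convert --resource-group myResourceGroup --name myVM
az vm start --resource-group myResourceGroup --name myVM
```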
virtual-machines Vm Applications How To https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/vm-applications-how-to.md
Previously updated : 02/03/2022 Last updated : 09/08/2023
if ($remainder -ne 0){
} ```
-You need to make sure the files are publicly available, or you'll need the SAS URI for the files in your storage account. You can use [Storage Explorer](../vs-azure-tools-storage-explorer-blobs.md) to quickly create a SAS URI if you don't already have one.
+Ensure the storage account allows public access, or use a SAS URI with read privileges; other restriction levels cause deployments to fail. You can use [Storage Explorer](../vs-azure-tools-storage-explorer-blobs.md) to quickly create a SAS URI if you don't already have one.
If you're using PowerShell, you need to be using version 3.11.0 of the Az.Storage module.
Choose an option below for creating your VM application definition and version:
1. Go to the [Azure portal](https://portal.azure.com), then search for and select **Azure Compute Gallery**. 1. Select the gallery you want to use from the list.
-1. On the page for your gallery, select **Add** from the top of the page and then select **VM application definition** from the drop-down. The **Create a VM application definition** page will open.
+1. On the page for your gallery, select **Add** from the top of the page and then select **VM application definition** from the drop-down. The **Create a VM application definition** page opens.
1. In the **Basics** tab, enter a name for your application and choose whether the application is for VMs running Linux or Windows. 1. Select the **Publishing options** tab if you want to specify any of the following optional settings for your VM application definition: - A description of the VM application definition.
Choose an option below for creating your VM application definition and version:
1. When you're done, select **Review + create**. 1. When validation completes, select **Create** to have the definition deployed. 1. Once the deployment is complete, select **Go to resource**.
-1. On the page for the application, select **Create a VM application version**. The **Create a VM Application Version** page will open.
+1. On the page for the application, select **Create a VM application version**. The **Create a VM Application Version** page opens.
1. Enter a version number like 1.0.0. 1. Select the region where you've uploaded your application package.
-1. Under **Source application package**, select **Browse**. Select the storage account, then the container where your package is located. Select the package from the list and then click **Select** when you're done. Alternatively, you can paste the SAS URI in this field if preferred.
+1. Under **Source application package**, select **Browse**. Select the storage account, then the container where your package is located. Select the package from the list and then select **Select** when you're done. Alternatively, you can paste the SAS URI in this field if preferred.
1. Type in the **Install script**. You can also provide the **Uninstall script** and **Update script**. See the [Overview](vm-applications.md#command-interpreter) for information on how to create the scripts. 1. If you have a default configuration file uploaded to a storage account, you can select it in **Default configuration**. 1. Select **Exclude from latest** if you don't want this version to appear as the latest version when you create a VM.
Select the VM application from the list, and then select **Save** at the bottom
:::image type="content" source="media/vmapps/select-app.png" alt-text="Screenshot showing selecting a VM application to install on the VM.":::
-If you've more than one VM application to install, you can set the install order for each VM application back on the **Advanced tab**.
+If you have more than one VM application to install, you can set the install order for each VM application back on the **Advanced tab**.
You can also deploy the VM application to currently running VMs. Select the **Extensions + applications** option under **Settings** in the left menu when viewing the VM details in the portal.
To show the VM application status for VMSS, go to the VMSS page, Instances, sele
VM applications require [Azure CLI](/cli/azure/install-azure-cli) version 2.30.0 or later.
-Create the VM application definition using [az sig gallery-application create](/cli/azure/sig/gallery-application#az_sig_gallery_application_create). In this example we're creating a VM application definition named *myApp* for Linux-based VMs.
+Create the VM application definition using [az sig gallery-application create](/cli/azure/sig/gallery-application#az_sig_gallery_application_create). In this example, we're creating a VM application definition named *myApp* for Linux-based VMs.
```azurecli-interactive
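# A minimal sketch of the definition step described above (gallery, application,
# and resource group names are assumed):
az sig gallery-application create \
    --gallery-name myGallery \
    --name myApp \
    --resource-group myResourceGroup \
    --os-type Linux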
PUT
| defaultConfigurationLink | Optional. The URL containing the default configuration, which may be overridden at deployment time. | Valid and existing storage URL |
| Install | The command to install the application | Valid command for the given OS |
| Remove | The command to remove the application | Valid command for the given OS |
-| Update | Optional. The command to update the application. If not specified and an update is required, the old version will be removed and the new one installed. | Valid command for the given OS |
+| Update | Optional. The command to update the application. If not specified and an update is required, the old version is removed and the new one installed. | Valid command for the given OS |
| targetRegions/name | The name of a region to which to replicate | Valid Azure region |
| targetRegions/regionalReplicaCount | Optional. The number of replicas in the region to create. Defaults to 1. | Integer between 1 and 3 inclusive |
| endOfLifeDate | A future end-of-life date for the application version. Note this is for customer reference only, and isn't enforced. | Valid future date |
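The same parameters map onto the CLI when creating a version. A sketch under assumed names, with placeholder install and remove commands:

```azurecli-interactive
# Create a VM application version; the package link can be a public or SAS URI.
az sig gallery-application version create \
    --gallery-name myGallery \
    --application-name myApp \
    --resource-group myResourceGroup \
    --version-name 1.0.0 \
    --package-file-link "<URI-to-your-package>" \
    --install-command "mv myApp /usr/local/bin/myApp" \
    --remove-command "rm /usr/local/bin/myApp"
```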
The order field may be used to specify dependencies between applications. The ru
| Case | Install Meaning | Failure Meaning |
|--|--|--|
| No order specified | Unordered applications are installed after ordered applications. There's no guarantee of installation order amongst the unordered applications. | Installation failures of other applications, whether ordered or unordered, don't affect the installation of unordered applications. |
-| Duplicate order values | Application will be installed in any order compared to other applications with the same order. All applications of the same order will be installed after those with lower orders and before those with higher orders. | If a previous application with a lower order failed to install, no applications with this order will install. If any application with this order fails to install, no applications with a higher order will install. |
-| Increasing orders | Application will be installed after those with lower orders and before those with higher orders. | If a previous application with a lower order failed to install, this application won't install. If this application fails to install, no application with a higher order will install. |
+| Duplicate order values | Application is installed in any order compared to other applications with the same order. All applications of the same order are installed after those with lower orders and before those with higher orders. | If a previous application with a lower order failed to install, no applications with this order install. If any application with this order fails to install, no applications with a higher order install. |
+| Increasing orders | Application is installed after those with lower orders and before those with higher orders. | If a previous application with a lower order failed to install, this application won't install. If this application fails to install, no application with a higher order installs. |
-The response will include the full VM model. The following are the
+The response includes the full VM model. The following are the
relevant parts. ```rest
GET
/subscriptions/\<**subscriptionId**\>/resourceGroups/\<**resourceGroupName**\>/providers/Microsoft.Compute/virtualMachines/\<**VMName**\>/instanceView?api-version=2019-03-01 ```
-The result will look like this:
+The result looks like this:
```rest {
The result will look like this:
] } ```
-The VM App status is in the status message of the result of the VMApp extension in the instance view.
+The VM App status appears in the status message of the VM App extension result in the instance view.
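The same status can be pulled with the CLI. A sketch, assuming the extension appears under the name *VMAppExtension* and placeholder resource names:

```azurecli-interactive
# Filter the instance view for the VM application extension's status messages
# (extension name and resource names are assumptions).
az vm get-instance-view \
    --resource-group myResourceGroup \
    --name myVM \
    --query "instanceView.extensions[?contains(name, 'VMAppExtension')].statuses"
```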
To get the status for a VMSS Application:
To get the status for a VMSS Application:
GET /subscriptions/\<**subscriptionId**\>/resourceGroups/\<**resourceGroupName**\>/providers/Microsoft.Compute/ virtualMachineScaleSets/\<**VMSSName**\>/virtualMachines/<**instanceId**>/instanceView?api-version=2019-03-01 ```
-The output will be similar to the VM example earlier.
+The output is similar to the VM example earlier.
virtual-machines Vm Applications https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/vm-applications.md
Previously updated : 07/17/2023 Last updated : 09/18/2023
Application packages provide benefits over other deployment and packaging method
- Support for virtual machines, and both flexible and uniform scale sets -- If you have Network Security Group (NSG) rules applied on your VM or scale set, downloading the packages from an internet repository might not be possible. And with storage accounts, downloading packages onto locked-down VMs would require setting up private links.
+- If you have Network Security Group (NSG) rules applied on your VM or scale set, downloading the packages from an internet repository might not be possible. And with storage accounts, downloading packages onto locked-down VMs would require setting up private links.
+
+- Support for Block Blobs: This feature handles large files efficiently by breaking them into smaller, manageable blocks, which is ideal for uploading large amounts of data, streaming, and background uploading.
## What are VM app packages?
The VM application packages use multiple resource types:
- **No more than 3 replicas per region**: When you're creating a VM Application version, the maximum number of replicas per region is three. -- **Public access on storage**: Only public level access to storage accounts work, as other restriction levels fail deployments.
+- **Storage with public access or a SAS URI with read privileges:** The storage account must allow public access, or you must use a SAS URI with read privileges; other restriction levels cause deployments to fail.
- **Retrying failed installations**: Currently, the only way to retry a failed installation is to remove the application from the profile, then add it back.
virtual-machines Cli Ps Findimage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows/cli-ps-findimage.md
$domainNameLabel = "d" + $rgname
$securePassword = <Password> | ConvertTo-SecureString -AsPlainText -Force $username = <Username> $credential = New-Object System.Management.Automation.PSCredential ($username, $securePassword)
-New-AzVM -ResourceGroupName $rgname -Location $location -Name $vmName -Image CentOS85Gen2 -Credential $credential -DomainNameLabel $domainNameLabel
+New-AzVM -ResourceGroupName $rgname -Location $location -Name $vmName -Image CentOS85Gen2 -Credential $credential -DomainNameLabel $domainNameLabel
``` The Linux image alias names and their details are:
virtual-network Create Vm Accelerated Networking Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/create-vm-accelerated-networking-cli.md
The following example creates a VM with a size that supports Accelerated Network
az vm create \ --resource-group <myResourceGroup> \ --name <myVm> \
- --image UbuntuLTS \
+ --image Ubuntu2204 \
--size Standard_DS4_v2 \ --admin-username <myAdminUser> \ --generate-ssh-keys \
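# To confirm Accelerated Networking is enabled, inspect the NIC afterwards.
# The default NIC name <myVm>VMNic is an assumption here:
az network nic show --resource-group <myResourceGroup> --name <myVm>VMNic --query enableAcceleratedNetworking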
virtual-network Create Vm Dual Stack Ipv6 Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/create-vm-dual-stack-ipv6-cli.md
Use [az vm create](/cli/azure/vm#az-vm-create) to create the virtual machine.
--resource-group myResourceGroup \ --name myVM \ --nics myNIC1 \
- --image UbuntuLTS \
+ --image Ubuntu2204 \
--admin-username azureuser \ --authentication-type ssh \ --generate-ssh-keys
virtual-network Virtual Network Multiple Ip Addresses Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/virtual-network-multiple-ip-addresses-cli.md
Use [az vm create](/cli/azure/vm#az-vm-create) to create the virtual machine.
--resource-group myResourceGroup \ --name myVM \ --nics myNIC1 \
- --image UbuntuLTS \
+ --image Ubuntu2204 \
--admin-username azureuser \ --authentication-type ssh \ --generate-ssh-keys
virtual-network Tutorial Connect Virtual Networks Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/tutorial-connect-virtual-networks-cli.md
Create a VM with [az vm create](/cli/azure/vm). The following example creates a
az vm create \ --resource-group myResourceGroup \ --name myVm1 \
- --image UbuntuLTS \
+ --image Ubuntu2204 \
--vnet-name myVirtualNetwork1 \ --subnet Subnet1 \ --generate-ssh-keys \
Create a VM in the *myVirtualNetwork2* virtual network.
az vm create \ --resource-group myResourceGroup \ --name myVm2 \
- --image UbuntuLTS \
+ --image Ubuntu2204 \
--vnet-name myVirtualNetwork2 \ --subnet Subnet1 \ --generate-ssh-keys
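# The two virtual networks are connected with a pair of peerings; a sketch of
# one direction, using the names from this example:
az network vnet peering create \
    --resource-group myResourceGroup \
    --name myVirtualNetwork1-to-myVirtualNetwork2 \
    --vnet-name myVirtualNetwork1 \
    --remote-vnet myVirtualNetwork2 \
    --allow-vnet-access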
virtual-network Tutorial Create Route Table Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/tutorial-create-route-table-cli.md
Create a VM to be used as the NVA in the *DMZ* subnet with [az vm create](/cli/a
az vm create \ --resource-group myResourceGroup \ --name myVmNva \
- --image UbuntuLTS \
+ --image Ubuntu2204 \
--public-ip-address "" \ --subnet DMZ \ --vnet-name myVirtualNetwork \
adminPassword="<replace-with-your-password>"
az vm create \ --resource-group myResourceGroup \ --name myVmPublic \
- --image UbuntuLTS \
+ --image Ubuntu2204 \
--vnet-name myVirtualNetwork \ --subnet Public \ --admin-username azureuser \
Create a VM in the *Private* subnet.
az vm create \ --resource-group myResourceGroup \ --name myVmPrivate \
- --image UbuntuLTS \
+ --image Ubuntu2204 \
--vnet-name myVirtualNetwork \ --subnet Private \ --admin-username azureuser \
virtual-network Tutorial Filter Network Traffic Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/tutorial-filter-network-traffic-cli.md
adminPassword="<replace-with-your-password>"
az vm create \ --resource-group myResourceGroup \ --name myVmWeb \
- --image UbuntuLTS \
+ --image Ubuntu2204 \
--vnet-name myVirtualNetwork \ --subnet mySubnet \ --nsg "" \
Take note of the **publicIpAddress**. This address is used to access the VM from
az vm create \ --resource-group myResourceGroup \ --name myVmMgmt \
- --image UbuntuLTS \
+ --image Ubuntu2204 \
--vnet-name myVirtualNetwork \ --subnet mySubnet \ --nsg "" \
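# Traffic is then filtered with NSG rules; a sketch that allows inbound SSH
# (the NSG name myNsg is an assumption):
az network nsg rule create \
    --resource-group myResourceGroup \
    --nsg-name myNsg \
    --name Allow-SSH \
    --access Allow \
    --protocol Tcp \
    --direction Inbound \
    --priority 100 \
    --destination-port-ranges 22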
virtual-network Tutorial Restrict Network Access To Resources Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/tutorial-restrict-network-access-to-resources-cli.md
Create a VM in the *Public* subnet with [az vm create](/cli/azure/vm). If SSH ke
az vm create \ --resource-group myResourceGroup \ --name myVmPublic \
- --image UbuntuLTS \
+ --image Ubuntu2204 \
--vnet-name myVirtualNetwork \ --subnet Public \ --generate-ssh-keys
Take note of the **publicIpAddress** in the returned output. This address is use
az vm create \ --resource-group myResourceGroup \ --name myVmPrivate \
- --image UbuntuLTS \
+ --image Ubuntu2204 \
--vnet-name myVirtualNetwork \ --subnet Private \ --generate-ssh-keys
virtual-network Virtual Network Manage Peering https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/virtual-network-manage-peering.md
Before creating a peering, familiarize yourself with the [requirements and const
| Settings | Description |
| -- | -- |
| **This virtual network** | |
- | Peering link name | The name of the peering on this virtual network. The name must be unique within the virtual network. |
- | Allow access to remote virtual network | Option is selected by **default**. </br></br> - Select **Allow access to remote virtual network (default)** if you want to enable communication between the two virtual networks through the default `VirtualNetwork` flow. Enabling communication between virtual networks allows resources that are connected to either virtual network to communicate with each other over the Azure private network. The **VirtualNetwork** service tag for network security groups encompasses the virtual network and peered virtual network when this setting is set to **Selected**. To learn more about service tags, see [Azure service tags](./service-tags-overview.md). |
- | Allow traffic to remote virtual network | Option is deselected by **default**. </br></br> - Select **Allow traffic to remote virtual network** if you want traffic to flow to the peered virtual network. You can deselect this setting if you have a peering between virtual networks but occasionally want to disable default traffic flow between the two. You may find enabling/disabling is more convenient than deleting and re-creating peerings. When this setting is deselected, traffic doesn't flow between the peered virtual networks. Traffic may still flow if explicitly allowed through a [network security group](./network-security-groups-overview.md) rule that includes the appropriate IP addresses or application security groups. </br></br> **NOTE:** *Deselecting the **Allow traffic to remote virtual network** setting only changes the definition of the **VirtualNetwork** service tag. It *doesn't* fully prevent traffic flow across the peer connection, as explained in this setting description.* |
- | Allow traffic forwarded from the remote virtual network (allow gateway transit) | Option is deselected by **default**. </br></br> - Select **Allow traffic forwarded from the remote virtual network (allow gateway transit)** if you want traffic **forwarded** by a network virtual appliance in the remote virtual network (that didn't originate from the remote virtual network) to flow to this virtual network through a peering. </br> For example, consider three virtual networks named ****Spoke1****, ****Spoke2****, and ****Hub****. A peering exists between each spoke virtual network and the ****Hub**** virtual network, but peerings don't exist between the spoke virtual networks. </br> A network virtual appliance is deployed in the **Hub** virtual network. User-defined routes are applied to each spoke virtual network that route traffic between the subnets through the network virtual appliance. If this setting isn't selected for the peering between each spoke virtual network and the **hub** virtual network, traffic doesn't flow between the spoke virtual networks because the **hub** isn't forwarding the traffic between the virtual networks. </br> While enabling this capability allows the forwarded traffic through the peering, it doesn't create any user-defined routes or network virtual appliances. User-defined routes and network virtual appliances are created separately. Learn about [user-defined routes](virtual-networks-udr-overview.md#user-defined). </br></br> **NOTE:** *You don't need to select this setting if traffic is forwarded between virtual networks through an Azure VPN Gateway.* |
- | Use remote virtual network gateway or route server | Option is deselected by **default**. </br></br> - Select **Use remote virtual network gateway or route Server** </br></br> If you want to allow traffic from this virtual network to flow through a virtual network gateway deployed in the virtual network you're peering with. </br> For example, the virtual network you're peering with has a VPN gateway that enables communication to an on-premises network. Selecting this setting allows traffic from this virtual network to flow through the VPN gateway in the peered virtual network. </br></br> If you want this virtual network to use the remote Route Server to exchange routes. For more information, see [Azure Route Server](../route-server/overview.md). </br></br> If you select this setting, the peered virtual network must have a virtual network gateway deployed in it, and must have the **Use this virtual network's gateway or Route Server** setting selected. If you leave this setting as **deselected (default)**, traffic from this virtual network can still flow to the peered virtual network, but can't flow through a virtual network gateway in the peered virtual network. Only one peering for this virtual network can have this setting enabled. </br></br> **NOTE:** *You can't use remote gateways if you already have a gateway configured in your virtual network. To learn more about using a gateway for transit, see [Configure a VPN gateway for transit in a virtual network peering](../vpn-gateway/vpn-gateway-peering-gateway-transit.md)*.|
+ | Peering link name | The name of the peering from the local virtual network. The name must be unique within the virtual network. |
+ | Allow 'vnet-1' to access 'vnet-2' | By **default**, this option is selected. </br></br> - To enable communication between the two virtual networks through the default `VirtualNetwork` flow, select **Allow 'vnet-1' to access 'vnet-2' (default)**. This allows resources connected to either virtual network to communicate with each other over the Azure private network. The **VirtualNetwork** service tag for network security groups includes the virtual network and peered virtual network when this setting is selected. To learn more about service tags, see [Azure service tags](./service-tags-overview.md). |
+ | Allow 'vnet-1' to receive forwarded traffic from 'vnet-2' | This option **isn't selected by default.** </br></br> - To allow forwarded traffic from the peered virtual network, select **Allow 'vnet-1' to receive forwarded traffic from 'vnet-2'**. Select this setting if you want to allow traffic that doesn't originate from **vnet-2** to reach **vnet-1**. For example, if **vnet-2** has an NVA that receives traffic from outside of **vnet-2** and forwards it to **vnet-1**, you can select this setting to allow that traffic to reach **vnet-1** from **vnet-2**. While enabling this capability allows the forwarded traffic through the peering, it doesn't create any user-defined routes or network virtual appliances. User-defined routes and network virtual appliances are created separately. Learn about [user-defined routes](virtual-networks-udr-overview.md#user-defined). </br></br> **NOTE:** *Not selecting the **Allow 'vnet-1' to receive forwarded traffic from 'vnet-2'** setting only changes the definition of the **VirtualNetwork** service tag. It *doesn't* fully prevent traffic flow across the peer connection, as explained in this setting description.* |
+ | Allow gateway in 'vnet-1' to forward traffic to 'vnet-2' | This option **isn't selected by default**. </br></br> - Select **Allow gateway in 'vnet-1' to forward traffic to 'vnet-2'** if you want **vnet-2** to receive traffic from **vnet-1**'s gateway/Route Server. **vnet-1** must contain a gateway in order for this option to be enabled. |
+ | Enable 'vnet-1' to use 'vnet-2' remote gateway | This option **isn't selected by default.** </br></br> - Select **Enable 'vnet-1' to use 'vnet-2' remote gateway** if you want **vnet-1** to use **vnet-2**'s gateway or Route Server. **vnet-1** can only use a remote gateway or Route Server from one peering connection. **vnet-2** must have a gateway or Route Server in order for you to select this option. For example, the virtual network you're peering with has a VPN gateway that enables communication to an on-premises network. Selecting this setting allows traffic from this virtual network to flow through the VPN gateway in the peered virtual network. </br></br> You can also select this option if you want this virtual network to use the remote Route Server to exchange routes. For more information, see [Azure Route Server](../route-server/overview.md). </br></br> **NOTE:** *You can't use remote gateways if you already have a gateway configured in your virtual network. To learn more about using a gateway for transit, see [Configure a VPN gateway for transit in a virtual network peering](../vpn-gateway/vpn-gateway-peering-gateway-transit.md)*.|
| **Remote virtual network** | |
- | Peering link name | The name of the peering on the remote virtual network. The name must be unique within the virtual network. |
+ | Peering link name | The name of the peering from the remote virtual network. The name must be unique within the virtual network. |
| Virtual network deployment model | Select which deployment model the virtual network you want to peer with was deployed through. |
- | I know my resource ID | If you have read access to the virtual network you want to peer with, leave this checkbox unchecked. If you don't have read access to the virtual network or subscription you want to peer with, select this checkbox. </br> Enter the full resource ID of the virtual network you want to peer with in the **Resource ID** box that appeared when you checked the checkbox. </br> The resource ID you enter must be for a virtual network that exists in the same, or [supported different](#requirements-and-constraints) Azure [region](https://azure.microsoft.com/regions) as this virtual network. </br></br> The full resource ID looks similar to `/subscriptions/<Id>/resourceGroups/<resource-group-name>/providers/Microsoft.Network/virtualNetworks/<virtual-network-name>`. </br></br> You can get the resource ID for a virtual network by viewing the properties for a virtual network. To learn how to view the properties for a virtual network, see [Manage virtual networks](manage-virtual-network.md#view-virtual-networks-and-settings). User permissions must be assigned if the subscription is associated to a different Azure Active Directory tenant than the subscription with the virtual network you're peering. Add a user from each tenant as a [guest user](../active-directory/external-identities/add-users-administrator.md#add-guest-users-to-the-directory) in the opposite tenant. |
- | Resource ID | This field appears when you check **I know my resource ID** checkbox. The resource ID you enter must be for a virtual network that exists in the same, or [supported different](#requirements-and-constraints) Azure [region](https://azure.microsoft.com/regions) as this virtual network. </br> The full resource ID looks similar to `/subscriptions/<Id>/resourceGroups/<resource-group-name>/providers/Microsoft.Network/virtualNetworks/<virtual-network-name>`. </br> You can get the resource ID for a virtual network by viewing the properties for a virtual network. To learn how to view the properties for a virtual network, see [Manage virtual networks](manage-virtual-network.md#view-virtual-networks-and-settings). User permissions must be assigned if the subscription is associated to a different Azure Active Directory tenant than the subscription with the virtual network you're peering. Add a user from each tenant as a [guest user](../active-directory/external-identities/add-users-administrator.md#add-guest-users-to-the-directory) in the opposite tenant.
+ | I know my resource ID | If you have read access to the virtual network you want to peer with, leave this checkbox unchecked. If you don't have read access to the virtual network or subscription you want to peer with, select this checkbox. |
+ | Resource ID | This field appears when you check **I know my resource ID** checkbox. The resource ID you enter must be for a virtual network that exists in the same, or [supported different](#requirements-and-constraints) Azure [region](https://azure.microsoft.com/regions) as this virtual network. </br></br> The full resource ID looks similar to `/subscriptions/<Id>/resourceGroups/<resource-group-name>/providers/Microsoft.Network/virtualNetworks/<virtual-network-name>`. </br></br> You can get the resource ID for a virtual network by viewing the properties for a virtual network. To learn how to view the properties for a virtual network, see [Manage virtual networks](manage-virtual-network.md#view-virtual-networks-and-settings). User permissions must be assigned if the subscription is associated to a different Azure Active Directory tenant than the subscription with the virtual network you're peering. Add a user from each tenant as a [guest user](../active-directory/external-identities/add-users-administrator.md#add-guest-users-to-the-directory) in the opposite tenant.
| Subscription | Select the [subscription](../azure-glossary-cloud-terminology.md#subscription) of the virtual network you want to peer with. One or more subscriptions are listed, depending on how many subscriptions your account has read access to. If you checked the **I know my resource ID** checkbox, this setting isn't available. | | Virtual network | Select the virtual network you want to peer with. You can select a virtual network created through either Azure deployment model. If you want to select a virtual network in a different region, you must select a virtual network in a [supported region](#cross-region). You must have read access to the virtual network for it to be visible in the list. If a virtual network is listed, but grayed out, it may be because the address space for the virtual network overlaps with the address space for this virtual network. If virtual network address spaces overlap, they can't be peered. If you checked the **I know my resource ID** checkbox, this setting isn't available. |
- | Allow access to current virtual network | Option is selected by **default**. </br></br> - Select **Allow access to current virtual network** if you want to enable communication between the two virtual networks through the default `VirtualNetwork` flow. Enabling communication between virtual networks allows resources that are connected to either virtual network to communicate with each other over the Azure private network. The **VirtualNetwork** service tag for network security groups encompasses the virtual network and peered virtual network when this setting is set to **Selected**. To learn more about service tags, see [Azure service tags](./service-tags-overview.md). |
- | Allow traffic to current virtual network | Option is selected by **default**. </br></br> - Select **Allow traffic to current virtual network** if you want traffic to flow to the peered virtual network by default. You can deselect this setting if you have a peering between two virtual networks but occasionally want to disable traffic flow between them. You may find enabling/disabling is more convenient than deleting and re-creating peerings. When this setting is selected, traffic doesn't flow between the peered virtual networks. Traffic may still flow if explicitly allowed through a [network security group](./network-security-groups-overview.md) rule that includes the appropriate IP addresses or application security groups. </br></br> **NOTE:** *Deselecting the **Allow traffic to current virtual network** setting only changes the definition of the **VirtualNetwork** service tag. It *doesn't* fully prevent traffic flow across the peer connection, as explained in this setting description.* |
- | Allow traffic forwarded from current virtual network (allow gateway transit) | Option is deselected by **default**. </br></br> - Select **Allow traffic forwarded from current virtual network (allow gateway transit)** if you want traffic *forwarded* by a network virtual appliance in the remote virtual network (that didn't originate from the remote virtual network) to flow to this virtual network through a peering. For example, consider three virtual networks named **Spoke1**, **Spoke2**, and **Hub**. A peering exists between each spoke virtual network and the **Hub** virtual network, but peerings doesn't exist between the spoke virtual networks. A network virtual appliance gets deployed in the **Hub** virtual network, and user-defined routes gets applied to each spoke virtual network that route traffic between the subnets through the network virtual appliance. If this setting isn't selected for the peering between each spoke virtual network and the **hub** virtual network, traffic doesn't flow between the spoke virtual networks because the **hub** isn't forwarding the traffic between the virtual networks. While enabling this capability allows the forwarded traffic through the peering, it doesn't create any user-defined routes or network virtual appliances. User-defined routes and network virtual appliances are created separately. Learn about [user-defined routes](virtual-networks-udr-overview.md#user-defined). </br></br> **NOTE:** *You don't need to select this setting if traffic is forwarded between virtual networks through an Azure VPN Gateway.* |
- | Use current virtual network gateway or route server | Option is deselected by **default**. </br></br> Select **Use this virtual network's gateway or Route Server**: </br> - If you have a virtual network gateway deployed in this virtual network and want to allow traffic from the peered virtual network to flow through the gateway. For example, this virtual network may be attached to an on-premises network through a virtual network gateway. The gateway can be an ExpressRoute or VPN gateway. Selecting this setting allows traffic from the peered virtual network to flow through the gateway deployed in this virtual network to the on-premises network. </br>- If you have a Route Server deployed in this virtual network and you want the peered virtual network to communicate with the Route Server to exchange routes. For more information, see [Azure Route Server](../route-server/overview.md). </br></br> If you select **Use current virtual network gateway or route server**, the peered virtual network can't have a gateway configured. The peered virtual network must have the **Use remote virtual network gateway or route server** selected when setting up the peering from the other virtual network to this virtual network. If you leave this setting as **Deselected (default)**, traffic from the peered virtual network still flows to this virtual network, but can't flow through a virtual network gateway deployed in this virtual network. If the peering is between a virtual network (Resource Manager) and a virtual network (classic), the gateway must be in the virtual network (Resource Manager).</br></br> In addition to forwarding traffic to an on-premises network, a VPN gateway can forward network traffic between virtual networks that are peered with the virtual network the gateway is in, without the virtual networks needing to be peered with each other. Using a VPN gateway to forward traffic is useful when you want to use a VPN gateway in a **hub** (see the **hub** and spoke example described for **Allow forwarded traffic**) virtual network to route traffic between spoke virtual networks that aren't peered with each other. To learn more about allowing use of a gateway for transit, see [Configure a VPN gateway for transit in a virtual network peering](../vpn-gateway/vpn-gateway-peering-gateway-transit.md). This scenario requires implementing user-defined routes that specify the virtual network gateway as the next hop type. Learn about [user-defined routes](virtual-networks-udr-overview.md#user-defined). You can only specify a VPN gateway as a next hop type in a user-defined route, you can't specify an ExpressRoute gateway as the next hop type in a user-defined route. </br></br> **NOTE:** *You can't use remote gateways if you already have a gateway configured in your virtual network. To learn more about using a gateway for transit, see [Configure a VPN gateway for transit in a virtual network peering](../vpn-gateway/vpn-gateway-peering-gateway-transit.md)*. |
+ | Allow 'vnet-2' to access 'vnet-1' | By **default**, this option is selected. </br></br> - Select **Allow 'vnet-2' to access 'vnet-1'** if you want to enable communication between the two virtual networks through the default `VirtualNetwork` flow. Enabling communication between virtual networks allows resources that are connected to either virtual network to communicate with each other over the Azure private network. The **VirtualNetwork** service tag for network security groups encompasses the virtual network and peered virtual network when this setting is set to **Selected**. To learn more about service tags, see [Azure service tags](./service-tags-overview.md). |
+ | Allow 'vnet-2' to receive forwarded traffic from 'vnet-1' | This option **isn't selected by default**. </br></br> - To allow forwarded traffic from the peered virtual network, select **Allow 'vnet-2' to receive forwarded traffic from 'vnet-1'**. Select this setting if you want to allow traffic that doesn't originate from **vnet-1** to reach **vnet-2**. For example, if **vnet-1** has an NVA that receives traffic from outside of **vnet-1** and forwards it to **vnet-2**, you can select this setting to allow that traffic to reach **vnet-2** from **vnet-1**. While enabling this capability allows the forwarded traffic through the peering, it doesn't create any user-defined routes or network virtual appliances. User-defined routes and network virtual appliances are created separately. Learn about [user-defined routes](virtual-networks-udr-overview.md#user-defined). </br></br> **NOTE:** *Not selecting the **Allow 'vnet-2' to receive forwarded traffic from 'vnet-1'** setting only changes the definition of the **VirtualNetwork** service tag. It *doesn't* fully prevent traffic flow across the peer connection, as explained in this setting description.* |
+ | Allow gateway in 'vnet-2' to forward traffic to 'vnet-1' | This option **isn't selected by default**. </br></br> - Select **Allow gateway in 'vnet-2' to forward traffic to 'vnet-1'** if you want **vnet-1** to receive traffic from **vnet-2**'s gateway/Route Server. **vnet-2** must contain a gateway in order for this option to be enabled. |
+ | Enable 'vnet-2' to use 'vnet-1' remote gateway | This option **isn't selected by default.** </br></br> - Select **Enable 'vnet-2' to use 'vnet-1' remote gateway** if you want **vnet-2** to use **vnet-1**'s gateway or Route Server. **vnet-2** can only use a remote gateway or Route Server from one peering connection. **vnet-1** must have a gateway or Route Server in order for you to select this option. For example, the virtual network you're peering with has a VPN gateway that enables communication to an on-premises network. Selecting this setting allows traffic from this virtual network to flow through the VPN gateway in the peered virtual network. </br></br> You can also select this option if you want this virtual network to use the remote Route Server to exchange routes. For more information, see [Azure Route Server](../route-server/overview.md). </br></br> This scenario requires implementing user-defined routes that specify the virtual network gateway as the next hop type. Learn about [user-defined routes](virtual-networks-udr-overview.md#user-defined). You can only specify a VPN gateway as a next hop type in a user-defined route; you can't specify an ExpressRoute gateway as the next hop type in a user-defined route. </br></br> **NOTE:** *You can't use remote gateways if you already have a gateway configured in your virtual network. To learn more about using a gateway for transit, see [Configure a VPN gateway for transit in a virtual network peering](../vpn-gateway/vpn-gateway-peering-gateway-transit.md)*. |
:::image type="content" source="./media/virtual-network-manage-peering/add-peering.png" alt-text="Screenshot of peering configuration page.":::
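For reference, the portal settings in this table correspond to flags on [az network vnet peering create](/cli/azure/network/vnet/peering). A sketch using the example network names; the gateway-related flags only apply when the remote network actually has a gateway:

```azurecli-interactive
# Create one side of the peering; each portal checkbox maps to a flag (names assumed).
az network vnet peering create \
    --resource-group myResourceGroup \
    --name vnet-1-to-vnet-2 \
    --vnet-name vnet-1 \
    --remote-vnet vnet-2 \
    --allow-vnet-access \
    --allow-forwarded-traffic \
    --use-remote-gateways
```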
virtual-wan About Virtual Hub Routing Preference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/about-virtual-hub-routing-preference.md
This section explains the route selection algorithm in a virtual hub along with
**Things to note:** * When there are multiple virtual hubs in a Virtual WAN scenario, a virtual hub selects the best routes using the route selection algorithm described above, and then advertises them to the other virtual hubs in the virtual WAN.
-* For a given set of destination route-prefixes, if the ExpressRoute routes are preferred and the ExpressRoute connection subsequently goes down, then routes from S2S VPN or SD-WAN NVA connections will be preferred for traffic destined to the same route-prefixes. When the ExpressRoute connection is restored, traffic destined for these route-prefixes will continue to prefer the S2S VPN or SD-WAN NVA connections.
+* For a given set of destination route-prefixes, if the ExpressRoute routes are preferred and the ExpressRoute connection subsequently goes down, then routes from S2S VPN or SD-WAN NVA connections will be preferred for traffic destined to the same route-prefixes. When the ExpressRoute connection is restored, traffic destined for these route-prefixes may continue to prefer the S2S VPN or SD-WAN NVA connections. To prevent this behavior, configure your on-premises device to use AS-path prepending for the routes it advertises to your S2S VPN gateway and SD-WAN NVA, ensuring the AS-path is longer for VPN/NVA routes than for ExpressRoute routes.
## Routing scenarios
virtual-wan Monitor Virtual Wan Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/monitor-virtual-wan-reference.md
The following metric is available for virtual hub router within a virtual hub:
| Metric | Description |
| --- | --- |
-| **Virtual Hub Data Processed** | Data on how much traffic traverses the virtual hub router in a given time period. Note that only the following flows use the virtual hub router: VNet to VNet (same hub) and VPN/ExpressRoute branch to VNet (interhub).|
+| **Virtual Hub Data Processed** | Data on how much traffic traverses the virtual hub router in a given time period. Note that only the following flows use the virtual hub router: VNet to VNet (same hub and interhub) and VPN/ExpressRoute branch to VNet (interhub). If a virtual hub is secured with routing intent, then these flows will traverse the firewall instead of the hub router. |
#### PowerShell steps
web-application-firewall Tutorial Restrict Web Traffic Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/web-application-firewall/ag/tutorial-restrict-web-traffic-cli.md
Replace \<username> and \<password> with your values before you run this.
az vmss create \ --name myvmss \ --resource-group myResourceGroupAG \
- --image UbuntuLTS \
+ --image Ubuntu2204 \
--admin-username <username> \ --admin-password <password> \ --instance-count 2 \