Updates from: 04/22/2023 01:15:59
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory Use Scim To Provision Users And Groups https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/use-scim-to-provision-users-and-groups.md
Use the checklist to onboard your application quickly and customers have a smoot
> * Establish engineering and support contacts to guide customers post gallery onboarding (Required)
> * 3 Non-expiring test credentials for your application (Required)
> * Support the OAuth authorization code grant or a long-lived token as described in the example (Required)
+> * OIDC apps must have at least 1 role (custom or default) defined
> * Establish an engineering and support point of contact to support customers post gallery onboarding (Required)
> * [Support schema discovery (required)](https://tools.ietf.org/html/rfc7643#section-6)
> * Support updating multiple group memberships with a single PATCH
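For the last checklist item, a minimal sketch of what such a request can look like (the endpoint URL, bearer token, and member IDs are placeholders; the payload shape follows the SCIM 2.0 PATCH format defined in RFC 7644):

```powershell
# Hypothetical sketch: a SCIM 2.0 PATCH that updates multiple group memberships
# in a single request. Endpoint, token, and member IDs are placeholders.
$body = @{
    schemas    = @("urn:ietf:params:scim:api:messages:2.0:PatchOp")
    Operations = @(
        @{
            op    = "add"
            path  = "members"
            value = @(
                @{ value = "user-id-1" },
                @{ value = "user-id-2" }
            )
        }
    )
} | ConvertTo-Json -Depth 6

Invoke-RestMethod -Method Patch -Uri "https://scim.example.com/scim/v2/Groups/<group-id>" `
    -Headers @{ Authorization = "Bearer <token>" } `
    -ContentType "application/scim+json" -Body $body
```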
active-directory App Proxy Protect Ndes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/app-proxy-protect-ndes.md
Azure AD Application Proxy is built on Azure. It gives you a massive amount of n
1. Select **+Add** to save your application.
1. Test whether you can access your NDES server via the Azure AD Application Proxy by pasting the link you copied in step 15 into a browser. You should see a default IIS welcome page.
1. As a final test, add the *mscep.dll* path to the existing URL you pasted in the previous step:
- `https://scep-test93635307549127448334.msappproxy.net/certsrv/mscep/mscep.dll`
-
+ `https://scep-test93635307549127448334.msappproxy.net/certsrv/mscep/mscep.dll`
1. You should see an **HTTP Error 403 – Forbidden** response.
1. Change the NDES URL provided (via Microsoft Intune) to devices. This change could either be in Microsoft Configuration Manager or the Microsoft Intune admin center.
- * For Configuration Manager, go to the certificate registration point and adjust the URL. This URL is what devices call out to and present their challenge.
- * For Intune standalone, either edit or create a new SCEP policy and add the new URL.
+ - For Configuration Manager, go to the certificate registration point and adjust the URL. This URL is what devices call out to and present their challenge.
+ - For Intune standalone, either edit or create a new SCEP policy and add the new URL.
## Next steps
active-directory Concept Authentication Authenticator App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-authentication-authenticator-app.md
The Authenticator app can help prevent unauthorized access to accounts and stop
![Screenshot of example web browser prompt for Authenticator app notification to complete sign-in process.](media/tutorial-enable-azure-mfa/tutorial-enable-azure-mfa-browser-prompt.png)
+In rare instances when the Google or Apple service responsible for push notifications is down, users may not receive their push notifications. In these cases, users should manually open the Microsoft Authenticator app (or a companion app like Outlook), refresh by either pulling down or selecting the refresh button, and approve the request.
+ > [!NOTE]
+ > If your organization has staff working in or traveling to China, the *Notification through mobile app* method on Android devices doesn't work in that country/region because Google Play services (including push notifications) are blocked there. However, iOS notifications do work. For Android devices, alternate authentication methods should be made available for those users.
active-directory How To Mfa Authenticator Lite https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/how-to-mfa-authenticator-lite.md
If enabled for Authenticator Lite, users are prompted to register their account
GET auditLogs/signIns ```
-If the sign-in was done by phone app notification, under **authenticationAppDeivceDetails** the **clientApp** field returns **microsoftAuthenticator** or **Outlook**.
+If the sign-in was done by phone app notification, under **authenticationAppDeviceDetails** the **clientApp** field returns **microsoftAuthenticator** or **Outlook**.
If a user has registered Authenticator Lite, the user's registered authentication methods include **Microsoft Authenticator (in Outlook)**.
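As an illustration of checking this field programmatically, here's a Microsoft Graph PowerShell sketch; the beta sign-in logs endpoint and the AuditLog.Read.All scope are assumptions, not part of the article:

```powershell
# Hypothetical sketch: print the clientApp reported for recent sign-ins that
# include authenticationAppDeviceDetails (beta endpoint assumed).
Connect-MgGraph -Scopes "AuditLog.Read.All"
$response = Invoke-MgGraphRequest -Method GET `
    -Uri "https://graph.microsoft.com/beta/auditLogs/signIns?`$top=25"
foreach ($signIn in $response.value) {
    $details = $signIn.authenticationAppDeviceDetails
    if ($null -ne $details) {
        "{0}: {1}" -f $signIn.userPrincipalName, $details.clientApp
    }
}
```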
active-directory Apple Sso Plugin https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/apple-sso-plugin.md
Previously updated : 03/13/2023 Last updated : 04/18/2023
-# Microsoft Enterprise SSO plug-in for Apple devices (preview)
-
-> [!IMPORTANT]
-> This feature is in public preview. This preview is provided without a service-level agreement. For more information, see [Supplemental terms of use for Microsoft Azure public previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+# Microsoft Enterprise SSO plug-in for Apple devices
The *Microsoft Enterprise SSO plug-in for Apple devices* provides single sign-on (SSO) for Azure Active Directory (Azure AD) accounts on macOS, iOS, and iPadOS across all applications that support Apple's [enterprise single sign-on](https://developer.apple.com/documentation/authenticationservices) feature. The plug-in provides SSO for even old applications that your business might depend on but that don't yet support the latest identity libraries or protocols. Microsoft worked closely with Apple to develop this plug-in to increase your application's usability while providing the best protection available.
To use the Microsoft Enterprise SSO plug-in for Apple devices:
### iOS requirements - iOS 13.0 or higher must be installed on the device.-- A Microsoft application that provides the Microsoft Enterprise SSO plug-in for Apple devices must be installed on the device. For Public Preview, these applications are the [Microsoft Authenticator app](https://support.microsoft.com/account-billing/how-to-use-the-microsoft-authenticator-app-9783c865-0308-42fb-a519-8cf666fe0acc).
+- A Microsoft application that provides the Microsoft Enterprise SSO plug-in for Apple devices must be installed on the device. This app is the [Microsoft Authenticator app](https://support.microsoft.com/account-billing/how-to-use-the-microsoft-authenticator-app-9783c865-0308-42fb-a519-8cf666fe0acc).
### macOS requirements - macOS 10.15 or higher must be installed on the device. -- A Microsoft application that provides the Microsoft Enterprise SSO plug-in for Apple devices must be installed on the device. For Public Preview, these applications include the [Intune Company Portal app](/mem/intune/user-help/enroll-your-device-in-intune-macos-cp).
+- A Microsoft application that provides the Microsoft Enterprise SSO plug-in for Apple devices must be installed on the device. This app is the [Intune Company Portal app](/mem/intune/user-help/enroll-your-device-in-intune-macos-cp).
## Enable the SSO plug-in
active-directory Spa Quickstart Portal Angular Ciam https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/spa-quickstart-portal-angular-ciam.md
+
+ Title: "Quickstart: Add sign in to an Angular SPA"
+description: Learn how to run a sample Angular SPA to sign in users
+ Last updated : 05/05/2023
+# Portal quickstart for Angular SPA
+
+> In this quickstart, you download and run a code sample that demonstrates how an Angular single-page application (SPA) can sign in users with Azure Active Directory for customers.
+>
+> [!div renderon="portal" id="display-on-portal" class="sxs-lookup"]
+> 1. Make sure you've installed [Node.js](https://nodejs.org/en/download/).
+>
+> 1. Unzip the sample, `cd` into the folder that contains `package.json`, then run the following commands:
+> ```console
+> npm install && npm start
+> ```
+> 1. Open your browser, visit `http://localhost:4200`, select **Sign-in**, then follow the prompts.
+>
active-directory Spa Quickstart Portal React Ciam https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/spa-quickstart-portal-react-ciam.md
+
+ Title: "Quickstart: Add sign in to a React SPA"
+description: Learn how to run a sample React SPA to sign in users
+ Last updated : 05/05/2023
+# Portal quickstart for React SPA
+
+> In this quickstart, you download and run a code sample that demonstrates how a React single-page application (SPA) can sign in users with Azure Active Directory for customers.
+>
+> [!div renderon="portal" id="display-on-portal" class="sxs-lookup"]
+> 1. Make sure you've installed [Node.js](https://nodejs.org/en/download/).
+>
+> 1. Unzip the sample, `cd` into the folder that contains `package.json`, then run the following commands:
+> ```console
+> npm install && npm start
+> ```
+> 1. Open your browser, visit `http://localhost:3000`, select **Sign-in**, then follow the prompts.
+>
active-directory Spa Quickstart Portal Vanilla Js Ciam https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/spa-quickstart-portal-vanilla-js-ciam.md
+
+ Title: "Quickstart: Add sign in to a JavaScript SPA"
+description: Learn how to run a sample JavaScript SPA to sign in users
+ Last updated : 05/05/2023
+# Portal quickstart for JavaScript application
+
+> In this quickstart, you download and run a code sample that demonstrates how a JavaScript SPA can sign in users with Azure Active Directory for customers.
+>
+> [!div renderon="portal" id="display-on-portal" class="sxs-lookup"]
+> 1. Make sure you've installed [Node.js](https://nodejs.org/en/download/).
+>
+> 1. Unzip the sample, `cd` into the app root folder, then run the following commands:
+> ```console
+> cd App && npm install && npm start
+> ```
+> 1. Open your browser, visit `http://localhost:3000`, select **Sign-in**, then follow the prompts.
+>
active-directory Web App Quickstart Portal Dotnet Ciam https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/web-app-quickstart-portal-dotnet-ciam.md
+
+ Title: "Quickstart: Add sign in to ASP.NET web app"
+description: Learn how to run a sample ASP.NET web app to sign in users
+ Last updated : 05/05/2023
+# Portal quickstart for ASP.NET web app
+
+> In this quickstart, you download and run a code sample that demonstrates how an ASP.NET web app can sign in users with Azure Active Directory for customers.
+>
+> [!div renderon="portal" id="display-on-portal" class="sxs-lookup"]
+> 1. Make sure you've installed [.NET SDK v7](https://dotnet.microsoft.com/download/dotnet/7.0) or later.
+>
+> 1. Unzip the sample, `cd` into the app root folder, then run the following command:
+> ```console
+> dotnet run
+> ```
+> 1. Open your browser, visit `https://localhost:7274`, select **Sign-in**, then follow the prompts.
+>
active-directory Web App Quickstart Portal Node Js Ciam https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/web-app-quickstart-portal-node-js-ciam.md
Title: "Quickstart: Add sign in to a React SPA"
-description: Learn how to run a sample React SPA to sign in users
+ Title: "Quickstart: Add sign in to a Node.js/Express web app"
+description: Learn how to run a sample Node.js/Express web app to sign in users
Previously updated : 04/12/2023 Last updated : 05/05/2023
-# Portal quickstart for React SPA
+# Portal quickstart for Node.js/Express web app
-> In this quickstart, you download and run a code sample that demonstrates how a React single-page application (SPA) can sign in users with Azure AD CIAM.
+> In this quickstart, you download and run a code sample that demonstrates how a Node.js/Express web app can sign in users with Azure Active Directory for customers.
> > [!div renderon="portal" id="display-on-portal" class="sxs-lookup"] > 1. Make sure you've installed [Node.js](https://nodejs.org/en/download/).
Last updated 04/12/2023
> ```console > npm install && npm start > ```
-> 1. Open your browser, visit `http://locahost:3000`, select **Sign-in** link, then follow the prompts.
+> 1. Open your browser, visit `http://localhost:3000`, select **Sign-in**, then follow the prompts.
>
active-directory Concept Primary Refresh Token https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/devices/concept-primary-refresh-token.md
A PRT is invalidated in the following scenarios:
* **Invalid user**: If a user is deleted or disabled in Azure AD, their PRT is invalidated and can't be used to obtain tokens for applications. If a deleted or disabled user already signed in to a device before, cached sign-in would log them in, until CloudAP is aware of their invalid state. Once CloudAP determines that the user is invalid, it blocks subsequent logons. An invalid user is automatically blocked from signing in to new devices that don't have their credentials cached.
* **Invalid device**: If a device is deleted or disabled in Azure AD, the PRT obtained on that device is invalidated and can't be used to obtain tokens for other applications. If a user is already signed in to an invalid device, they can continue to do so. But all tokens on the device are invalidated and the user doesn't have SSO to any resources from that device.
-* **Password change**: After a user changes their password, the PRT obtained with the previous password is invalidated by Azure AD. Password change results in the user getting a new PRT. This invalidation can happen in two different ways:
+* **Password change**: If a user obtained the PRT with their password, the PRT is invalidated by Azure AD when the user changes their password. Password change results in the user getting a new PRT. This invalidation can happen in two different ways:
  * If the user signs in to Windows with their new password, CloudAP discards the old PRT and requests Azure AD to issue a new PRT with the new password. If the user doesn't have an internet connection, the new password can't be validated, and Windows may require the user to enter their old password.
  * If a user has logged in with their old password or changed their password after signing in to Windows, the old PRT is used for any WAM-based token requests. In this scenario, the user is prompted to reauthenticate during the WAM token request and a new PRT is issued.
* **TPM issues**: Sometimes, a device's TPM can falter or fail, leading to inaccessibility of keys secured by the TPM. In this case, the device is incapable of getting a PRT or requesting tokens using an existing PRT because it can't prove possession of the cryptographic keys. As a result, any existing PRT is invalidated by Azure AD. When Windows 10 detects a failure, it initiates a recovery flow to re-register the device with new cryptographic keys. With hybrid Azure AD join, just like the initial registration, the recovery happens silently without user input. For Azure AD joined or Azure AD registered devices, the recovery needs to be performed by a user who has administrator privileges on the device. In this scenario, the recovery flow is initiated by a Windows prompt that guides the user to successfully recover the device.
active-directory Howto Manage Local Admin Passwords https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/devices/howto-manage-local-admin-passwords.md
+
+ Title: Use Windows Local Administrator Password Solution (LAPS) with Azure AD (preview)
+description: Manage your device's local administrator password with Azure AD LAPS.
+ Last updated : 04/21/2023
+# Windows Local Administrator Password Solution in Azure AD (preview)
+
+> [!IMPORTANT]
+> Azure AD support for Windows Local Administrator Password Solution is currently in preview.
+> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+
+Every Windows device comes with a built-in local administrator account that you must secure and protect to mitigate any Pass-the-Hash (PtH) and lateral traversal attacks. Many customers have been using our standalone, on-premises [Local Administrator Password Solution (LAPS)](https://www.microsoft.com/download/details.aspx?id=46899) product for local administrator password management of their domain joined Windows machines. With Azure AD support for Windows LAPS, we're providing a consistent experience for both Azure AD joined and hybrid Azure AD joined devices.
+
+Azure AD support for LAPS includes the following capabilities:
+
+- **Enabling Windows LAPS with Azure AD** - Enable a tenant-wide policy and a client-side policy to back up local administrator passwords to Azure AD.
+- **Local administrator password management** - Configure client-side policies to set account name, password age, length, complexity, manual password reset and so on.
+- **Recovering local administrator password** - Use API/Portal experiences for local administrator password recovery.
+- **Enumerating all Windows LAPS enabled devices** - Use API/Portal experiences to enumerate all Windows devices in Azure AD enabled with Windows LAPS.
+- **Authorization of local administrator password recovery** - Use role-based access control (RBAC) policies with custom roles and administrative units.
+- **Auditing local administrator password update and recovery** - Use audit logs API/Portal experiences to monitor password update and recovery events.
+- **Conditional Access policies for local administrator password recovery** - Configure Conditional Access policies on directory roles that have the authorization of password recovery.
+
+> [!NOTE]
+> Windows LAPS with Azure AD is not supported for Windows devices that are [Azure AD registered](concept-azure-ad-register.md).
+
+Local Administrator Password Solution isn't supported on non-Windows platforms.
+
+To learn about Windows LAPS in more detail, start with the following articles in the Windows documentation:
+
+- [What is Windows LAPS?](/windows-server/identity/laps/laps-scenarios-azure-active-directory) – Introduction to Windows LAPS and the Windows LAPS documentation set.
+- [Windows LAPS CSP](/windows/client-management/mdm/laps-csp) – View the full details for LAPS settings and options. Intune policy for LAPS uses these settings to configure the LAPS CSP on devices.
+- [Microsoft Intune support for Windows LAPS](/mem/intune/protect/windows-laps-overview)
+- [Windows LAPS architecture](/windows-server/identity/laps/laps-concepts#windows-laps-architecture)
+
+## Requirements
+
+### Supported Azure regions and Windows distributions
+
+This feature is now available in the following Azure clouds:
+
+- Azure Global
+- Azure Government
+- Azure China 21Vianet
+
+### Operating system updates
+
+This feature is now available on the following Windows OS platforms with the specified update or later installed:
+
+- [Windows 11 22H2 - April 11 2023 Update](https://support.microsoft.com/help/5025239)
+- [Windows 11 21H2 - April 11 2023 Update](https://support.microsoft.com/help/5025224)
+- [Windows 10 20H2, 21H2 and 22H2 - April 11 2023 Update](https://support.microsoft.com/help/5025221)
+- [Windows Server 2022 - April 11 2023 Update](https://support.microsoft.com/help/5025230)
+- [Windows Server 2019 - April 11 2023 Update](https://support.microsoft.com/help/5025229)
+
+### Join types
+
+LAPS is supported on Azure AD joined or hybrid Azure AD joined devices only. Azure AD registered devices aren't supported.
+
+### License requirements
+
+LAPS is available to all customers with Azure AD Free or higher licenses. Other related features like administrative units, custom roles, Conditional Access, and Intune have other licensing requirements.
+
+### Required roles or permission
+
+Other than the built-in Azure AD roles of Cloud Device Administrator, Intune Administrator, and Global Administrator that are granted *device.LocalCredentials.Read.All*, you can use [Azure AD custom roles](/azure/active-directory/roles/custom-create) or administrative units to authorize local administrator password recovery. For example,
+
+- Custom roles must be assigned the *microsoft.directory/deviceLocalCredentials/password/read* permission to authorize local administrator password recovery. During the preview, you must create a custom role and grant permissions using the [Microsoft Graph API](/azure/active-directory/roles/custom-create#create-a-role-with-the-microsoft-graph-api) or [PowerShell](/azure/active-directory/roles/custom-create#create-a-role-using-powershell). Once you have created the custom role, you can assign it to users.
+
+- You can also create an Azure AD [administrative unit](/azure/active-directory/roles/administrative-units), add devices, and assign the Cloud Device Administrator role scoped to the administrative unit to authorize local administrator password recovery.
+
+## Enabling Windows LAPS with Azure AD
+
+To enable Windows LAPS with Azure AD, you must take actions in Azure AD and the devices you wish to manage. We recommend organizations [manage Windows LAPS using Microsoft Intune](/mem/intune/protect/windows-laps-policy). However, if your devices are Azure AD joined but you're not using Microsoft Intune or Microsoft Intune isn't supported (like for Windows Server 2019/2022), you can still deploy Windows LAPS for Azure AD manually. For more information, see the article [Configure Windows LAPS policy settings](/windows-server/identity/laps/laps-management-policy-settings).
+
+1. Sign in to the **Azure portal** as a [Cloud Device Administrator](../roles/permissions-reference.md#cloud-device-administrator).
+1. Browse to **Azure Active Directory** > **Devices** > **Device settings**
+1. Select **Yes** for the Enable Local Administrator Password Solution (LAPS) setting and select **Save**. You may also use the Microsoft Graph API [Update deviceRegistrationPolicy](/graph/api/deviceregistrationpolicy-update?view=graph-rest-beta&preserve-view=true).
+1. Configure a client-side policy and set the **BackUpDirectory** to be Azure AD.
+
+ - If you're using Microsoft Intune to manage client side policies, see [Manage Windows LAPS using Microsoft Intune](/mem/intune/protect/windows-laps-policy)
+ - If you're using Group Policy Objects (GPO) to manage client side policies, see [Windows LAPS Group Policy](/windows-server/identity/laps/laps-management-policy-settings#windows-laps-group-policy)
+
+## Recovering local administrator password
+
+To view the local administrator password for a Windows device joined to Azure AD, you must be granted the *deviceLocalCredentials.Read.All* permission, and you must be assigned one of the following roles:
+
+- [Cloud Device Administrator](../roles/permissions-reference.md#cloud-device-administrator)
+- [Intune Service Administrator](../roles/permissions-reference.md#intune-administrator)
+- [Global Administrator](../roles/permissions-reference.md#global-administrator)
+
+You can also use the Microsoft Graph API [Get deviceLocalCredentialInfo](/graph/api/devicelocalcredentialinfo-get?view=graph-rest-beta&preserve-view=true) to recover the local administrator password. If you use the Microsoft Graph API, the password returned is a Base64-encoded value that you need to decode before using it.
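As a concrete illustration, here's a minimal Microsoft Graph PowerShell sketch; the beta endpoint path, the *passwordBase64* property name, and the permission scope are assumptions based on the linked API reference rather than statements from this article:

```powershell
# Hypothetical sketch: recover and decode a device's local administrator password.
# Endpoint path, property names, and scope are assumptions (beta API).
Connect-MgGraph -Scopes "DeviceLocalCredential.Read.All"
$deviceId = "<device object ID>"   # placeholder
$credInfo = Invoke-MgGraphRequest -Method GET `
    -Uri "https://graph.microsoft.com/beta/deviceLocalCredentials/$deviceId?`$select=credentials"
$encoded = $credInfo.credentials[0].passwordBase64
[System.Text.Encoding]::UTF8.GetString([System.Convert]::FromBase64String($encoded))
```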
+
+## List all Windows LAPS enabled devices
+
+To list all Windows LAPS enabled devices in Azure AD, you can browse to **Azure Active Directory** > **Devices** > **Local administrator password recovery (Preview)** or use the Microsoft Graph API.
+
+## Auditing local administrator password update and recovery
+
+To view audit events, you can browse to **Azure Active Directory** > **Devices** > **Audit logs**, then use the **Activity** filter and search for **Update device local administrator password** or **Recover device local administrator password** to view the audit events.
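The same events can also be retrieved with Microsoft Graph PowerShell. A small sketch, assuming the Microsoft.Graph.Reports module and the AuditLog.Read.All scope:

```powershell
# Hypothetical sketch: list LAPS password recovery events from the directory audit log.
Connect-MgGraph -Scopes "AuditLog.Read.All"
Get-MgAuditLogDirectoryAudit -Filter "activityDisplayName eq 'Recover device local administrator password'" |
    Select-Object ActivityDateTime, ActivityDisplayName,
        @{ Name = 'Actor'; Expression = { $_.InitiatedBy.User.UserPrincipalName } }
```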
+
+## Conditional Access policies for local administrator password recovery
+
+Conditional Access policies can be scoped to the built-in roles like Cloud Device Administrator, Intune Administrator, and Global Administrator to protect access to recover local administrator passwords. You can find an example of a policy that requires multifactor authentication in the article, [Common Conditional Access policy: Require MFA for administrators](../conditional-access/howto-conditional-access-policy-admin-mfa.md).
+
+> [!NOTE]
+> Other role types, including administrative unit-scoped roles and custom roles, aren't supported.
+
+## Frequently asked questions
+
+### Is Windows LAPS with Azure AD management configuration supported using Group Policy Objects (GPO)?
+
+Yes, for [hybrid Azure AD joined](concept-azure-ad-join-hybrid.md) devices only. See [Windows LAPS Group Policy](/windows-server/identity/laps/laps-management-policy-settings#windows-laps-group-policy).
+
+### Is Windows LAPS with Azure AD management configuration supported using MDM?
+
+Yes, for [Azure AD join](concept-azure-ad-join.md)/[hybrid Azure AD join](concept-azure-ad-join-hybrid.md) ([co-managed](/mem/configmgr/comanage/overview)) devices. Customers can use [Microsoft Intune](/mem/intune/protect/windows-laps-overview) or any other third party MDM of their choice.
+
+### What happens when a device is deleted in Azure AD?
+
+When a device is deleted in Azure AD, the LAPS credential that was tied to that device is lost and the password that is stored in Azure AD is lost. Unless you have a custom workflow to retrieve LAPS passwords and store them externally, there's no method in Azure AD to recover the LAPS managed password for a deleted device.
+
+### What roles are needed to recover LAPS passwords?
+
+The following built-in Azure AD roles have permission to recover LAPS passwords: Global Administrator, Cloud Device Administrator, and Intune Administrator.
+
+### What roles are needed to read LAPS metadata?
+
+The following built-in roles are supported to view metadata about LAPS including the device name, last password rotation, and next password rotation: Global Administrator, Cloud Device Administrator, Intune Administrator, Helpdesk Administrator, Security Reader, Security Administrator, and Global Reader.
+
+### Are custom roles supported?
+
+Yes. If you have Azure AD Premium, you can create a custom role with the following RBAC permissions:
+
+- To read LAPS metadata: *microsoft.directory/deviceLocalCredentials/standard/read*
+- To read LAPS passwords: *microsoft.directory/deviceLocalCredentials/password/read*
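For example, a custom role that carries both permissions could be created with Microsoft Graph PowerShell. This is a sketch only; the display name and description are placeholders:

```powershell
# Hypothetical sketch: create a custom role that can read LAPS metadata and passwords.
Connect-MgGraph -Scopes "RoleManagement.ReadWrite.Directory"
$rolePermissions = @(
    @{
        allowedResourceActions = @(
            "microsoft.directory/deviceLocalCredentials/standard/read",
            "microsoft.directory/deviceLocalCredentials/password/read"
        )
    }
)
New-MgRoleManagementDirectoryRoleDefinition -DisplayName "LAPS Password Reader" `
    -Description "Can read LAPS metadata and recover local administrator passwords" `
    -RolePermissions $rolePermissions -IsEnabled:$true
```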
+
+### What happens when the local administrator account specified by policy is changed?
+
+Because Windows LAPS can only manage one local administrator account on a device at a time, the original account is no longer managed by LAPS policy. If policy has the device back up the new account, that account is backed up, and details about the previous account are no longer available from within the Intune admin center or from the directory that is specified to store the account information.
+
+## Next steps
+
+- [Choosing a device identity](overview.md#modern-device-scenario)
+- [Microsoft Intune support for Windows LAPS](/mem/intune/protect/windows-laps-overview)
+- [Create policy for LAPS](/mem/intune/protect/windows-laps-policy)
+- [View reports for LAPS](/mem/intune/protect/windows-laps-reports)
+- [Account protection policy for endpoint security in Intune](/mem/intune/protect/endpoint-security-account-protection-policy)
active-directory Licensing Service Plan Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/licensing-service-plan-reference.md
Previously updated : 04/17/2023 Last updated : 04/20/2023
When managing licenses in [the Azure portal](https://portal.azure.com/#blade/Mic
- **Service plans included (friendly names)**: A list of service plans (friendly names) in the product that correspond to the string ID and GUID >[!NOTE]
->This information last updated on April 17th, 2023.<br/>You can also download a CSV version of this table [here](https://download.microsoft.com/download/e/3/e/e3e9faf2-f28b-490a-9ada-c6089a1fc5b0/Product%20names%20and%20service%20plan%20identifiers%20for%20licensing.csv).
+>This information was last updated on April 20th, 2023.<br/>You can also download a CSV version of this table [here](https://download.microsoft.com/download/e/3/e/e3e9faf2-f28b-490a-9ada-c6089a1fc5b0/Product%20names%20and%20service%20plan%20identifiers%20for%20licensing.csv).
><br/> | Product name | String ID | GUID | Service plans included | Service plans included (friendly names) |
When managing licenses in [the Azure portal](https://portal.azure.com/#blade/Mic
| Microsoft Cloud App Security | ADALLOM_STANDALONE | df845ce7-05f9-4894-b5f2-11bbfbcfd2b6 | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>ADALLOM_S_STANDALONE (2e2ddb96-6af9-4b1d-a3f0-d6ecfd22edb2) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Microsoft Cloud App Security (2e2ddb96-6af9-4b1d-a3f0-d6ecfd22edb2) | | Microsoft Defender for Endpoint | WIN_DEF_ATP | 111046dd-295b-4d6d-9724-d52ac90bd1f2 | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>WINDEFATP (871d91ec-ec1a-452b-a83f-bd76c7d770ef) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>MICROSOFT DEFENDER FOR ENDPOINT (871d91ec-ec1a-452b-a83f-bd76c7d770ef) | | Microsoft Defender for Endpoint P1 | DEFENDER_ENDPOINT_P1 | 16a55f2f-ff35-4cd5-9146-fb784e3761a5 | Intune_Defender (1689aade-3d6a-4bfc-b017-46d2672df5ad)<br/>MDE_LITE (292cc034-7b7c-4950-aaf5-943befd3f1d4) | MDE_SecurityManagement (1689aade-3d6a-4bfc-b017-46d2672df5ad)<br/>Microsoft Defender for Endpoint Plan 1 (292cc034-7b7c-4950-aaf5-943befd3f1d4) |
+| Microsoft Defender for Endpoint P1 for EDU | DEFENDER_ENDPOINT_P1_EDU | bba890d4-7881-4584-8102-0c3fdfb739a7 | MDE_LITE (292cc034-7b7c-4950-aaf5-943befd3f1d4) | Microsoft Defender for Endpoint Plan 1 (292cc034-7b7c-4950-aaf5-943befd3f1d4) |
| Microsoft Defender for Endpoint P2_XPLAT | MDATP_XPLAT | b126b073-72db-4a9d-87a4-b17afe41d4ab | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Intune_Defender (1689aade-3d6a-4bfc-b017-46d2672df5ad)<br/>WINDEFATP (871d91ec-ec1a-452b-a83f-bd76c7d770ef) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>MDE_SecurityManagement (1689aade-3d6a-4bfc-b017-46d2672df5ad)<br/>Microsoft Defender for Endpoint (871d91ec-ec1a-452b-a83f-bd76c7d770ef) | | Microsoft Defender for Endpoint Server | MDATP_Server | 509e8ab6-0274-4cda-bcbd-bd164fd562c4 | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>WINDEFATP (871d91ec-ec1a-452b-a83f-bd76c7d770ef) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Microsoft Defender for Endpoint (871d91ec-ec1a-452b-a83f-bd76c7d770ef) | | Microsoft Defender for Office 365 (Plan 1) Faculty | ATP_ENTERPRISE_FACULTY | 26ad4b5c-b686-462e-84b9-d7c22b46837f | ATP_ENTERPRISE (f20fedf3-f3c3-43c3-8267-2bfdd51c0939) | Microsoft Defender for Office 365 (Plan 1) (f20fedf3-f3c3-43c3-8267-2bfdd51c0939) |
When managing licenses in [the Azure portal](https://portal.azure.com/#blade/Mic
| Microsoft Stream | STREAM | 1f2f344a-700d-42c9-9427-5cea1d5d7ba6 | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>MICROSOFTSTREAM (acffdce6-c30f-4dc2-81c0-372e33c515ec) | EXCHANGE FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>MICROSOFT STREAM (acffdce6-c30f-4dc2-81c0-372e33c515ec) | | Microsoft Stream Plan 2 | STREAM_P2 | ec156933-b85b-4c50-84ec-c9e5603709ef | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>STREAM_P2 (d3a458d0-f10d-48c2-9e44-86f3f684029e) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Microsoft Stream Plan 2 (d3a458d0-f10d-48c2-9e44-86f3f684029e) | | Microsoft Stream Storage Add-On (500 GB) | STREAM_STORAGE | 9bd7c846-9556-4453-a542-191d527209e8 | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>STREAM_STORAGE (83bced11-77ce-4071-95bd-240133796768) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Microsoft Stream Storage Add-On (83bced11-77ce-4071-95bd-240133796768) |
-| Microsoft Teams Audio Conferencing select dial-out | Microsoft_Teams_Audio_Conferencing_select_dial_out | 1c27243e-fb4d-42b1-ae8c-fe25c9616588 | MCOMEETBASIC (9974d6cf-cd24-4ba2-921c-e2aa687da846) | Microsoft Teams Audio Conferencing with dial-out to select geographies (9974d6cf-cd24-4ba2-921c-e2aa687da846) |
+| Microsoft Teams Audio Conferencing with dial-out to USA/CAN | Microsoft_Teams_Audio_Conferencing_select_dial_out | 1c27243e-fb4d-42b1-ae8c-fe25c9616588 | MCOMEETBASIC (9974d6cf-cd24-4ba2-921c-e2aa687da846) | Microsoft Teams Audio Conferencing with dial-out to select geographies (9974d6cf-cd24-4ba2-921c-e2aa687da846) |
| Microsoft Teams (Free) | TEAMS_FREE | 16ddbbfc-09ea-4de2-b1d7-312db6112d70 | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>MCOFREE (617d9209-3b90-4879-96e6-838c42b2701d)<br/>TEAMS_FREE (4fa4026d-ce74-4962-a151-8e96d57ea8e4)<br/>SHAREPOINTDESKLESS (902b47e5-dcb2-4fdc-858b-c63a90a2bdb9)<br/>TEAMS_FREE_SERVICE (bd6f2ac2-991a-49f9-b23c-18c96a02c228)<br/>WHITEBOARD_FIRSTLINE1 (36b29273-c6d0-477a-aca6-6fbe24f538e3) | EXCHANGE FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>MCO FREE FOR MICROSOFT TEAMS (FREE) (617d9209-3b90-4879-96e6-838c42b2701d)<br/>MICROSOFT TEAMS (FREE) (4fa4026d-ce74-4962-a151-8e96d57ea8e4)<br/>SHAREPOINT KIOSK (902b47e5-dcb2-4fdc-858b-c63a90a2bdb9)<br/>TEAMS FREE SERVICE (bd6f2ac2-991a-49f9-b23c-18c96a02c228)<br/>WHITEBOARD (FIRSTLINE) (36b29273-c6d0-477a-aca6-6fbe24f538e3) | | Microsoft Teams Essentials | Teams_Ess | fde42873-30b6-436b-b361-21af5a6b84ae | TeamsEss (f4f2f6de-6830-442b-a433-e92249faebe2) | Microsoft Teams Essentials (f4f2f6de-6830-442b-a433-e92249faebe2) | | Microsoft Teams Essentials (AAD Identity) | TEAMS_ESSENTIALS_AAD | 3ab6abff-666f-4424-bfb7-f0bc274ec7bc | EXCHANGE_S_DESKLESS (4a82b400-a79f-41a4-b4e2-e94f5787b113)<br/>FORMS_PLAN_E1 (159f4cd6-e380-449f-a816-af1a9ef76344)<br/>MICROSOFT_SEARCH (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>INTUNE_O365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>SHAREPOINTWAC (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>ONEDRIVE_BASIC_P2 (4495894f-534f-41ca-9d3b-0ebf1220a423)<br/>MCOIMP (afc06cb0-b4f4-4473-8286-d644f70d8faf) | Exchange Online Kiosk (4a82b400-a79f-41a4-b4e2-e94f5787b113)<br/>Microsoft Forms (Plan E1) (159f4cd6-e380-449f-a816-af1a9ef76344)<br/>Microsoft Search (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>Microsoft Teams (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>Mobile Device Management for Office 365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>Office for the Web (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>OneDrive for Business (Basic 2) (4495894f-534f-41ca-9d3b-0ebf1220a423)<br/>Skype for Business Online (Plan 1) (afc06cb0-b4f4-4473-8286-d644f70d8faf) |
active-directory Add Users Administrator https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/add-users-administrator.md
Previously updated : 10/12/2022 Last updated : 04/21/2023
# Add Azure Active Directory B2B collaboration users in the Azure portal
-As a user who is assigned any of the limited administrator directory roles, you can use the Azure portal to invite B2B collaboration users. You can invite guest users to the directory, to a group, or to an application. After you invite a user through any of these methods, the invited user's account is added to Azure Active Directory (Azure AD), with a user type of *Guest*. The guest user must then redeem their invitation to access resources. An invitation of a user does not expire.
+As a user who is assigned any of the limited administrator directory roles, you can use the Azure portal to invite B2B collaboration users. You can invite guest users to the directory, to a group, or to an application. After you invite a user through any of these methods, the invited user's account is added to Azure Active Directory (Azure AD), with a user type of *Guest*. The guest user must then redeem their invitation to access resources. An invitation of a user doesn't expire.
After you add a guest user to the directory, you can either send the guest user a direct link to a shared app, or the guest user can select the redemption URL in the invitation email. For more information about the redemption process, see [B2B collaboration invitation redemption](redemption-experience.md). > [!IMPORTANT] > You should follow the steps in [How-to: Add your organization's privacy info in Azure Active Directory](../fundamentals/active-directory-properties-area.md) to add the URL of your organization's privacy statement. As part of the first time invitation redemption process, an invited user must consent to your privacy terms to continue.
+The updated experience for creating new users covered in this article is available as an Azure AD preview feature. This feature is enabled by default, but you can opt out by going to **Azure AD** > **Preview features** and disabling the **Create user experience** feature. For more information about previews, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+
+Instructions for the legacy create user process can be found in the [Add or delete users](../fundamentals/add-users-azure-active-directory.md) article.
+ ## Before you begin Make sure your organization's external collaboration settings are configured such that you're allowed to invite guests. By default, all users and admins can invite guests. But your organization's external collaboration policies might be configured to prevent certain types of users or admins from inviting guests. To find out how to view and set these policies, see [Enable B2B external collaboration and manage who can invite guests](external-collaboration-settings-configure.md).
Make sure your organization's external collaboration settings are configured suc
To add B2B collaboration users to the directory, follow these steps:
-1. Sign in to the [Azure portal](https://portal.azure.com) as a user who is assigned a limited administrator directory role or the Guest Inviter role.
-2. Search for and select **Azure Active Directory** from any page.
-3. Under **Manage**, select **Users**.
-4. Select **New user** > **Invite external user**. (Or, if you're using the legacy experience, select **New guest user**).
-5. On the **New user** page, select **Invite user** and then add the guest user's information.
+1. Sign in to the [Azure portal](https://portal.azure.com/) in the **User Administrator** role. A role with Guest Inviter privileges can also invite external users.
+
+1. Navigate to **Azure Active Directory** > **Users**.
+
+1. Select **Invite external user** from the menu.
+
+ ![Screenshot of the invite external user menu option.](media/add-users-administrator/invite-external-user-menu.png)
+
+### Basics
+
+In this section, you're inviting the guest to your tenant using *their email address*. If you need to create a guest user with a domain account, use the [create new user process](../fundamentals/how-to-create-delete-users.md#create-a-new-user) but change the **User type** to **Guest**.
+
+- **Email**: Enter the email address for the guest user you're inviting.
+
+- **Display name**: Provide the display name.
+
+- **Invitation message**: Select the **Send invite message** checkbox to customize a brief message to the guest. Provide a Cc recipient, if necessary.
+
+![Screenshot of the invite external user Basics tab.](media/add-users-administrator/invite-external-user-basics-tab.png)
+
+Either select the **Review + invite** button to create the new user or **Next: Properties** to complete the next section.
+
+### Properties
+
+There are six categories of user properties you can provide. These properties can be added or updated after the user is created. To manage these details, go to **Azure AD** > **Users** and select a user to update.
+
+- **Identity:** Enter the user's first and last name. Set the User type as either Member or Guest. For more information about the difference between external guests and members, see [B2B collaboration user properties](user-properties.md).
+
+- **Job information:** Add any job-related information, such as the user's job title, department, or manager.
+
+- **Contact information:** Add any relevant contact information for the user.
+
+- **Parental controls:** For organizations like K-12 school districts, the user's age group may need to be provided. *Minors* are 12 and under, *Not adult* are 13-18 years old, and *Adults* are 18 and over. The combination of age group and consent provided by parent options determines the Legal age group classification. The Legal age group classification may limit the user's access and authority.
+
+- **Settings:** Specify the user's global location.
+
+Either select the **Review + invite** button to create the new user or **Next: Assignments** to complete the next section.
+
+### Assignments
- ![Screenshot showing the new user page.](media/add-users-administrator/invite-user.png)
+You can assign external users to a group or an Azure AD role when the account is created. You can assign the user to up to 20 groups or roles. Group and role assignments can be added after the user is created. The **Privileged Role Administrator** role is required to assign Azure AD roles.
- - **Name.** The first and last name of the guest user.
- - **Email address (required)**. The email address of the guest user.
- - **Personal message (optional)** Include a personal welcome message to the guest user.
- - **Groups**: You can add the guest user to one or more existing groups, or you can do it later.
- - **Roles**: If you require Azure AD administrative permissions for the user, you can add them to an Azure AD role by selecting **User** next to **Roles**. [Learn more](../../role-based-access-control/role-assignments-external-users.md) about Azure roles for external guest users.
+**To assign a group to the new user**:
+
+1. Select **+ Add group**.
+1. From the menu that appears, choose up to 20 groups from the list and select the **Select** button.
+1. Select the **Review + create** button.
+
+ ![Screenshot of the add group assignment process.](media/add-users-administrator/invite-external-user-assignments-tab.png)
+
+**To assign a role to the new user**:
+
+1. Select **+ Add role**.
+1. From the menu that appears, choose up to 20 roles from the list and select the **Select** button.
+1. Select the **Review + invite** button.
+
+### Review and create
+
+The final tab captures several key details from the user creation process. Review the details and select the **Invite** button if everything looks good. An email invitation is automatically sent to the user. After you send the invitation, the user account is automatically added to the directory as a guest.
+
+ ![Screenshot showing the user list including the new Guest user.](media/add-users-administrator//guest-user-type.png)
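The same invitation can also be sent programmatically. A minimal Microsoft Graph PowerShell sketch (the email address, display name, and redirect URL are placeholders):

```powershell
# Hypothetical sketch: send a B2B invitation by email (all values are placeholders).
Connect-MgGraph -Scopes "User.Invite.All"
New-MgInvitation -InvitedUserEmailAddress "guest@example.com" `
    -InvitedUserDisplayName "Guest User" `
    -InviteRedirectUrl "https://myapps.microsoft.com" `
    -SendInvitationMessage
```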
+
+### External user invitations
+<a name="resend-invitations-to-guest-users"></a>
+
+When you invite an external guest user by sending an email invitation, you can check the status of the invitation from the user's details. If they haven't redeemed their invitation, you can resend the invitation email.
+
+1. Go to **Azure AD** > **Users** and select the invited guest user.
+1. In the **My Feed** section, locate the **B2B collaboration** tile.
+ - If the invitation state is **PendingAcceptance**, select the **Resend invitation** link to send another email and follow the prompts.
+ - You can also select the **Properties** for the user and view the **Invitation state**.
+
+![Screenshot of the My Feed section of the user overview page.](media/add-users-administrator/external-user-invitation-state.png)
> [!NOTE]
> Group email addresses aren't supported; enter the email address for an individual. Also, some email providers allow users to add a plus symbol (+) and additional text to their email addresses to help with things like inbox filtering. However, Azure AD doesn't currently support plus symbols in email addresses. To avoid delivery issues, omit the plus symbol and any characters following it up to the @ symbol.
-6. Select **Invite** to automatically send the invitation to the guest user.
-
-After you send the invitation, the user account is automatically added to the directory as a guest.
- ![Screenshot showing the user list including the new Guest user.](media/add-users-administrator//guest-user-type.png)
+The user is added to your directory with a user principal name (UPN) in the format *emailaddress*#EXT#\@*domain*. For example: *john_contoso.com#EXT#\@fabrikam.onmicrosoft.com*, where fabrikam.onmicrosoft.com is the organization from which you sent the invitations. ([Learn more about B2B collaboration user properties](user-properties.md).)
-The user is added to your directory with a user principal name (UPN) in the format *emailaddress*#EXT#\@*domain*, for example, *john_contoso.com#EXT#\@fabrikam.onmicrosoft.com*, where fabrikam.onmicrosoft.com is the organization from which you sent the invitations. ([Learn more about B2B collaboration user properties](user-properties.md).)
## Add guest users to a group
-If you need to manually add B2B collaboration users to a group, follow these steps:
+
+If you need to manually add B2B collaboration users to a group after the user was invited, follow these steps:
1. Sign in to the [Azure portal](https://portal.azure.com) as an Azure AD administrator. 2. Search for and select **Azure Active Directory** from any page.
If you need to manually add B2B collaboration users to a group, follow these ste
4. Select a group (or select **New group** to create a new one). It's a good idea to include in the group description that the group contains B2B guest users. 5. Under **Manage**, select **Members**. 6. Select **Add members**.
-7. Do one of the following:
+7. Complete one of the following sets of steps:
- *If the guest user is already in the directory:*
To add B2B collaboration users to an application, follow these steps:
5. Under **Manage**, select **Users and groups**. 6. Select **Add user/group**. 7. On the **Add Assignment** page, select the link under **Users**.
-8. Do one of the following:
+8. Complete one of the following sets of steps:
- *If the guest user is already in the directory:*
To add B2B collaboration users to an application, follow these steps:
d. Select **Assign**.
-## Resend invitations to guest users
-
-If a guest user hasn't yet redeemed their invitation, you can resend the invitation email.
-
-1. Sign in to the [Azure portal](https://portal.azure.com) as an Azure AD administrator.
-2. Search for and select **Azure Active Directory** from any page.
-3. Under **Manage**, select **Users**.
-4. In the list, select the user's name to open their user profile.
-5. Under **My Feed**, in the **B2B collaboration** tile, select the **Manage (resend invitation / reset status** link.
-6. If the user hasn't yet accepted the invitation, Select the **Yes** option to resend.
-
- ![Screenshot showing the Resend Invite radio button.](./media/add-users-administrator/resend-invitation.png)
-
-7. In the confirmation message, select **Yes** to confirm that you want to send the user a new email invitation for redeeming their guest account. An invitation URL will be generated and sent to the user.
- ## Next steps - To learn how non-Azure AD admins can add B2B guest users, see [How users in your organization can invite guest users to an app](add-users-information-worker.md)
active-directory B2b Quickstart Add Guest Users Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/b2b-quickstart-add-guest-users-portal.md
Previously updated : 02/16/2023 Last updated : 04/21/2023
In this quickstart, you'll learn how to add a new guest user to your Azure AD di
If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
+The updated experience for creating new users covered in this article is available as an Azure AD preview feature. This feature is enabled by default, but you can opt out by going to **Azure AD** > **Preview features** and disabling the **Create user experience** feature. For more information about previews, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+
+Instructions for the legacy create user process can be found in the [Add or delete users](../fundamentals/add-users-azure-active-directory.md) article.
+
## Prerequisites

To complete the scenario in this quickstart, you need:

-- A role that allows you to create users in your tenant directory, such as the Global Administrator role or a limited administrator directory role (for example, Guest inviter or User administrator).
+- A role that allows you to create users in your tenant directory, such as the Global Administrator role or a limited administrator directory role such as Guest Inviter or User Administrator.
- Access to a valid email address outside of your Azure AD tenant, such as a separate work, school, or social email address. You'll use this email to create the guest account in your tenant directory and access the invitation.
-## Add a new guest user in Azure AD
+## Invite an external guest user
-1. Sign in to the [Azure portal](https://portal.azure.com/) with an account that's been assigned the Global administrator, Guest, inviter, or User administrator role.
+This quickstart guide provides the basic steps to invite an external user. To learn about all of the properties and settings that you can include when you invite an external user, see [How to create and delete a user](../fundamentals/how-to-create-delete-users.md).
-1. Under **Azure services**, select **Azure Active Directory** (or use the search box to find and select **Azure Active Directory**).
+1. Sign in to the [Azure portal](https://portal.azure.com/) using one of the roles listed in the Prerequisites.
- :::image type="content" source="media/quickstart-add-users-portal/azure-active-directory-service.png" alt-text="Screenshot showing where to select the Azure Active Directory service.":::
+1. Navigate to **Azure Active Directory** > **Users**.
-1. Under **Manage**, select **Users**.
+1. Select **Invite external user** from the menu.
+
+ ![Screenshot of the invite external user menu option.](media/quickstart-add-users-portal/invite-external-user-menu.png)
+
+### Basics for external users
- :::image type="content" source="media/quickstart-add-users-portal/quickstart-users-portal-user.png" alt-text="Screenshot showing where to select the Users option.":::
+In this section, you're inviting the guest to your tenant using *their email address*. For this quickstart, enter an email address that you can access.
-1. Under **New user** select **Invite external user**.
+- **Email**: Enter the email address for the guest user you're inviting.
- :::image type="content" source="media/quickstart-add-users-portal/new-guest-user.png" alt-text="Screenshot showing where to select the New guest user option.":::
+- **Display name**: Provide the display name.
-1. On the **New user** page, select **Invite user** and then add the guest user's information.
+- **Invitation message**: Select the **Send invite message** checkbox to customize a brief message to preview how the invitation message appears.
- - **Name.** The first and last name of the guest user.
- - **Email address (required)**. The email address of the guest user.
- - **Personal message (optional)** Include a personal welcome message to the guest user.
- - **Groups**: You can add the guest user to one or more existing groups, or you can do it later.
- - **Roles**: If you require Azure AD administrative permissions for the user, you can add them to an Azure AD role.
+![Screenshot of the invite external user Basics tab.](media/quickstart-add-users-portal/invite-external-user-basics-tab.png)
- :::image type="content" source="media/quickstart-add-users-portal/invite-user.png" alt-text="Screenshot showing the new user page.":::
+Select the **Review and invite** button to finalize the process.
-1. Select **Invite** to automatically send the invitation to the guest user. A notification appears in the upper right with the message **Successfully invited user**.
+### Review and invite
+
+The final tab captures several key details from the user creation process. Review the details and select the **Invite** button if everything looks good.
+
+An email invitation is sent automatically.
1. After you send the invitation, the user account is automatically added to the directory as a guest.

   :::image type="content" source="media/quickstart-add-users-portal/new-guest-user-directory.png" alt-text="Screenshot showing the new guest user in the directory.":::

## Accept the invitation

Now sign in as the guest user to see the invitation.
Now sign in as the guest user to see the invitation.
:::image type="content" source="media/quickstart-add-users-portal/quickstart-users-portal-email-small.png" alt-text="Screenshot showing the B2B invitation email."::: - 1. In the email body, select **Accept invitation**. A **Review permissions** page opens in the browser. :::image type="content" source="media/quickstart-add-users-portal/consent-screen.png" alt-text="Screenshot showing the Review permissions page.":::
active-directory Reset Redemption Status https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/reset-redemption-status.md
Connect-MgGraph -Scopes "User.ReadWrite.All"
$user = Get-MgUser -Filter "startsWith(mail, 'john.doe@fabrikam.net')" New-MgInvitation ` -InvitedUserEmailAddress $user.Mail `
- -InviteRedirectUrl "http://myapps.microsoft.com" `
+ -InviteRedirectUrl "https://myapps.microsoft.com" `
-ResetRedemption ` -SendInvitationMessage ` -InvitedUser $user
active-directory Certificate Authorities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/certificate-authorities.md
- Title: Azure Active Directory certificate authorities
-description: Listing of trusted certificates used in Azure
- Previously updated : 10/10/2020
-# Certificate authorities used by Azure Active Directory
-
-> [!IMPORTANT]
-> The information in this page is relevant only to entities that explicitly specify a list of acceptable Certificate Authorities (CAs). This practice, known as certificate pinning, should be avoided unless there are no other options.
-
-Any entity trying to access Azure Active Directory (Azure AD) identity services via the TLS/SSL protocols will be presented with certificates from the CAs listed below. If the entity trusts those CAs, it may use the certificates to verify the identity and legitimacy of the identity services and establish secure connections.
-
-Certificate Authorities can be classified into root CAs and intermediate CAs. Typically, root CAs have one or more associated intermediate CAs. This article lists the root CAs used by Azure AD identity services and the intermediate CAs associated with each of those roots. For each CA, we include Uniform Resource Identifiers (URIs) to download the associated Authority Information Access (AIA) and the Certificate Revocation List Distribution Point (CDP) files. When appropriate, we also provide a URI to the Online Certificate Status Protocol (OCSP) endpoint.
-
-## CAs used in Azure Public and Azure US Government clouds
-
-Different services may use different root or intermediate CAs. Therefore all entries listed below may be required.
-
-### DigiCert Global Root G2
--
-| Root CA| Serial Number| Issue Date Expiration Date| SHA1 Thumbprint| URIs |
-| - |- |-|-|-|-|
-| DigiCert Global Root G2| 033af1e6a711a 9a0bb2864b11d09fae5| August 1, 2013 <br>January 15, 2038| df3c24f9bfd666761b268 073fe06d1cc8d4f82a4| [AIA](http://cacerts.digicert.com/DigiCertGlobalRootG2.crt)<br>[CDP](http://crl3.digicert.com/DigiCertGlobalRootG2.crl) |
--
-#### Associated Intermediate CAs
-
-| Issuing and Intermediate CA| Serial Number| Issue Date Expiration Date| SHA1 Thumbprint| URIs |
-| - | - | - | - | - |
-| Microsoft Azure TLS Issuing CA 01| 0aafa6c5ca63c45141 ea3be1f7c75317| July 29, 2020<br>June 27, 2024| 2f2877c5d778c31e0f29c 7e371df5471bd673173| [AIA](https://www.microsoft.com/pkiops/certs/Microsoft%20Azure%20TLS%20Issuing%20CA%2001%20-%20xsign.crt)<br>[CDP](https://www.microsoft.com/pkiops/crl/Microsoft%20Azure%20TLS%20Issuing%20CA%2001.crl)|
-|Microsoft Azure TLS Issuing CA 02| 0c6ae97cced59983 8690a00a9ea53214| July 29, 2020<br>June 27, 2024| e7eea674ca718e3befd 90858e09f8372ad0ae2aa| [AIA](https://www.microsoft.com/pkiops/certs/Microsoft%20Azure%20TLS%20Issuing%20CA%2002%20-%20xsign.crt)<br>[CDP](https://www.microsoft.com/pkiops/crl/Microsoft%20Azure%20TLS%20Issuing%20CA%2002.crl) |
-| Microsoft Azure TLS Issuing CA 05| 0d7bede97d8209967a 52631b8bdd18bd| July 29, 2020<br>June 27, 2024| 6c3af02e7f269aa73a fd0eff2a88a4a1f04ed1e5| [AIA](https://www.microsoft.com/pkiops/certs/Microsoft%20Azure%20TLS%20Issuing%20CA%2005%20-%20xsign.crt)<br>[CDP](https://www.microsoft.com/pkiops/crl/Microsoft%20Azure%20TLS%20Issuing%20CA%2005.crl) |
-| Microsoft Azure TLS Issuing CA 06| 02e79171fb8021e93fe 2d983834c50c0| July 29, 2020<br>June 27, 2024| 30e01761ab97e59a06b 41ef20af6f2de7ef4f7b0| [AIA](https://www.microsoft.com/pkiops/certs/Microsoft%20Azure%20TLS%20Issuing%20CA%2006.cer)<br>[CDP](https://www.microsoft.com/pkiops/crl/Microsoft%20Azure%20TLS%20Issuing%20CA%2006.crl) |
--
- ### Baltimore CyberTrust Root
-
-| Root CA| Serial Number| Issue Date Expiration Date| SHA1 Thumbprint| URIs |
-| - | - | - | - | - |
-| Baltimore CyberTrust Root| 020000b9| May 12, 2000<br>May 12, 2025| d4de20d05e66fc53fe 1a50882c78db2852cae474|<br>[CDP](http://crl3.digicert.com/Omniroot2025.crl)<br>[OCSP](http://ocsp.digicert.com/) |
--
-#### Associated Intermediate CAs
-
-| Issuing and Intermediate CA| Serial Number| Issue Date Expiration Date| SHA1 Thumbprint| URIs |
-| - | - | - | - | - |
-| Microsoft RSA TLS CA 01| 703d7a8f0ebf55aaa 59f98eaf4a206004eb2516a| July 21, 2020<br>October 8, 2024| 417e225037fbfaa4f9 5761d5ae729e1aea7e3a42| [AIA](https://www.microsoft.com/pki/mscorp/Microsoft%20RSA%20TLS%20CA%2001.crt)<br>[CDP](https://mscrl.microsoft.com/pki/mscorp/crl/Microsoft%20RSA%20TLS%20CA%2001.crl)<br>[OCSP](http://ocsp.msocsp.com/) |
-| Microsoft RSA TLS CA 02| b0c2d2d13cdd56cdaa 6ab6e2c04440be4a429c75| July 21, 2020<br>May 20, 2024| 54d9d20239080c32316ed 9ff980a48988f4adf2d| [AIA](https://www.microsoft.com/pki/mscorp/Microsoft%20RSA%20TLS%20CA%2002.crt)<br>[CDP](https://mscrl.microsoft.com/pki/mscorp/crl/Microsoft%20RSA%20TLS%20CA%2002.crl)<br>[OCSP](http://ocsp.msocsp.com/) |
--
- ### DigiCert Global Root CA
-
-| Root CA| Serial Number| Issue Date Expiration Date| SHA1 Thumbprint| URIs |
-| - | - | - | - | - |
-| DigiCert Global Root CA| 083be056904246 b1a1756ac95991c74a| November 9, 2006<br>November 9, 2031| a8985d3a65e5e5c4b2d7 d66d40c6dd2fb19c5436| [CDP](http://crl3.digicert.com/DigiCertGlobalRootCA.crl)<br>[OCSP](http://ocsp.digicert.com/) |
--
-#### Associated Intermediate CAs
-
-| Issuing and Intermediate CA| Serial Number| Issue Date Expiration Date| SHA1 Thumbprint| URIs |
-| - | - | - | - | - |
-| DigiCert SHA2 Secure Server CA| 01fda3eb6eca75c 888438b724bcfbc91| March 8, 2013 March 8, 2023| 1fb86b1168ec743154062 e8c9cc5b171a4b7ccb4| [AIA](http://cacerts.digicert.com/DigiCertSHA2SecureServerCA.crt)<br>[CDP](http://crl3.digicert.com/ssca-sha2-g6.crl)<br>[OCSP](http://ocsp.digicert.com/) |
-| DigiCert SHA2 Secure Server CA |02742eaa17ca8e21 c717bb1ffcfd0ca0 |September 22, 2020<br>September 22, 2030|626d44e704d1ceabe3bf 0d53397464ac8080142c|[AIA](http://cacerts.digicert.com/DigiCertSHA2SecureServerCA-2.crt)<br>[CDP](http://crl3.digicert.com/DigiCertSHA2SecureServerCA.crl)<br>[OCSP](http://ocsp.digicert.com/)|
--
-## CAs used in Azure China 21Vianet cloud
-
-### DigiCert Global Root CA
--
-| Root CA| Serial Number| Issue Date Expiration Date| SHA1 Thumbprint| URIs |
-| - | - | - | - | - |
-| DigiCert Global Root CA| 083be056904246b 1a1756ac95991c74a| Nov. 9, 2006<br>Nov. 9, 2031| a8985d3a65e5e5c4b2d7 d66d40c6dd2fb19c5436| [CDP](http://ocsp.digicert.com/)<br>[OCSP](http://crl3.digicert.com/DigiCertGlobalRootCA.crl) |
--
-#### Associated Intermediate CA
-
-| Issuing and Intermediate CA| Serial Number| Issue Date Expiration Date| SHA1 Thumbprint| URIs |
-| - | - | - | - | - | - |
-| DigiCert Basic RSA CN CA G2| 02f7e1f982bad 009aff47dc95741b2f6| March 4, 2020<br>March 4, 2030| 4d1fa5d1fb1ac3917c08e 43f65015e6aea571179| [AIA](http://cacerts.digicert.cn/DigiCertBasicRSACNCAG2.crt)<br>[CDP](http://crl.digicert.cn/DigiCertBasicRSACNCAG2.crl)<br>[OCSP](http://ocsp.digicert.cn/) |
-
-## Next Steps
-[Learn about Microsoft 365 Encryption chains](/microsoft-365/compliance/encryption-office-365-certificate-chains)
active-directory How To Create Delete Users https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/how-to-create-delete-users.md
+
+ Title: Create or delete users
+description: Instructions for how to create new users or delete existing users.
+++++++ Last updated : 04/21/2023++++++
+# How to create, invite, and delete users (preview)
+
+This article explains how to create a new user, invite an external guest, and delete a user in your Azure Active Directory (Azure AD) tenant.
+
+The updated experience for creating new users covered in this article is available as an Azure AD preview feature. This feature is enabled by default, but you can opt out by going to **Azure AD** > **Preview features** and disabling the **Create user experience** feature. For more information about previews, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+
+Instructions for the legacy create user process can be found in the [Add or delete users](add-users-azure-active-directory.md) article.
++
+## Before you begin
+
+Before you create or invite a new user, take some time to review the types of users, their authentication methods, and their access within the Azure AD tenant. For example, do you need to create an internal guest, an internal user, or an external guest? Does your new user need guest or member privileges?
+
+- **Internal member**: These users are most likely full-time employees in your organization.
+- **Internal guest**: These users have an account in your tenant, but have guest-level privileges. It's possible they were created within your tenant prior to the availability of B2B collaboration.
+- **External member**: These users authenticate using an external account, but have member access to your tenant. These types of users are common in [multi-tenant organizations](../multi-tenant-organizations/overview.md#what-is-a-multi-tenant-organization).
+- **External guest**: These users are true guests of your tenant who authenticate using an external method and who have guest-level privileges.
+
+For more information about the differences between internal and external guests and members, see [B2B collaboration properties](../external-identities/user-properties.md).
+
+Authentication methods vary based on the type of user you create. Internal guests and members have credentials in your Azure AD tenant that can be managed by administrators. These users can also reset their own password. External members authenticate to their home Azure AD tenant and your Azure AD tenant authenticates the user through a federated sign-in with the external member's Azure AD tenant. If external members forget their password, the administrator in their Azure AD tenant can reset their password. External guests set up their own password using the link they receive in email when their account is created.
+
+Reviewing the default user permissions may also help you determine the type of user you need to create. For more information, see [Set default user permissions](users-default-permissions.md).
+
+## Required roles
+
+The required role of least privilege varies based on the type of user you're adding and whether you need to assign Azure AD roles at the same time. A **Global Administrator** can create users and assign roles, but whenever possible you should use the least privileged role.
+
+| Task | Role |
+| -- | -- |
+| Create a new user | User Administrator |
+| Invite an external guest | Guest Inviter |
+| Assign Azure AD roles | Privileged Role Administrator |
+
+## Create a new user
+
+1. Sign in to the [Azure portal](https://portal.azure.com/) in the **User Administrator** role.
+
+1. Navigate to **Azure Active Directory** > **Users**.
+
+1. Select **Create new user** from the menu.
+
+ ![Screenshot of the create new user menu.](media/how-to-create-delete-users/create-new-user-menu.png)
+
+### Basics
+
+The **Basics** tab contains the core fields required to create a new user.
+
+- **User principal name**: Enter a unique username and select a domain from the menu after the @ symbol. Select **Domain not listed** if you need to create a new domain. For more information, see [Add your custom domain name](add-custom-domain.md)
+
+- **Mail nickname**: If you need to enter an email nickname that is different from the user principal name you entered, uncheck the **Derive from user principal name** option, then enter the mail nickname.
+
+- **Display name**: Enter the user's name, such as Chris Green or Chris A. Green
+
+- **Password**: Provide a password for the user to use during their initial sign-in. Uncheck the **Auto-generate password** option to enter a different password.
+
+- **Account enabled**: This option is checked by default. Uncheck it to prevent the new user from signing in. You can change this setting after the user is created. This setting was called **Block sign in** in the legacy create user process.
+
+Either select the **Review + create** button to create the new user or **Next: Properties** to complete the next section.
+
+![Screenshot of the create new user Basics tab.](media/how-to-create-delete-users/create-new-user-basics-tab.png)
+
+
+### Properties
+
+There are several categories of user properties you can provide. These properties can be added or updated after the user is created. To manage these details, go to **Azure AD** > **Users** and select a user to update.
+
+- **Identity:** Enter the user's first and last name. Set the User type as either Member or Guest.
+
+- **Job information:** Add any job-related information, such as the user's job title, department, or manager.
+
+- **Contact information:** Add any relevant contact information for the user.
+
+- **Parental controls:** For organizations like K-12 school districts, the user's age group may need to be provided. *Minors* are 12 and under, *Not adult* are 13-18 years old, and *Adults* are 18 and over. The combination of age group and consent provided by parent options determine the Legal age group classification. The Legal age group classification may limit the user's access and authority.
+
+- **Settings:** Specify the user's global location.
+
+Either select the **Review + create** button to create the new user or **Next: Assignments** to complete the next section.
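+
+These properties can also be set or updated after creation with Microsoft Graph PowerShell. The following is a minimal sketch, not the only supported approach; the user principal name and property values are placeholders, and the `User.ReadWrite.All` scope is assumed.
+
+```powershell
+# Update job information and the usage location for an existing user (placeholder values)
+Connect-MgGraph -Scopes "User.ReadWrite.All"
+
+Update-MgUser -UserId "chris.green@contoso.com" `
+    -JobTitle "Marketing Manager" `
+    -Department "Marketing" `
+    -UsageLocation "US"
+```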
+
+### Assignments
+
+You can assign the user to an administrative unit, group, or Azure AD role when the account is created. You can assign the user to up to 20 groups or roles. You can only assign the user to one administrative unit. Assignments can be added after the user is created.
+
+**To assign a group to the new user**:
+
+1. Select **+ Add group**.
+1. From the menu that appears, choose up to 20 groups from the list and select the **Select** button.
+1. Select the **Review + create** button.
+
+ ![Screenshot of the add group assignment process.](media/how-to-create-delete-users/add-group-assignment.png)
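+
+If you prefer to script this step, group membership can be added with Microsoft Graph PowerShell. This sketch assumes you already know the object IDs of the group and the user and that the `GroupMember.ReadWrite.All` scope has been granted; the IDs shown are placeholders.
+
+```powershell
+# Add the newly created user to a group by object ID (placeholder IDs)
+Connect-MgGraph -Scopes "GroupMember.ReadWrite.All"
+
+New-MgGroupMember -GroupId "<group-object-id>" -DirectoryObjectId "<user-object-id>"
+```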
+
+**To assign a role to the new user**:
+
+1. Select **+ Add role**.
+1. From the menu that appears, choose up to 20 roles from the list and select the **Select** button.
+1. Select the **Review + create** button.
+
+**To add an administrative unit to the new user**:
+
+1. Select **+ Add administrative unit**.
+1. From the menu that appears, choose one administrative unit from the list and select the **Select** button.
+1. Select the **Review + create** button.
+
+### Review and create
+
+The final tab captures several key details from the user creation process. Review the details and select the **Create** button if everything looks good.
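+
+If you need to automate user creation instead of using the portal, the same Basics fields can be supplied through Microsoft Graph PowerShell. This is a minimal sketch; every value shown is a placeholder and the `User.ReadWrite.All` scope is assumed.
+
+```powershell
+# Create a user with an initial password that must be changed at first sign-in (placeholder values)
+Connect-MgGraph -Scopes "User.ReadWrite.All"
+
+$passwordProfile = @{
+    Password                      = "<initial-password>"
+    ForceChangePasswordNextSignIn = $true
+}
+
+New-MgUser -DisplayName "Chris Green" `
+    -UserPrincipalName "chris.green@contoso.com" `
+    -MailNickname "chris.green" `
+    -PasswordProfile $passwordProfile `
+    -AccountEnabled
+```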
+
+## Invite an external user
+
+The overall process for inviting an external guest user is similar, except for a few details on the **Basics** tab and the email invitation process. You can't assign external users to administrative units.
+
+1. Sign in to the [Azure portal](https://portal.azure.com/) in the **User Administrator** role. A role with Guest Inviter privileges can also invite external users.
+
+1. Navigate to **Azure Active Directory** > **Users**.
+
+1. Select **Invite external user** from the menu.
+
+ ![Screenshot of the invite external user menu option.](media/how-to-create-delete-users/invite-external-user-menu.png)
+
+### Basics for external users
+
+In this section, you're inviting the guest to your tenant using *their email address*. If you need to create a guest user with a domain account, use the [create new user process](#create-a-new-user) but change the **User type** to **Guest**.
+
+- **Email**: Enter the email address for the guest user you're inviting.
+
+- **Display name**: Provide the display name.
+
+- **Invitation message**: Select the **Send invite message** checkbox to customize a brief message to the guest. Provide a Cc recipient, if necessary.
+
+![Screenshot of the invite external user Basics tab.](media/how-to-create-delete-users/invite-external-user-basics-tab.png)
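+
+The same invitation can be sent programmatically with Microsoft Graph PowerShell. This sketch mirrors the fields above; the email address and display name are placeholders and the `User.Invite.All` scope is assumed.
+
+```powershell
+# Invite an external guest by email address (placeholder values)
+Connect-MgGraph -Scopes "User.Invite.All"
+
+New-MgInvitation -InvitedUserEmailAddress "guest@fabrikam.com" `
+    -InvitedUserDisplayName "Guest User" `
+    -InviteRedirectUrl "https://myapps.microsoft.com" `
+    -SendInvitationMessage
+```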
+
+### Guest user invitations
+
+When you invite an external guest user by sending an email invitation, you can check the status of the invitation from the user's details.
+
+1. Go to **Azure AD** > **Users** and select the invited guest user.
+1. In the **My Feed** section, locate the **B2B collaboration** tile.
+ - If the invitation state is **PendingAcceptance**, select the **Resend invitation** link to send another email.
+ - You can also select the **Properties** for the user and view the **Invitation state**.
+
+![Screenshot of the user details with the invitation status options highlighted.](media/how-to-create-delete-users/external-user-invitation-state.png)
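+
+You can also read the invitation state programmatically. The following Microsoft Graph PowerShell sketch assumes the guest's email address is known (a placeholder value is shown) and that the `User.Read.All` scope has been granted.
+
+```powershell
+# Check whether an invited guest has accepted (PendingAcceptance vs. Accepted)
+Connect-MgGraph -Scopes "User.Read.All"
+
+Get-MgUser -Filter "mail eq 'guest@fabrikam.com'" `
+    -Property Id, DisplayName, ExternalUserState, ExternalUserStateChangeDateTime |
+    Select-Object DisplayName, ExternalUserState, ExternalUserStateChangeDateTime
+```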
+
+## Add other users
+
+There might be scenarios in which you want to manually create consumer accounts in your Azure Active Directory B2C (Azure AD B2C) directory. For more information about creating consumer accounts, see [Create and delete consumer users in Azure AD B2C](../../active-directory-b2c/manage-users-portal.md).
+
+If you have an environment with both Azure Active Directory (cloud) and Windows Server Active Directory (on-premises), you can add new users by syncing the existing user account data. For more information about hybrid environments and users, see [Integrate your on-premises directories with Azure Active Directory](../hybrid/whatis-hybrid-identity.md).
+
+## Delete a user
+
+You can delete an existing user using the Azure portal.
+
+- You must have a Global Administrator, Privileged Authentication Administrator, or User Administrator role assignment to delete users in your organization.
+- Global Administrators and Privileged Authentication Administrators can delete any users including other administrators.
+- User Administrators can delete any non-admin users, Helpdesk Administrators, and other User Administrators.
+- For more information, see [Administrator role permissions in Azure AD](../roles/permissions-reference.md).
+
+To delete a user, follow these steps:
+
+1. Sign in to the [Azure portal](https://portal.azure.com/) using one of the appropriate roles.
+
+1. Go to **Azure Active Directory** > **Users**.
+
+1. Search for and select the user you want to delete from your Azure AD tenant.
+
+1. Select **Delete user**.
+
+ ![Screenshot of the All users page with a user selected and the Delete button highlighted.](media/how-to-create-delete-users/delete-existing-user.png)
+
+The user is deleted and no longer appears on the **Users - All users** page. The user can be seen on the **Deleted users** page for the next 30 days and can be restored during that time. For more information about restoring a user, see [Restore or remove a recently deleted user using Azure Active Directory](active-directory-users-restore.md).
+
+When a user is deleted, any licenses consumed by the user are made available for other users.
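+
+Deletion can be scripted as well. This sketch resolves the user from a placeholder user principal name and then deletes the account; as noted above, the user remains restorable from the **Deleted users** page for 30 days. The `User.ReadWrite.All` scope is assumed.
+
+```powershell
+# Look up a user by UPN (placeholder value) and delete the account
+Connect-MgGraph -Scopes "User.ReadWrite.All"
+
+$user = Get-MgUser -Filter "userPrincipalName eq 'chris.green@contoso.com'"
+Remove-MgUser -UserId $user.Id
+```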
+
+>[!Note]
+>To update the identity, contact information, or job information for users whose source of authority is Windows Server Active Directory, you must use Windows Server Active Directory. After you complete the update, you must wait for the next synchronization cycle to complete before you'll see the changes.
+## Next steps
+
+* [Learn about B2B collaboration users](../external-identities/add-users-administrator.md)
+* [Review the default user permissions](users-default-permissions.md)
+* [Add a custom domain](add-custom-domain.md)
active-directory How To Customize Branding https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/how-to-customize-branding.md
The branding elements are called out in the following example. Text descriptions
1. **Favicon**: Small icon that appears on the left side of the browser tab.
1. **Header logo**: Space across the top of the web page, below the web browser navigation area.
-1. **Background image** and **page background color**: The entire space behind the sign-in box.
+1. **Background image**: The entire space behind the sign-in box.
+1. **Page background color**: The entire space behind the sign-in box.
1. **Banner logo**: The logo that appears in the upper-left corner of the sign-in box.
1. **Username hint and text**: The text that appears before a user enters their information.
1. **Sign-in page text**: Additional text you can add below the username field.
active-directory Whats New Archive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/whats-new-archive.md
The What's new in Azure Active Directory? release notes provide information abou
- Deprecated functionality
- Plans for changes
+## October 2022
+
+### General Availability - Upgrade Azure AD Provisioning agent to the latest version (version number: 1.1.977.0)
+++
+**Type:** Plan for change
+**Service category:** Provisioning
+**Product capability:** Azure AD Connect Cloud Sync
+
+Microsoft stops supporting Azure AD provisioning agent versions 1.1.818.0 and below starting February 1, 2023. If you're using Azure AD cloud sync, make sure you have the latest version of the agent. You can view info about the agent release history [here](../app-provisioning/provisioning-agent-release-version-history.md). You can download the latest version [here](https://download.msappproxy.net/Subscription/d3c8b69d-6bf7-42be-a529-3fe9c2e70c90/Connector/provisioningAgentInstaller).
+
+You can find out which version of the agent you're using as follows:
+
+1. Go to the domain server where the agent is installed.
+1. Right-click the Microsoft Azure AD Connect Provisioning Agent app.
+1. Select the **Details** tab to find the version number.
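+
+As an alternative to the steps above, you can check the installed version from PowerShell by querying the installed-programs registry keys. This is an unofficial sketch; it assumes the agent's registered display name contains "Provisioning Agent".
+
+```powershell
+# List installed programs that look like the provisioning agent and show their versions
+Get-ItemProperty 'HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall\*',
+                 'HKLM:\SOFTWARE\WOW6432Node\Microsoft\Windows\CurrentVersion\Uninstall\*' -ErrorAction SilentlyContinue |
+    Where-Object { $_.DisplayName -like '*Provisioning Agent*' } |
+    Select-Object DisplayName, DisplayVersion
+```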
+
+> [!NOTE]
+> Azure Active Directory (AD) Connect follows the [Modern Lifecycle Policy](/lifecycle/policies/modern). Changes for products and services under the Modern Lifecycle Policy may be more frequent and require customers to be alert for forthcoming modifications to their product or service.
+Products governed by the Modern Lifecycle Policy follow a [continuous support and servicing model](/lifecycle/overview/product-end-of-support-overview). Customers must take the latest update to remain supported. For products and services governed by the Modern Lifecycle Policy, Microsoft's policy is to provide a minimum of 30 days' notification when customers are required to take action to avoid significant degradation to the normal use of the product or service.
+++
+### General Availability - Add multiple domains to the same SAML/Ws-Fed based identity provider configuration for your external users
+++
+**Type:** New feature
+**Service category:** B2B
+**Product capability:** B2B/B2C
+
+An IT admin can now add multiple domains to a single SAML/WS-Fed identity provider configuration to invite users from multiple domains to authenticate from the same identity provider endpoint. For more information, see: [Federation with SAML/WS-Fed identity providers for guest users](../external-identities/direct-federation.md).
++++
+### General Availability - Limits on the number of configured API permissions for an application registration enforced starting in October 2022
+++
+**Type:** Plan for change
+**Service category:** Other
+**Product capability:** Developer Experience
+
+Starting at the end of October, the total number of required permissions for any single application registration must not exceed 400 permissions across all APIs. Applications exceeding the limit can't add more required permissions. The existing limit on the number of distinct APIs for which permissions are required remains unchanged and may not exceed 50 APIs.
+
+In the Azure portal, the required permissions list is under API Permissions within specific applications in the application registration menu. When using Microsoft Graph or Microsoft Graph PowerShell, the required permissions list is in the requiredResourceAccess property of an [application](/graph/api/resources/application) entity. For more information, see: [Validation differences by supported account types (signInAudience)](../develop/supported-accounts-validation.md).
++++
+### Public Preview - Conditional access Authentication strengths
+++
+**Type:** New feature
+**Service category:** Conditional Access
+**Product capability:** User Authentication
+
+We're announcing Public preview of Authentication strength, a Conditional Access control that allows administrators to specify which authentication methods can be used to access a resource. For more information, see: [Conditional Access authentication strength (preview)](../authentication/concept-authentication-strengths.md). You can use custom authentication strengths to restrict access by requiring specific FIDO2 keys using the Authenticator Attestation GUIDs (AAGUIDs), and apply this through conditional access policies. For more information, see: [FIDO2 security key advanced options](../authentication/concept-authentication-strengths.md#fido2-security-key-advanced-options).
+++
+### Public Preview - Conditional access authentication strengths for external identities
++
+**Type:** New feature
+**Service category:** B2B
+**Product capability:** B2B/B2C
+
+You can now require your business partner (B2B) guests across all Microsoft clouds to use specific authentication methods to access your resources with **Conditional Access Authentication Strength policies**. For more information, see: [Conditional Access: Require an authentication strength for external users](../conditional-access/howto-conditional-access-policy-authentication-strength-external.md).
++++
+### Generally Availability - Windows Hello for Business, Cloud Kerberos Trust deployment
+++
+**Type:** New feature
+**Service category:** Authentications (Logins)
+**Product capability:** User Authentication
+
+We're excited to announce the general availability of hybrid cloud Kerberos trust, a new Windows Hello for Business deployment model to enable a password-less sign-in experience. With this new model, we've made Windows Hello for Business easier to deploy than the existing key trust and certificate trust deployment models by removing the need for maintaining complicated public key infrastructure (PKI), and Azure Active Directory (AD) Connect synchronization wait times. For more information, see: [Hybrid Cloud Kerberos Trust Deployment](/windows/security/identity-protection/hello-for-business/hello-hybrid-cloud-kerberos-trust).
+++
+### General Availability - Device-based conditional access on Linux Desktops
+++
+**Type:** New feature
+**Service category:** Conditional Access
+**Product capability:** SSO
+
+This feature empowers users on Linux clients to register their devices with Azure AD, enroll into Intune management, and satisfy device-based Conditional Access policies when accessing their corporate resources.
+
+- Users can register their Linux devices with Azure AD
+- Users can enroll in Mobile Device Management (Intune), which can be used to provide compliance decisions based upon policy definitions to allow device based conditional access on Linux Desktops
+- If compliant, users can use Microsoft Edge Browser to enable Single-Sign on to M365/Azure resources and satisfy device-based Conditional Access policies.
++
+For more information, see:
+[Azure AD registered devices](../devices/concept-azure-ad-register.md).
+[Plan your Azure Active Directory device deployment](../devices/plan-device-deployment.md)
+++
+### General Availability - Deprecation of Azure Active Directory Multi-Factor Authentication.
+++
+**Type:** Deprecated
+**Service category:** MFA
+**Product capability:** Identity Security & Protection
+
+Beginning September 30, 2024, Azure Active Directory Multi-Factor Authentication Server deployments will no longer service multi-factor authentication (MFA) requests, which could cause authentications to fail for your organization. To ensure uninterrupted authentication services, and to remain in a supported state, organizations should migrate their users' authentication data to the cloud-based Azure Active Directory Multi-Factor Authentication service using the latest Migration Utility included in the most recent Azure Active Directory Multi-Factor Authentication Server update. For more information, see: [Migrate from MFA Server to Azure AD Multi-Factor Authentication](../authentication/how-to-migrate-mfa-server-to-azure-mfa.md).
+++
+### Public Preview - Lifecycle Workflows is now available
+++
+**Type:** New feature
+**Service category:** Lifecycle Workflows
+**Product capability:** Identity Governance
++
+We're excited to announce the public preview of Lifecycle Workflows, a new Identity Governance capability that allows customers to extend the user provisioning process and adds enterprise-grade user lifecycle management capabilities in Azure AD to modernize your identity lifecycle management process. With Lifecycle Workflows, you can:
+
+- Confidently configure and deploy custom workflows to onboard and offboard cloud employees at scale replacing your manual processes.
+- Automate out-of-the-box actions critical to required Joiner and Leaver scenarios and get rich reporting insights.
+- Extend workflows via Logic Apps integrations with custom tasks extensions for more complex scenarios.
+
+For more information, see: [What are Lifecycle Workflows? (Public Preview)](../governance/what-are-lifecycle-workflows.md).
+++
+### Public Preview - User-to-Group Affiliation recommendation for group Access Reviews
+++
+**Type:** New feature
+**Service category:** Access Reviews
+**Product capability:** Identity Governance
+
+This feature provides Machine Learning based recommendations to the reviewers of Azure AD Access Reviews to make the review experience easier and more accurate. The recommendation detects user affiliation with other users within the group, and applies the scoring mechanism we built by computing the user's average distance with other users in the group. For more information, see: [Review recommendations for Access reviews](../governance/review-recommendations-access-reviews.md).
+++
+### General Availability - Group assignment for SuccessFactors Writeback application
+++
+**Type:** New feature
+**Service category:** Provisioning
+**Product capability:** Outbound to SaaS Applications
+
+When configuring writeback of attributes from Azure AD to SAP SuccessFactors Employee Central, you can now specify the scope of users using Azure AD group assignment. For more information, see: [Tutorial: Configure attribute write-back from Azure AD to SAP SuccessFactors](../saas-apps/sap-successfactors-writeback-tutorial.md).
+++
+### General Availability - Number Matching for Microsoft Authenticator notifications
+++
+**Type:** New feature
+**Service category:** Microsoft Authenticator App
+**Product capability:** User Authentication
+
+To prevent accidental notification approvals, admins can now require users to enter the number displayed on the sign-in screen when approving an MFA notification in the Microsoft Authenticator app. We've also refreshed the Azure portal admin UX and Microsoft Graph APIs to make it easier for customers to manage Authenticator app feature roll-outs. As part of this update we have also added the highly requested ability for admins to exclude user groups from each feature.
+
+The number matching feature greatly up-levels the security posture of the Microsoft Authenticator app and protects organizations from MFA fatigue attacks. We highly encourage our customers to adopt this feature by applying the rollout controls we've built. Number matching will begin to be enabled for all users of the Microsoft Authenticator app starting February 27, 2023.
++
+For more information, see: [How to use number matching in multifactor authentication (MFA) notifications - Authentication methods policy](../authentication/how-to-mfa-number-match.md).
+++
+### General Availability - Additional context in Microsoft Authenticator notifications
+++
+**Type:** New feature
+**Service category:** Microsoft Authenticator App
+**Product capability:** User Authentication
+
+Reduce accidental approvals by showing users additional context in Microsoft Authenticator app notifications. Customers can enhance notifications with the following steps:
+
+- Application Context: This feature shows users which application they're signing into.
+- Geographic Location Context: This feature shows users their sign-in location based on the IP address of the device they're signing into.
+
+The feature is available for both MFA and Password-less Phone Sign-in notifications and greatly increases the security posture of the Microsoft Authenticator app. We've also refreshed the Azure portal Admin UX and Microsoft Graph APIs to make it easier for customers to manage Authenticator app feature roll-outs. As part of this update, we've also added the highly requested ability for admins to exclude user groups from certain features.
+
+We highly encourage our customers to adopt these critical security features to reduce accidental approvals of Authenticator notifications by end users.
++
+For more information, see: [How to use additional context in Microsoft Authenticator notifications - Authentication methods policy](../authentication/how-to-mfa-additional-context.md).
+++
+### New Federated Apps available in Azure AD Application gallery - October 2022
+++
+**Type:** New feature
+**Service category:** Enterprise Apps
+**Product capability:** 3rd Party Integration
+++
+In October 2022 we've added the following 15 new applications in our App gallery with Federation support:
+
+[Unifii](https://www.unifii.com.au/), [WaitWell Staff App](https://waitwell.c)
+
+You can also find the documentation for all the applications here: https://aka.ms/AppsTutorial.
+
+To list your application in the Azure AD app gallery, read the details here: https://aka.ms/AzureADAppRequest.
+++++
+### Public preview - New provisioning connectors in the Azure AD Application Gallery - October 2022
+
+**Type:** New feature
+**Service category:** App Provisioning
+**Product capability:** 3rd Party Integration
+
+You can now automate creating, updating, and deleting user accounts for these newly integrated apps:
+
+- [LawVu](../saas-apps/lawvu-provisioning-tutorial.md)
+
+For more information about how to better secure your organization by using automated user account provisioning, see: [Automate user provisioning to SaaS applications with Azure AD](../app-provisioning/user-provisioning.md).
+++ ## September 2022
active-directory Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/whats-new.md
For more information, see: [How to use number matching in multifactor authentica
Earlier, we announced our plan to bring IPv6 support to Microsoft Azure Active Directory (Azure AD), enabling our customers to reach the Azure AD services over IPv4, IPv6 or dual stack endpoints. This is just a reminder that we have started introducing IPv6 support into Azure AD services in a phased approach in late March 2023.
-If you utilize Conditional Access or Identity Protection, and have IPv6 enabled on any of your devices, you likely must take action to avoid impacting your users. For most customers, IPv4 won't completely disappear from their digital landscape, so we aren't planning to require IPv6 or to deprioritize IPv4 in any Azure AD features or services. We'll continue to share additional guidance on IPv6 enablement in Azure AD at this link: [IPv6 support in Azure Active Directory](https://learn.microsoft.com/troubleshoot/azure/active-directory/azure-ad-ipv6-support)
+If you utilize Conditional Access or Identity Protection, and have IPv6 enabled on any of your devices, you likely must take action to avoid impacting your users. For most customers, IPv4 won't completely disappear from their digital landscape, so we aren't planning to require IPv6 or to deprioritize IPv4 in any Azure AD features or services. We'll continue to share additional guidance on IPv6 enablement in Azure AD at this link: [IPv6 support in Azure Active Directory](/troubleshoot/azure/active-directory/azure-ad-ipv6-support).
Microsoft cloud settings let you collaborate with organizations from different M
- Microsoft Azure commercial and Microsoft Azure Government - Microsoft Azure commercial and Microsoft Azure China 21Vianet
-For more information about Microsoft cloud settings for B2B collaboration., see: [Microsoft cloud settings](../external-identities/cross-tenant-access-overview.md#microsoft-cloud-settings).
+For more information about Microsoft cloud settings for B2B collaboration, see [Microsoft cloud settings](../external-identities/cross-tenant-access-overview.md#microsoft-cloud-settings).
We continue to share additional guidance on IPv6 enablement in Azure AD at this
-## October 2022
-
-### General Availability - Upgrade Azure AD Provisioning agent to the latest version (version number: 1.1.977.0)
---
-**Type:** Plan for change
-**Service category:** Provisioning
-**Product capability:** Azure AD Connect Cloud Sync
-
-Microsoft stops support for Azure AD provisioning agent with versions 1.1.818.0 and below starting Feb 1,2023. If you're using Azure AD cloud sync, make sure you have the latest version of the agent. You can view info about the agent release history [here](../app-provisioning/provisioning-agent-release-version-history.md). You can download the latest version [here](https://download.msappproxy.net/Subscription/d3c8b69d-6bf7-42be-a529-3fe9c2e70c90/Connector/provisioningAgentInstaller)
-
-You can find out which version of the agent you're using as follows:
-
-1. Going to the domain server that you have the agent installed
-1. Right-click on the Microsoft Azure AD Connect Provisioning Agent app
-1. Select on ΓÇ£DetailsΓÇ¥ tab and you can find the version number there
-
-> [!NOTE]
-> Azure Active Directory (AD) Connect follows the [Modern Lifecycle Policy](/lifecycle/policies/modern). Changes for products and services under the Modern Lifecycle Policy may be more frequent and require customers to be alert for forthcoming modifications to their product or service.
-Product governed by the Modern Policy follow a [continuous support and servicing model](/lifecycle/overview/product-end-of-support-overview). Customers must take the latest update to remain supported. For products and services governed by the Modern Lifecycle Policy, Microsoft's policy is to provide a minimum 30 days' notification when customers are required to take action in order to avoid significant degradation to the normal use of the product or service.
---
-### General Availability - Add multiple domains to the same SAML/Ws-Fed based identity provider configuration for your external users
---
-**Type:** New feature
-**Service category:** B2B
-**Product capability:** B2B/B2C
-
-An IT admin can now add multiple domains to a single SAML/WS-Fed identity provider configuration to invite users from multiple domains to authenticate from the same identity provider endpoint. For more information, see: [Federation with SAML/WS-Fed identity providers for guest users](../external-identities/direct-federation.md).
----
-### General Availability - Limits on the number of configured API permissions for an application registration enforced starting in October 2022
---
-**Type:** Plan for change
-**Service category:** Other
-**Product capability:** Developer Experience
-
-In the end of October, the total number of required permissions for any single application registration must not exceed 400 permissions across all APIs. Applications exceeding the limit are unable to increase the number of permissions configured for. The existing limit on the number of distinct APIs for permissions required remains unchanged and may not exceed 50 APIs.
-
-In the Azure portal, the required permissions list is under API Permissions within specific applications in the application registration menu. When using Microsoft Graph or Microsoft Graph PowerShell, the required permissions list is in the requiredResourceAccess property of an [application](/graph/api/resources/application) entity. For more information, see: [Validation differences by supported account types (signInAudience)](../develop/supported-accounts-validation.md).
----
-### Public Preview - Conditional access Authentication strengths
---
-**Type:** New feature
-**Service category:** Conditional Access
-**Product capability:** User Authentication
-
-We're announcing Public preview of Authentication strength, a Conditional Access control that allows administrators to specify which authentication methods can be used to access a resource. For more information, see: [Conditional Access authentication strength (preview)](../authentication/concept-authentication-strengths.md). You can use custom authentication strengths to restrict access by requiring specific FIDO2 keys using the Authenticator Attestation GUIDs (AAGUIDs), and apply this through conditional access policies. For more information, see: [FIDO2 security key advanced options](../authentication/concept-authentication-strengths.md#fido2-security-key-advanced-options).
---
-### Public Preview - Conditional access authentication strengths for external identities
--
-**Type:** New feature
-**Service category:** B2B
-**Product capability:** B2B/B2C
-
-You can now require your business partner (B2B) guests across all Microsoft clouds to use specific authentication methods to access your resources with **Conditional Access Authentication Strength policies**. For more information, see: [Conditional Access: Require an authentication strength for external users](../conditional-access/howto-conditional-access-policy-authentication-strength-external.md).
----
-### Generally Availability - Windows Hello for Business, Cloud Kerberos Trust deployment
---
-**Type:** New feature
-**Service category:** Authentications (Logins)
-**Product capability:** User Authentication
-
-We're excited to announce the general availability of hybrid cloud Kerberos trust, a new Windows Hello for Business deployment model to enable a password-less sign-in experience. With this new model, weΓÇÖve made Windows Hello for Business easier to deploy than the existing key trust and certificate trust deployment models by removing the need for maintaining complicated public key infrastructure (PKI), and Azure Active Directory (AD) Connect synchronization wait times. For more information, see: [Hybrid Cloud Kerberos Trust Deployment](/windows/security/identity-protection/hello-for-business/hello-hybrid-cloud-kerberos-trust).
---
-### General Availability - Device-based conditional access on Linux Desktops
---
-**Type:** New feature
-**Service category:** Conditional Access
-**Product capability:** SSO
-
-This feature empowers users on Linux clients to register their devices with Azure AD, enroll into Intune management, and satisfy device-based Conditional Access policies when accessing their corporate resources.
--- Users can register their Linux devices with Azure AD-- Users can enroll in Mobile Device Management (Intune), which can be used to provide compliance decisions based upon policy definitions to allow device based conditional access on Linux Desktops -- If compliant, users can use Microsoft Edge Browser to enable Single-Sign on to M365/Azure resources and satisfy device-based Conditional Access policies.--
-For more information, see:
-[Azure AD registered devices](../devices/concept-azure-ad-register.md).
-[Plan your Azure Active Directory device deployment](../devices/plan-device-deployment.md)
---
-### General Availability - Deprecation of Azure Active Directory Multi-Factor Authentication.
---
-**Type:** Deprecated
-**Service category:** MFA
-**Product capability:** Identity Security & Protection
-
-Beginning September 30, 2024, Azure Active Directory Multi-Factor Authentication Server deployments will no longer service multi-factor authentication (MFA) requests, which could cause authentications to fail for your organization. To ensure uninterrupted authentication services, and to remain in a supported state, organizations should migrate their usersΓÇÖ authentication data to the cloud-based Azure Active Directory Multi-Factor Authentication service using the latest Migration Utility included in the most recent Azure Active Directory Multi-Factor Authentication Server update. For more information, see: [Migrate from MFA Server to Azure AD Multi-Factor Authentication](../authentication/how-to-migrate-mfa-server-to-azure-mfa.md).
---
-### Public Preview - Lifecycle Workflows is now available
---
-**Type:** New feature
-**Service category:** Lifecycle Workflows
-**Product capability:** Identity Governance
--
-We're excited to announce the public preview of Lifecycle Workflows, a new Identity Governance capability that allows customers to extend the user provisioning process, and adds enterprise grade user lifecycle management capabilities, in Azure AD to modernize your identity lifecycle management process. With Lifecycle Workflows, you can:
--- Confidently configure and deploy custom workflows to onboard and offboard cloud employees at scale replacing your manual processes.-- Automate out-of-the-box actions critical to required Joiner and Leaver scenarios and get rich reporting insights.-- Extend workflows via Logic Apps integrations with custom tasks extensions for more complex scenarios.-
-For more information, see: [What are Lifecycle Workflows? (Public Preview)](../governance/what-are-lifecycle-workflows.md).
---
-### Public Preview - User-to-Group Affiliation recommendation for group Access Reviews
---
-**Type:** New feature
-**Service category:** Access Reviews
-**Product capability:** Identity Governance
-
-This feature provides Machine Learning based recommendations to the reviewers of Azure AD Access Reviews to make the review experience easier and more accurate. The recommendation detects user affiliation with other users within the group, and applies the scoring mechanism we built by computing the userΓÇÖs average distance with other users in the group. For more information, see: [Review recommendations for Access reviews](../governance/review-recommendations-access-reviews.md).
---
-### General Availability - Group assignment for SuccessFactors Writeback application
---
-**Type:** New feature
-**Service category:** Provisioning
-**Product capability:** Outbound to SaaS Applications
-
-When configuring writeback of attributes from Azure AD to SAP SuccessFactors Employee Central, you can now specify the scope of users using Azure AD group assignment. For more information, see: [Tutorial: Configure attribute write-back from Azure AD to SAP SuccessFactors](../saas-apps/sap-successfactors-writeback-tutorial.md).
---
-### General Availability - Number Matching for Microsoft Authenticator notifications
---
-**Type:** New feature
-**Service category:** Microsoft Authenticator App
-**Product capability:** User Authentication
-
-To prevent accidental notification approvals, admins can now require users to enter the number displayed on the sign-in screen when approving an MFA notification in the Microsoft Authenticator app. We've also refreshed the Azure portal admin UX and Microsoft Graph APIs to make it easier for customers to manage Authenticator app feature roll-outs. As part of this update we have also added the highly requested ability for admins to exclude user groups from each feature.
-
-The number matching feature greatly up-levels the security posture of the Microsoft Authenticator app and protects organizations from MFA fatigue attacks. We highly encourage our customers to adopt this feature applying the rollout controls we have built. Number Matching will begin to be enabled for all users of the Microsoft Authenticator app starting February 27 2023.
--
-For more information, see: [How to use number matching in multifactor authentication (MFA) notifications - Authentication methods policy](../authentication/how-to-mfa-number-match.md).
---
-### General Availability - Additional context in Microsoft Authenticator notifications
---
-**Type:** New feature
-**Service category:** Microsoft Authenticator App
-**Product capability:** User Authentication
-
-Reduce accidental approvals by showing users additional context in Microsoft Authenticator app notifications. Customers can enhance notifications with the following steps:
--- Application Context: This feature shows users which application they're signing into.-- Geographic Location Context: This feature shows users their sign-in location based on the IP address of the device they're signing into. -
-The feature is available for both MFA and Password-less Phone Sign-in notifications and greatly increases the security posture of the Microsoft Authenticator app. We've also refreshed the Azure portal Admin UX and Microsoft Graph APIs to make it easier for customers to manage Authenticator app feature roll-outs. As part of this update, we've also added the highly requested ability for admins to exclude user groups from certain features.
-
-We highly encourage our customers to adopt these critical security features to reduce accidental approvals of Authenticator notifications by end users.
--
-For more information, see: [How to use additional context in Microsoft Authenticator notifications - Authentication methods policy](../authentication/how-to-mfa-additional-context.md).
---
-### New Federated Apps available in Azure AD Application gallery - October 2022
---
-**Type:** New feature
-**Service category:** Enterprise Apps
-**Product capability:** 3rd Party Integration
---
-In October 2022 we've added the following 15 new applications in our App gallery with Federation support:
-
-[Unifii](https://www.unifii.com.au/), [WaitWell Staff App](https://waitwell.c)
-
-You can also find the documentation of all the applications from here https://aka.ms/AppsTutorial,
-
-For listing your application in the Azure AD app gallery, read the details here https://aka.ms/AzureADAppRequest
-----
-### Public preview - New provisioning connectors in the Azure AD Application Gallery - October 2022
-
-**Type:** New feature
-**Service category:** App Provisioning
-**Product capability:** 3rd Party Integration
-
-You can now automate creating, updating, and deleting user accounts for these newly integrated apps:
--- [LawVu](../saas-apps/lawvu-provisioning-tutorial.md)-
-For more information about how to better secure your organization by using automated user account provisioning, see: [Automate user provisioning to SaaS applications with Azure AD](../app-provisioning/user-provisioning.md).
------
active-directory How To Lifecycle Workflow Sync Attributes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/how-to-lifecycle-workflow-sync-attributes.md
For more information on expressions, see [Reference for writing expressions for
The expression examples above use endDate for SAP and StatusHireDate for Workday. However, you may opt to use different attributes.
-For example, you might use StatusContinuesFirstDayOfWork instead of StatusHireDate for Workday. In this instance your expression would be:
+For example, you might use StatusContinuousFirstDayOfWork instead of StatusHireDate for Workday. In this instance your expression would be:
- `FormatDateTime([StatusContinuesFirstDayOfWork], , "yyyy-MM-ddzzz", "yyyyMMddHHmmss.fZ")`
+ `FormatDateTime([StatusContinuousFirstDayOfWork], , "yyyy-MM-ddzzz", "yyyyMMddHHmmss.fZ")`
The following table has a list of suggested attributes and their scenario recommendations.
active-directory Protected Actions Add https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/protected-actions-add.md
Previously updated : 04/10/2022 Last updated : 04/21/2023 # Add, test, or remove protected actions in Azure AD (preview)
Protected actions use a Conditional Access authentication context, so you must c
1. Create a new policy and select your authentication context.
- For more information, see [Conditional Access: Cloud apps, actions, and authentication context](../conditional-access/concept-conditional-access-cloud-apps.md).
+ For more information, see [Conditional Access: Cloud apps, actions, and authentication context](../conditional-access/concept-conditional-access-cloud-apps.md#authentication-context).
:::image type="content" source="media/protected-actions-add/policy-authentication-context.png" alt-text="Screenshot of New policy page to create a new policy with an authentication context." lightbox="media/protected-actions-add/policy-authentication-context.png":::
Protected actions use a Conditional Access authentication context, so you must c
To add protection actions, assign a Conditional Access policy to one or more permissions using a Conditional Access authentication context.
+1. Select **Azure Active Directory** > **Protect & secure** > **Conditional Access** > **Policies**.
+
+1. Make sure the state of the Conditional Access policy that you plan to use with your protected action is set to **On** and not **Off** or **Report-only**.
1. Select **Azure Active Directory** > **Roles & admins** > **Protected actions (Preview)**.

   :::image type="content" source="media/protected-actions-add/protected-actions-start.png" alt-text="Screenshot of Add protected actions page in Roles and administrators." lightbox="media/protected-actions-add/protected-actions-start.png":::
The user has previously satisfied policy. For example, the completed multifactor
Check the [Azure AD sign-in events](../conditional-access/troubleshoot-conditional-access.md) to troubleshoot. The sign-in events will include details about the session, including if the user has already completed multifactor authentication. When troubleshooting with the sign-in logs, it's also helpful to check the policy details page, to confirm an authentication context was requested.
+### Symptom - Policy is never satisfied
+
+When you attempt to satisfy the requirements of the Conditional Access policy, the policy is never satisfied and you're repeatedly prompted to reauthenticate.
+
+**Cause**
+
+The Conditional Access policy wasn't created or the policy state is **Off** or **Report-only**.
+
+**Solution**
+
+Create the Conditional Access policy if it doesn't exist, or set its state to **On**.
+
+If you aren't able to access the Conditional Access page because of the protected action and repeated requests to reauthenticate, use the following link to open the Conditional Access page.
+
+- [https://aka.ms/MSALProtectedActions](https://aka.ms/MSALProtectedActions)
+ ### Symptom - No access to add protected actions When signed in you don't have permissions to add or remove protected actions.
active-directory Protected Actions Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/protected-actions-overview.md
Here's the initial set of permissions:
## How do protected actions compare with Privileged Identity Management role activation?
-[Privileged Identity Management role activation](../privileged-identity-management/pim-how-to-change-default-settings.md) can also be assigned Conditional Access policies. This capability allows for policy enforcement only when a user activates a role, providing the most comprehensive protection. Protected actions are enforced only when a user takes an action that requires permissions with Conditional Access policy assigned to it. Protected actions allows for high impact permissions to be protected, independent of a user role. Privileged Identity Management role activation and protected actions can be used together, for the strongest coverage.
+[Privileged Identity Management role activation](../privileged-identity-management/pim-how-to-change-default-settings.md) can also be assigned Conditional Access policies. This capability allows for policy enforcement only when a user activates a role, providing the most comprehensive protection. Protected actions are enforced only when a user takes an action that requires permissions with Conditional Access policy assigned to it. Protected actions allow for high impact permissions to be protected, independent of a user role. Privileged Identity Management role activation and protected actions can be used together for stronger coverage.
## Steps to use protected actions
Here's the initial set of permissions:
1. **Configure Conditional Access policy**
- Configure a Conditional Access authentication context and an associated Conditional Access policy. Protected actions use an authentication context, which allows policy enforcement for fine-grain resources in a service, like Azure AD permissions. A good policy to start with is to require passwordless MFA and exclude an emergency account. [Learn more](../conditional-access/concept-conditional-access-cloud-apps.md#authentication-context)
+ Configure a Conditional Access authentication context and an associated Conditional Access policy. Protected actions use an authentication context, which allows policy enforcement for fine-grain resources in a service, like Azure AD permissions. A good policy to start with is to require passwordless MFA and exclude an emergency account. [Learn more](./protected-actions-add.md#configure-conditional-access-policy)
1. **Add protected actions**
active-directory Contentkalender Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/contentkalender-tutorial.md
Previously updated : 11/21/2022 Last updated : 04/21/2023 # Tutorial: Azure AD SSO integration with Contentkalender
-In this tutorial, you'll learn how to integrate Contentkalender with Azure Active Directory (Azure AD). When you integrate Contentkalender with Azure AD, you can:
+In this tutorial, you learn how to integrate Contentkalender with Azure Active Directory (Azure AD). When you integrate Contentkalender with Azure AD, you can:
* Control in Azure AD who has access to Contentkalender. * Enable your users to be automatically signed-in to Contentkalender with their Azure AD accounts.
Follow these steps to enable Azure AD SSO in the Azure portal.
1. On the **Basic SAML Configuration** section, perform the following steps:
- a. In the **Identifier** text box, type one of the following URLs:
-
- | **Identifier** |
- ||
- | `https://login.contentkalender.nl` |
- | `https://contentkalender-acc.bettywebblocks.com/` (only for testing purposes)|
-
- b. In the **Reply URL** text box, type one of the following URLs:
-
- | **Reply URL** |
- |--|
- | `https://login.contentkalender.nl/sso/saml/callback` |
- | `https://contentkalender-acc.bettywebblocks.com/sso/saml/callback` (only for testing purposes)|
+ a. In the **Identifier** text box, type the URL:
+ `https://login.contentkalender.nl`
+ b. In the **Reply URL** text box, type the URL:
+ `https://login.contentkalender.nl/sso/saml/callback`
+
c. In the **Sign-on URL** text box, type the URL: `https://login.contentkalender.nl/v2/login`
active-directory Fcm Hub Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/fcm-hub-tutorial.md
Title: 'Tutorial: Azure Active Directory single sign-on (SSO) integration with FCM HUB'
+ Title: 'Tutorial: Azure Active Directory SSO integration with FCM HUB'
description: Learn how to configure single sign-on between Azure Active Directory and FCM HUB.
Previously updated : 11/21/2022 Last updated : 04/19/2023
-# Tutorial: Azure Active Directory single sign-on (SSO) integration with FCM HUB
+# Tutorial: Azure Active Directory SSO integration with FCM HUB
-In this tutorial, you'll learn how to integrate FCM HUB with Azure Active Directory (Azure AD). When you integrate FCM HUB with Azure AD, you can:
+In this tutorial, you learn how to integrate FCM HUB with Azure Active Directory (Azure AD). When you integrate FCM HUB with Azure AD, you can:
* Control in Azure AD who has access to FCM HUB. * Enable your users to be automatically signed-in to FCM HUB with their Azure AD accounts.
Follow these steps to enable Azure AD SSO in the Azure portal.
- **Source Attribute**: PortalID, value provided by FCM 1. In the **SAML Signing Certificate** section, use the edit option to select or enter the following settings, and then select **Save**:
- - **Signing Option**: Sign SAML response
+ - **Signing Option**: Sign SAML response and Assertion
- **Signing Algorithm**: SHA-256 1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Certificate (Base64)** and select **Download** to download the certificate and save it on your computer.
active-directory Hashicorp Cloud Platform Hcp Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/hashicorp-cloud-platform-hcp-tutorial.md
Previously updated : 04/06/2023 Last updated : 04/19/2023 # Azure Active Directory SSO integration with HashiCorp Cloud Platform (HCP)
-In this article, you learn how to integrate HashiCorp Cloud Platform (HCP) with Azure Active Directory (Azure AD). HashiCorp Cloud platform hosting managed services of the developer tools created by HashiCorp, such Terraform, Vault, Boundary, and Consul. When you integrate HashiCorp Cloud Platform (HCP) with Azure AD, you can:
+In this article, you learn how to integrate HashiCorp Cloud Platform (HCP) with Azure Active Directory (Azure AD). HashiCorp Cloud Platform hosts managed services of the developer tools created by HashiCorp, such as Terraform, Vault, Boundary, and Consul. When you integrate HashiCorp Cloud Platform (HCP) with Azure AD, you can:
* Control in Azure AD who has access to HashiCorp Cloud Platform (HCP). * Enable your users to be automatically signed-in to HashiCorp Cloud Platform (HCP) with their Azure AD accounts.
To integrate Azure Active Directory with HashiCorp Cloud Platform (HCP), you nee
* An Azure AD user account. If you don't already have one, you can [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). * One of the following roles: Global Administrator, Cloud Application Administrator, Application Administrator, or owner of the service principal. * An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
-* HashiCorp Cloud Platform (HCP) single sign-on (SSO) enabled subscription.
+* HashiCorp Cloud Platform (HCP) single sign-on (SSO) enabled organization.
## Add application and assign a test user
Complete the following steps to enable Azure AD single sign-on in the Azure port
`https://portal.cloud.hashicorp.com/sign-in?conn-id=HCP-SSO-<HCP_ORG_ID>-samlp` > [!NOTE]
- > These values are not real. Update these values with the actual Identifier, Reply URL and Sign on URL. Contact [HashiCorp Cloud Platform (HCP) Client support team](mailto:support@hashicorp.com) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+ > These values are not real. Update these values with the actual Identifier, Reply URL, and Sign on URL. These values are also pregenerated for you on the "Setup SAML SSO" page within your Organization settings in HashiCorp Cloud Platform (HCP). For more information, see the SAML documentation on [HashiCorp's Developer site](https://developer.hashicorp.com/hcp/docs/hcp/security/sso/sso-aad). Contact the [HashiCorp Cloud Platform (HCP) Client support team](mailto:support@hashicorp.com) with any questions about this process. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
1. On the **Set-up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Certificate (Base64)** and select **Download** to download the certificate and save it on your computer.
Complete the following steps to enable Azure AD single sign-on in the Azure port
## Configure HashiCorp Cloud Platform (HCP) SSO
-To configure single sign-on on **HashiCorp Cloud Platform (HCP)** side, you need to send the downloaded **Certificate (Base64)** and appropriate copied URLs from Azure portal to [HashiCorp Cloud Platform (HCP) support team](mailto:support@hashicorp.com). They set this setting to have the SAML SSO connection set properly on both sides.
-
-### Create HashiCorp Cloud Platform (HCP) test user
-
-In this section, you create a user called Britta Simon at HashiCorp Cloud Platform (HCP). Work with [HashiCorp Cloud Platform (HCP) support team](mailto:support@hashicorp.com) to add the users in the HashiCorp Cloud Platform (HCP) platform. Users must be created and activated before you use single sign-on.
+To configure single sign-on on the **HashiCorp Cloud Platform (HCP)** side, you need to add a TXT verification record to your domain host, and then add the downloaded **Certificate (Base64)** and the **Login URL** copied from the Azure portal to the "Setup SAML SSO" page in your HashiCorp Cloud Platform (HCP) Organization settings. Refer to the SAML documentation provided on [HashiCorp's Developer site](https://developer.hashicorp.com/hcp/docs/hcp/security/sso/sso-aad). Contact the [HashiCorp Cloud Platform (HCP) Client support team](mailto:support@hashicorp.com) with any questions about this process.
## Test SSO
-In this section, you test your Azure AD single sign-on configuration with following options.
-
-* Click on **Test this application** in Azure portal. This will redirect to HashiCorp Cloud Platform (HCP) Sign-on URL where you can initiate the login flow.
-
-* Go to HashiCorp Cloud Platform (HCP) Sign-on URL directly and initiate the login flow from there.
-
-* You can use Microsoft My Apps. When you select the HashiCorp Cloud Platform (HCP) tile in the My Apps, this will redirect to HashiCorp Cloud Platform (HCP) Sign-on URL. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
+In the previous [Create and assign Azure AD test user](#create-and-assign-azure-ad-test-user) section, you created a user called B.Simon and assigned it to the HashiCorp Cloud Platform (HCP) app within the Azure Portal. This can now be used for testing the SSO connection. You may also use any account that is already associated with the HashiCorp Cloud Platform (HCP) app in the Azure Portal.
## Additional resources * [What is single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md) * [Plan a single sign-on deployment](../manage-apps/plan-sso-deployment.md).
+* [HashiCorp Cloud Platform (HCP) | Azure Active Directory SAML SSO Configuration](https://developer.hashicorp.com/hcp/docs/hcp/security/sso/sso-aad).
## Next steps
active-directory Hornbill Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/hornbill-tutorial.md
Previously updated : 11/21/2022 Last updated : 04/19/2023 # Tutorial: Azure AD SSO integration with Hornbill
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
1. In a different web browser window, log in to Hornbill as a Security Administrator.
-2. On the Home page, click **System**.
+2. On the Home page, click the **Configuration** settings icon at the bottom left of the page.
- ![Screenshot shows the Hornbill system.](./media/hornbill-tutorial/system.png "Hornbill system")
+ ![Screenshot shows the Hornbill system.](./media/hornbill-tutorial/settings.png "Hornbill system")
-3. Navigate to **Security**.
+3. Navigate to **Platform Configuration**.
- ![Screenshot shows the Hornbill security.](./media/hornbill-tutorial/security.png "Hornbill security")
+ ![Screenshot shows the Hornbill platform configuration.](./media/hornbill-tutorial/platform-configuration.png "Hornbill security")
-4. Click **SSO Profiles**.
+4. Click **SSO Profiles** under Security.
- ![Screenshot shows the Hornbill single.](./media/hornbill-tutorial/profile.png "Hornbill single")
+ ![Screenshot shows the Hornbill single.](./media/hornbill-tutorial/profiles.png "Hornbill single")
-5. On the right side of the page, click on **Add logo**.
+5. On the right side of the page, click on **+ Create New Profile**.
- ![Screenshot shows to add the logo.](./media/hornbill-tutorial/add-logo.png "Hornbill add")
+ ![Screenshot shows to add the logo.](./media/hornbill-tutorial/create-new-profile.png "Hornbill create")
-6. On the **Profile Details** bar, click on **Import SAML Meta logo**.
+6. On the **Profile Details** bar, click on the **Import IDP Meta Data** button.
- ![Screenshot shows Hornbill Meta logo.](./media/hornbill-tutorial/logo.png "Hornbill logo")
+ ![Screenshot shows Hornbill Meta logo.](./media/hornbill-tutorial/import-metadata.png "Hornbill logo")
-7. On the Pop-up page in the **URL** text box, paste the **App Federation Metadata Url**, which you have copied from Azure portal and click **Process**.
+7. On the pop-up, in the **URL** text box, paste the **App Federation Metadata Url**, which you have copied from Azure portal and click **Process**.
- ![Screenshot shows Hornbill process.](./media/hornbill-tutorial/process.png "Hornbill process")
+ ![Screenshot shows Hornbill process.](./media/hornbill-tutorial/metadata-url.png "Hornbill process")
8. After clicking **Process**, the values are automatically populated under the **Profile Details** section.
- ![Screenshot shows Hornbill profile](./media/hornbill-tutorial/page.png "Hornbill profile")
-
- ![Screenshot shows Hornbill details.](./media/hornbill-tutorial/services.png "Hornbill details")
-
- ![Screenshot shows Hornbill certificate.](./media/hornbill-tutorial/details.png "Hornbill certificate")
+ ![Screenshot shows Hornbill profile](./media/hornbill-tutorial/profile-details.png "Hornbill profile")
9. Click **Save Changes**.
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
In this section, a user called Britta Simon is created in Hornbill. Hornbill supports just-in-time user provisioning, which is enabled by default. There is no action item for you in this section. If a user doesn't already exist in Hornbill, a new one is created after authentication. > [!Note]
-> If you need to create a user manually, contact [Hornbill Client support team](https://www.hornbill.com/support/?request/).
+> If you need to create a user manually, contact [Hornbill Client support team](https://www.hornbill.com/support/?request/).
## Test SSO
active-directory Hubspot Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/hubspot-tutorial.md
To provision a user account in HubSpot:
![The Create user option in HubSpot](./media/hubspot-tutorial/teams.png)
-1. In the **Add email addess(es)** box, enter the email address of the user in the format brittasimon\@contoso.com, and then select **Next**.
+1. In the **Add email address(es)** box, enter the email address of the user in the format brittasimon\@contoso.com, and then select **Next**.
![The Add email address(es) box in the Create users section in HubSpot](./media/hubspot-tutorial/add-user.png)
active-directory Predict360 Sso Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/predict360-sso-tutorial.md
Previously updated : 04/06/2023 Last updated : 04/20/2023
Complete the following steps to enable Azure AD single sign-on in the Azure port
c. After the metadata file is successfully uploaded, the **Identifier** and **Reply URL** values get auto populated in Basic SAML Configuration section.
- d. If you wish to configure the application in **SP** initiated mode, then perform the following step:
+ d. Enter the customer code/key provided by 360factors in the **Relay State** textbox. Make sure the code is entered in lowercase. This is required for **IDP** initiated mode.
+
+ > [!Note]
+ > You will get the **Service Provider metadata file** from the [Predict360 SSO support team](mailto:support@360factors.com). If the **Identifier** and **Reply URL** values do not get auto populated, then fill in the values manually according to your requirement.
+
+ e. If you wish to configure the application in **SP** initiated mode, then perform the following step:
- In the **Sign on URL** textbox, type the URL:
- `https://paadt.360factors.com/predict360/login.do`.
+ In the **Sign on URL** textbox, type your customer specific URL using the following pattern:
+ `https://<customer-key>.360factors.com/predict360/login.do`
> [!Note]
- > You will get the **Service Provider metadata file** from the [Predict360 SSO support team](mailto:support@360factors.com). If the **Identifier** and **Reply URL** values do not get auto populated, then fill in the values manually according to your requirement.
+ > This URL is shared by the 360factors team. Replace `<customer-key>` with your customer key, which is also provided by the 360factors team.
1. On the **Set-up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Federation Metadata XML** and select **Download** to download the certificate and save it on your computer. ![Screenshot shows the Certificate download link.](common/metadataxml.png "Certificate")
+1. Find **Certificate (Raw)** in the **SAML Signing Certificate** section, and select **Download** to download the certificate and save it on your computer.
+
+ ![Screenshot shows the Certificate Raw download link.](common/certificateraw.png "Raw Certificate")
+ 1. On the **Set up Predict360 SSO** section, copy the appropriate URL(s) based on your requirement. ![Screenshot shows to copy configuration appropriate URL.](common/copy-configuration-urls.png "Metadata")
-## Configure Predict360 SSO SSO
+## Configure Predict360 SSO
-To configure single sign-on on **Predict360 SSO** side, you need to send the downloaded **Federation Metadata XML** and appropriate copied URLs from Azure portal to [Predict360 SSO support team](mailto:support@360factors.com). They set this setting to have the SAML SSO connection set properly on both sides.
+To configure single sign-on on **Predict360 SSO** side, you need to send the downloaded **Federation Metadata XML**, **Certificate (Raw)** and appropriate copied URLs from Azure portal to [Predict360 SSO support team](mailto:support@360factors.com). They set this setting to have the SAML SSO connection set properly on both sides.
### Create Predict360 SSO test user
active-directory Workday Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/workday-tutorial.md
Previously updated : 11/21/2022 Last updated : 04/18/2023
Follow these steps to enable Azure AD SSO in the Azure portal.
1. On the **Select a Single sign-on method** page, select **SAML**. 1. On the **Set up Single Sign-On with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
- ![Edit Basic SAML Configuration](common/edit-urls.png)
+ ![Screenshot showing Edit Basic SAML Configuration.](common/edit-urls.png)
1. On the **Basic SAML Configuration** page, enter the values for the following fields:
Follow these steps to enable Azure AD SSO in the Azure portal.
> These values are not real. Update these values with the actual Sign-on URL, Reply URL, and Logout URL. Your reply URL must have a subdomain (for example: www, wd2, wd3, wd3-impl, wd5, wd5-impl). > Using something like `http://www.myworkday.com` works but `http://myworkday.com` does not. Contact [Workday Client support team](https://www.workday.com/en-us/partners-services/services/support.html) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
-1. Your Workday application expects the SAML assertions in a specific format, which requires you to add custom attribute mappings to your SAML token attributes configuration. The following screenshot shows the list of default attributes, where as **nameidentifier** is mapped with **user.userprincipalname**. Workday application expects **nameidentifier** to be mapped with **user.mail**, **UPN**, etc., so you need to edit the attribute mapping by clicking on **Edit** icon and change the attribute mapping.
+1. Your Workday application expects the SAML assertions in a specific format, which requires you to add custom attribute mappings to your SAML token attributes configuration. The following screenshot shows the list of default attributes, whereas **nameidentifier** is mapped with **user.userprincipalname**. Workday application expects **nameidentifier** to be mapped with **user.mail**, **UPN**, etc., so you need to edit the attribute mapping by clicking on **Edit** icon and change the attribute mapping.
![Screenshot shows User Attributes with the Edit icon selected.](common/edit-attribute.png)
Follow these steps to enable Azure AD SSO in the Azure portal.
1. On the **Set up Single Sign-On with SAML** page, in the **SAML Signing Certificate** section, find **Federation Metadata XML** and select **Download** to download the certificate and save it on your computer.
- ![The Certificate download link](common/metadataxml.png)
+ ![Screenshot showing The Certificate download link.](common/metadataxml.png)
1. To modify the **Signing** options as per your requirement, click **Edit** button to open **SAML Signing Certificate** dialog.
- ![Certificate](common/edit-certificate.png)
-
- ![SAML Signing Certificate](./media/workday-tutorial/signing-option.png)
+ ![Screenshot showing Certificate.](common/edit-certificate.png)
a. Select **Sign SAML response and assertion** for **Signing Option**.
+ ![Screenshot showing SAML Signing Certificate.](./media/workday-tutorial/signing-option.png)
+ b. Click **Save** 1. On the **Set up Workday** section, copy the appropriate URL(s) based on your requirement.
- ![Copy configuration URLs](common/copy-configuration-urls.png)
+ ![Screenshot showing Copy configuration URLs.](common/copy-configuration-urls.png)
### Create an Azure AD test user
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
1. In the app's overview page, find the **Manage** section and select **Users and groups**. 1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog. 1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
-1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been setup for this app, you see "Default Access" role selected.
+1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see "Default Access" role selected.
1. In the **Add Assignment** dialog, click the **Assign** button. ## Configure Workday
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
1. In the **Search box**, search with the name **Edit Tenant Setup ΓÇô Security** on the top left side of the home page.
- ![Edit Tenant Security](./media/workday-tutorial/search-box.png "Edit Tenant Security")
+ ![Screenshot showing Edit Tenant Security.](./media/workday-tutorial/search-box.png "Edit Tenant Security")
1. In the **SAML Setup** section, click on **Import Identity Provider**.
- ![SAML Setup](./media/workday-tutorial/saml-setup.png "SAML Setup")
+ ![Screenshot showing SAML Setup.](./media/workday-tutorial/saml-setup.png "SAML Setup")
1. In **Import Identity Provider** section, perform the below steps:
- ![Importing Identity Provider](./media/workday-tutorial/import-identity-provider.png)
+ ![Screenshot showing Importing Identity Provider.](./media/workday-tutorial/import-identity-provider.png)
a. Enter an **Identity Provider Name**, such as `AzureAD`, in the textbox.
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
c. Click on **Select files** to upload the downloaded **Federation Metadata XML** file.
- d. Click on **OK** and then **Done**.
+ d. Click on **OK**.
-1. After clicking **Done**, a new row will be added in the **SAML Identity Providers** and then you can add the below steps for the newly created row.
+1. After clicking **OK**, a new row is added in the **SAML Identity Providers** section. Perform the following steps for the newly created row.
- ![SAML Identity Providers.](./media/workday-tutorial/saml-identity-providers.png "SAML Identity Providers")
+ ![Screenshot showing SAML Identity Providers.](./media/workday-tutorial/saml-identity-providers.png "SAML Identity Providers")
a. Click on **Enable IDP Initiated Logout** checkbox.
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
f. In the **Service Provider ID** textbox, type **http://www.workday.com**.
- g Select **Do Not Deflate SP-initiated Authentication Request**.
-
-1. Perform the following steps in the below image.
-
- ![Workday](./media/workday-tutorial/service-provider.png "SAML Identity Providers")
-
- a. In the **Service Provider ID (Will be Deprecated)** textbox, type **http://www.workday.com**.
-
- b. In the **IDP SSO Service URL (Will be Deprecated)** textbox, type **Login URL** value.
-
- c. Select **Do Not Deflate SP-initiated Authentication Request (Will be Deprecated)**.
+ g. Select **Do Not Deflate SP-initiated Authentication Request**.
- d. For **Authentication Request Signature Method**, select **SHA256**.
+ h. Click **OK**.
- e. Click **OK**.
+ i. If the task was completed successfully, click **Done**.
> [!NOTE] > Please ensure you set up single sign-on correctly. If you enable single sign-on with an incorrect setup, you may not be able to enter the application with your credentials and may get locked out. In this situation, Workday provides a backup login URL where users can sign in with their normal username and password in the following format: [Your Workday URL]/login.flex?redirect=n
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
1. In the **Directory** page, select **Find Workers** in view tab.
- ![Find workers](./media/workday-tutorial/user-directory.png)
+ ![Screenshot showing Find workers.](./media/workday-tutorial/user-directory.png)
1. In the **Find Workers** page, select the user from the results. 1. In the following page, select **Job > Worker Security**. The **Workday account** must match the Azure Active Directory **Name ID** value.
- ![Worker Security](./media/workday-tutorial/worker-security.png)
+ ![Screenshot showing Worker Security.](./media/workday-tutorial/worker-security.png)
> [!NOTE] > For more information on how to create a workday test user, please contact [Workday Client support team](https://www.workday.com/en-us/partners-services/services/support.html).
active-directory Linkedin Employment Verification https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/linkedin-employment-verification.md
+
+ Title: LinkedIn employment verification
+description: A design pattern describing how to configure employment verification using LinkedIn
++++++ Last updated : 04/21/2023+++
+# LinkedIn employment verification
+
+If your organization wants its employees to get verified on LinkedIn, follow these steps:
+
+1. Set up your Microsoft Entra Verified ID service by following these [instructions](verifiable-credentials-configure-tenant.md).
+1. [Create](how-to-use-quickstart-verifiedemployee.md#create-a-verified-employee-credential) a Verified ID Employee credential.
+1. Configure the LinkedIn company page with your organization's DID (decentralized identifier) and the URL of the custom Webapp.
+1. Once you deploy the updated LinkedIn mobile app, your employees can get verified.
+
+>[!NOTE]
+> Review LinkedIn's documentation for information on [verifications on LinkedIn profiles](https://www.linkedin.com/help/linkedin/answer/a1359065).
+
+## Deploying custom Webapp
+
+Deploying this custom webapp from [GitHub](https://github.com/Azure-Samples/VerifiedEmployeeIssuance) gives an administrator control over who can get verified and over which information is shared with LinkedIn.
+There are two reasons to deploy the custom webapp for LinkedIn Employment verification.
+
+1. You need control over who can get verified on LinkedIn. The webapp allows you to use user assignments to grant access.
+1. You want more control over the issuance of the Verified Employee ID. By default, the Verified Employee ID contains a few claims:
+
+ - ```firstname```
+ - ```lastname```
+ - ```displayname```
+ - ```jobtitle```
+ - ```upn```
+ - ```email```
+ - ```photo```
+
+>[!NOTE]
+>The web app can be modified to remove claims, for example, you may choose to remove the photo claim.
+
+Installation instructions for the Webapp can be found in the [GitHub repository](https://github.com/Azure-Samples/VerifiedEmployeeIssuance/blob/main/ReadmeFiles/Deployment.md).
+
+## Architecture overview
+
+Once the administrator configures the company page on LinkedIn, employees can get verified. Below are the high-level steps for LinkedIn integration:
+
+1. The user starts the LinkedIn mobile app.
+1. The mobile app retrieves information from the LinkedIn backend, checks whether the company is enabled for employment verification, and retrieves a URL to the custom Webapp.
+1. If the company is enabled, the user can tap on the verify employment link, and the user is sent to the Webapp in a web view.
+1. The user needs to provide their corporate credentials to sign in.
+1. The Webapp retrieves the user profile from Microsoft Graph, including ```firstname```, ```lastname```, ```displayname```, ```jobtitle```, ```upn```, ```email```, and ```photo```, and calls the Microsoft Entra Verified ID service with the profile information.
+1. The Microsoft Entra Verified ID service creates a verifiable credentials issuance request and returns the URL of that specific request.
+1. The Webapp redirects back to the LinkedIn app with this specific URL.
+1. The LinkedIn app wallet communicates with the Microsoft Entra Verified ID service to get the Verified Employment VC issued into the wallet, which is part of the LinkedIn mobile app.
+1. The LinkedIn app then verifies the received verifiable credential.
+1. If the verification is completed, LinkedIn changes the status to 'verified' in its backend system, and the status is visible to other users of LinkedIn.
+
+The diagram below shows the dataflow of the entire solution.
+
+ ![Diagram showing a high-level flow.](media/linkedin-employment-verification/linkedin-employee-verification.png)
++
+## Frequently asked questions
+
+### Can I use Microsoft Authenticator to store my Employee Verified ID and use it to get verified on LinkedIn?
+
+Currently, the solution works through the embedded webview. In the future, LinkedIn will allow the use of Microsoft Authenticator or any compatible custom wallet to verify employment. The myaccount page will also be updated to allow issuance of the Verified Employee ID to Microsoft Authenticator.
+
+### How do users sign-in?
+
+The Webapp is protected using Microsoft Entra Azure Active Directory. Users sign in according to the administrator's policy: passwordless, regular username and password, with or without MFA, and so on. This proves that a user is allowed to be issued a Verified Employee ID.
+
+### What happens when an employee leaves the organization?
+
+Nothing by default. You can choose to revoke the Verified Employee ID, but currently LinkedIn isn't checking for that status.
+
+### What happens when my Verified Employee ID expires?
+
+LinkedIn asks you to get verified again. If you don't, the verified checkmark is removed from your profile.
+
+### Can former employees use this feature to get verified?
+
+Currently this option only verifies current employment.
advisor Advisor Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/advisor/advisor-release-notes.md
Learn what's new in the service. These items may be release notes, videos, blog
Customers can now improve the relevance of recommendations to make them more actionable, resulting in additional cost savings. The right sizing recommendations help optimize costs by identifying idle or underutilized virtual machines based on their CPU, memory, and network activity over the default lookback period of seven days.
-Now, with this latest update, customers can adjust the default look back period to get recommendations based on 14, 21,30, 60, or even 90 days of use. The configuration can be applied at the subscription level. This is especially useful when the workloads have biweekly or monthly peaks (such as with payroll applications).
+Now, with this latest update, customers can adjust the default look back period to get recommendations based on 14, 21, 30, 60, or even 90 days of use. The configuration can be applied at the subscription level. This is especially useful when the workloads have biweekly or monthly peaks (such as with payroll applications).
To learn more, visit [Optimize virtual machine (VM) or virtual machine scale set (VMSS) spend by resizing or shutting down underutilized instances](advisor-cost-recommendations.md#optimize-virtual-machine-vm-or-virtual-machine-scale-set-vmss-spend-by-resizing-or-shutting-down-underutilized-instances).
aks Azure Disk Csi https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-disk-csi.md
Title: Use Container Storage Interface (CSI) driver for Azure Disks on Azure Kubernetes Service (AKS) description: Learn how to use the Container Storage Interface (CSI) driver for Azure Disks in an Azure Kubernetes Service (AKS) cluster. Previously updated : 04/12/2023 Last updated : 04/19/2023 # Use the Azure Disks Container Storage Interface (CSI) driver in Azure Kubernetes Service (AKS)
In addition to in-tree driver features, Azure Disk CSI driver supports the follo
> [!NOTE] > Depending on the VM SKU that's being used, the Azure Disk CSI driver might have a per-node volume limit. For some powerful VMs (for example, 16 cores), the limit is 64 volumes per node. To identify the limit per VM SKU, review the **Max data disks** column for each VM SKU offered. For a list of VM SKUs offered and their corresponding detailed capacity limits, see [General purpose virtual machine sizes][general-purpose-machine-sizes].
-## Storage class driver dynamic disks parameters
-
-|Name | Meaning | Available Value | Mandatory | Default value
-| | | | |
-|skuName | Azure Disks storage account type (alias: `storageAccountType`)| `Standard_LRS`, `Premium_LRS`, `StandardSSD_LRS`, `UltraSSD_LRS`, `Premium_ZRS`, `StandardSSD_ZRS`, `PremiumV2_LRS` (`PremiumV2_LRS` only supports `None` caching mode) | No | `StandardSSD_LRS`|
-|fsType | File System Type | `ext4`, `ext3`, `ext2`, `xfs`, `btrfs` for Linux, `ntfs` for Windows | No | `ext4` for Linux, `ntfs` for Windows|
-|cachingMode | [Azure Data Disk Host Cache Setting](../virtual-machines/windows/premium-storage-performance.md#disk-caching) | `None`, `ReadOnly`, `ReadWrite` | No | `ReadOnly`|
-|location | Specify Azure region where Azure Disks will be created | `eastus`, `westus`, etc. | No | If empty, driver will use the same location name as current AKS cluster|
-|resourceGroup | Specify the resource group where the Azure Disks will be created | Existing resource group name | No | If empty, driver will use the same resource group name as current AKS cluster|
-|DiskIOPSReadWrite | [UltraSSD disk](../virtual-machines/linux/disks-ultra-ssd.md) IOPS Capability (minimum: 2 IOPS/GiB ) | 100~160000 | No | `500`|
-|DiskMBpsReadWrite | [UltraSSD disk](../virtual-machines/linux/disks-ultra-ssd.md) Throughput Capability(minimum: 0.032/GiB) | 1~2000 | No | `100`|
-|LogicalSectorSize | Logical sector size in bytes for Ultra disk. Supported values are 512 ad 4096. 4096 is the default. | `512`, `4096` | No | `4096`|
-|tags | Azure Disk [tags](../azure-resource-manager/management/tag-resources.md) | Tag format: `key1=val1,key2=val2` | No | ""|
-|diskEncryptionSetID | ResourceId of the disk encryption set to use for [enabling encryption at rest](../virtual-machines/windows/disk-encryption.md) | format: `/subscriptions/{subs-id}/resourceGroups/{rg-name}/providers/Microsoft.Compute/diskEncryptionSets/{diskEncryptionSet-name}` | No | ""|
-|diskEncryptionType | Encryption type of the disk encryption set. | `EncryptionAtRestWithCustomerKey`(by default), `EncryptionAtRestWithPlatformAndCustomerKeys` | No | ""|
-|writeAcceleratorEnabled | [Write Accelerator on Azure Disks](../virtual-machines/windows/how-to-enable-write-accelerator.md) | `true`, `false` | No | ""|
-|networkAccessPolicy | NetworkAccessPolicy property to prevent generation of the SAS URI for a disk or a snapshot | `AllowAll`, `DenyAll`, `AllowPrivate` | No | `AllowAll`|
-|diskAccessID | Azure Resource ID of the DiskAccess resource to use private endpoints on disks | | No | ``|
-|enableBursting | [Enable on-demand bursting](../virtual-machines/disk-bursting.md) beyond the provisioned performance target of the disk. On-demand bursting should only be applied to Premium disk and when the disk size > 512 GB. Ultra and shared disk isn't supported. Bursting is disabled by default. | `true`, `false` | No | `false`|
-|useragent | User agent used for [customer usage attribution](../marketplace/azure-partner-customer-usage-attribution.md)| | No | Generated Useragent formatted `driverName/driverVersion compiler/version (OS-ARCH)`|
-|enableAsyncAttach | Allow multiple disk attach operations (in batch) on one node in parallel.<br> While this parameter can speed up disk attachment, you may encounter Azure API throttling limit when there are large number of volume attachments. | `true`, `false` | No | `false`|
-|subscriptionID | Specify Azure subscription ID where the Azure Disks is created. | Azure subscription ID | No | If not empty, `resourceGroup` must be provided.|
- ## Use CSI persistent volumes with Azure Disks A [persistent volume](concepts-storage.md#persistent-volumes) (PV) represents a piece of storage that's provisioned for use with Kubernetes pods. A PV can be used by one or many pods and can be dynamically or statically provisioned. This article shows you how to dynamically create PVs with Azure disk for use by a single pod in an AKS cluster. For static provisioning, see [Create a static volume with Azure Disks](azure-csi-disk-storage-provision.md#statically-provision-a-volume).
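To make the dynamic provisioning flow concrete, here's a minimal sketch of a custom storage class and persistent volume claim that request an Azure disk through the CSI driver. The class name, SKU, caching mode, and size are illustrative values, not settings taken from this article.

```yaml
# Hypothetical example: a StorageClass backed by the Azure Disks CSI driver and a PVC that
# dynamically provisions a Premium SSD through it. Names, SKU, and size are illustrative.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-premium-readonly-cache   # illustrative name
provisioner: disk.csi.azure.com          # Azure Disks CSI driver
parameters:
  skuName: Premium_LRS                   # disk storage account type
  cachingMode: ReadOnly                  # host caching mode
reclaimPolicy: Delete
allowVolumeExpansion: true
volumeBindingMode: WaitForFirstConsumer  # provision the disk where the pod is scheduled
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-azuredisk                    # illustrative name
spec:
  accessModes:
    - ReadWriteOnce                      # an Azure disk attaches to a single node at a time
  storageClassName: managed-premium-readonly-cache
  resources:
    requests:
      storage: 10Gi
```

Applying a manifest like this with `kubectl apply -f` and referencing the claim from a pod's volume would cause the driver to create and attach the disk when the pod is scheduled.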
aks Azure Files Csi https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-files-csi.md
Title: Use Container Storage Interface (CSI) driver for Azure Files on Azure Kubernetes Service (AKS) description: Learn how to use the Container Storage Interface (CSI) driver for Azure Files in an Azure Kubernetes Service (AKS) cluster. Previously updated : 04/11/2023 Last updated : 04/19/2023 # Use Azure Files Container Storage Interface (CSI) driver in Azure Kubernetes Service (AKS)
In addition to the original in-tree driver features, Azure File CSI driver suppo
- [Private endpoint][private-endpoint-overview] - Creating large mount of file shares in parallel.
-## Storage class driver dynamic parameters
-
-|Name | Meaning | Available Value | Mandatory | Default value
-| | | | |
-|skuName | Azure Files storage account type (alias: `storageAccountType`)| `Standard_LRS`, `Standard_ZRS`, `Standard_GRS`, `Standard_RAGRS`, `Standard_RAGZRS`,`Premium_LRS`, `Premium_ZRS` | No | `StandardSSD_LRS`<br> Minimum file share size for Premium account type is 100 GiB.<br> ZRS account type is supported in limited regions.<br> NFS file share only supports Premium account type.|
-|location | Specify Azure region where Azure storage account will be created. | For example, `eastus`. | No | If empty, driver uses the same location name as current AKS cluster.|
-|resourceGroup | Specify the resource group where the Azure Disks will be created. | Existing resource group name | No | If empty, driver uses the same resource group name as current AKS cluster.|
-|shareName | Specify Azure file share name | Existing or new Azure file share name. | No | If empty, driver generates an Azure file share name. |
-|shareNamePrefix | Specify Azure file share name prefix created by driver. | Share name can only contain lowercase letters, numbers, hyphens, and length should be fewer than 21 characters. | No |
-|folderName | Specify folder name in Azure file share. | Existing folder name in Azure file share. | No | If folder name does not exist in file share, mount will fail. |
-|shareAccessTier | [Access tier for file share][storage-tiers] | General purpose v2 account can choose between `TransactionOptimized` (default), `Hot`, and `Cool`. Premium storage account type for file shares only. | No | Empty. Use default setting for different storage account types.|
-|server | Specify Azure storage account server address | Existing server address, for example `accountname.privatelink.file.core.windows.net`. | No | If empty, driver uses default `accountname.file.core.windows.net` or other sovereign cloud account address. |
-|disableDeleteRetentionPolicy | Specify whether disable DeleteRetentionPolicy for storage account created by driver. | `true` or `false` | No | `false` |
-|allowBlobPublicAccess | Allow or disallow public access to all blobs or containers for storage account created by driver. | `true` or `false` | No | `false` |
-|requireInfraEncryption | Specify whether or not the service applies a secondary layer of encryption with platform managed keys for data at rest for storage account created by driver. | `true` or `false` | No | `false` |
-|networkEndpointType | Specify network endpoint type for the storage account created by driver. If `privateEndpoint` is specified, a private endpoint will be created for the storage account. For other cases, a service endpoint will be created by default. | "",`privateEndpoint`| No | "" |
-|storageEndpointSuffix | Specify Azure storage endpoint suffix. | `core.windows.net`, `core.chinacloudapi.cn`, etc. | No | If empty, driver uses default storage endpoint suffix according to cloud environment. For example, `core.windows.net`. |
-|tags | [tags][tag-resources] are created in new storage account. | Tag format: 'foo=aaa,bar=bbb' | No | "" |
-|matchTags | Match tags when driver tries to find a suitable storage account. | `true` or `false` | No | `false` |
-| | **Following parameters are only for SMB protocol** | | |
-|subscriptionID | Specify Azure subscription ID where Azure file share is created. | Azure subscription ID | No | If not empty, `resourceGroup` must be provided. |
-|storeAccountKey | Specify whether to store account key to Kubernetes secret. | `true` or `false`<br>`false` means driver leverages kubelet identity to get account key. | No | `true` |
-|secretName | Specify secret name to store account key. | | No |
-|secretNamespace | Specify the namespace of secret to store account key. <br><br> **Note:** <br> If `secretNamespace` isn't specified, the secret is created in the same namespace as the pod. | `default`,`kube-system`, etc | No | Pvc namespace, for example `csi.storage.k8s.io/pvc/namespace` |
-|useDataPlaneAPI | Specify whether to use [data plane API][data-plane-api] for file share create/delete/resize. This could solve the SRP API throttling issue because the data plane API has almost no limit, while it would fail when there is firewall or Vnet setting on storage account. | `true` or `false` | No | `false` |
-| | **Following parameters are only for NFS protocol** | | |
-|rootSquashType | Specify root squashing behavior on the share. The default is `NoRootSquash` | `AllSquash`, `NoRootSquash`, `RootSquash` | No |
-|mountPermissions | Mounted folder permissions. The default is `0777`. If set to `0`, driver doesn't perform `chmod` after mount | `0777` | No |
-| | **Following parameters are only for vnet setting, e.g. NFS, private endpoint** | | |
-|vnetResourceGroup | Specify Vnet resource group where virtual network is defined. | Existing resource group name. | No | If empty, driver uses the `vnetResourceGroup` value in Azure cloud config file. |
-|vnetName | Virtual network name | Existing virtual network name. | No | If empty, driver uses the `vnetName` value in Azure cloud config file. |
-|subnetName | Subnet name | Existing subnet name of the agent node. | No | If empty, driver uses the `subnetName` value in Azure cloud config file. |
-|fsGroupChangePolicy | Indicates how volume's ownership is changed by the driver. Pod `securityContext.fsGroupChangePolicy` is ignored. | `OnRootMismatch` (default), `Always`, `None` | No | `OnRootMismatch`|
- ## Use a persistent volume with Azure Files A [persistent volume (PV)][persistent-volume] represents a piece of storage that's provisioned for use with Kubernetes pods. A PV can be used by one or many pods and can be dynamically or statically provisioned. If multiple pods need concurrent access to the same storage volume, you can use Azure Files to connect by using the [Server Message Block (SMB)][smb-overview] or [NFS protocol][nfs-overview]. This article shows you how to dynamically create an Azure Files share for use by multiple pods in an AKS cluster. For static provisioning, see [Manually create and use a volume with an Azure Files share][statically-provision-a-volume].
provisioner: file.csi.azure.com
allowVolumeExpansion: true parameters: protocol: nfs
+mountOptions:
+ - nconnect=4
``` After editing and saving the file, create the storage class with the [kubectl apply][kubectl-apply] command:
aks Cis Ubuntu https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/cis-ubuntu.md
Title: Azure Kubernetes Service (AKS) Ubuntu image alignment with Center for Internet Security (CIS) benchmark description: Learn how AKS applies the CIS benchmark Previously updated : 04/20/2022 Last updated : 04/19/2023++ # Azure Kubernetes Service (AKS) Ubuntu image alignment with Center for Internet Security (CIS) benchmark
-As a secure service, Azure Kubernetes Service (AKS) complies with SOC, ISO, PCI DSS, and HIPAA standards. This article covers the security OS configuration applied to Ubuntu imaged used by AKS. This security configuration is based on the Azure Linux security baseline which aligns with CIS benchmark. For more information about AKS security, see Security concepts for applications and clusters in Azure Kubernetes Service (AKS). For more information about AKS security, see [Security concepts for applications and clusters in Azure Kubernetes Service (AKS)](./concepts-security.md). For more information on the CIS benchmark, see [Center for Internet Security (CIS) Benchmarks][cis-benchmarks]. For more information on the Azure security baselines for Linux, see [Linux security baseline][linux-security-baseline].
+As a secure service, Azure Kubernetes Service (AKS) complies with SOC, ISO, PCI DSS, and HIPAA standards. This article covers the security OS configuration applied to the Ubuntu image used by AKS. This security configuration is based on the Azure Linux security baseline, which aligns with the CIS benchmark. For more information about AKS security, see [Security concepts for applications and clusters in Azure Kubernetes Service (AKS)](./concepts-security.md). For more information on the CIS benchmark, see [Center for Internet Security (CIS) Benchmarks][cis-benchmarks]. For more information on the Azure security baselines for Linux, see [Linux security baseline][linux-security-baseline].
## Ubuntu LTS 18.04
The following are the results from the [CIS Ubuntu 18.04 LTS Benchmark v2.1.0][c
Recommendations can have one of the following reasons:
-* *Potential Operation Impact* - Recommendation was not applied because it would have a negative effect on the service.
+* *Potential Operation Impact* - Recommendation wasn't applied because it would have a negative effect on the service.
* *Covered Elsewhere* - Recommendation is covered by another control in Azure cloud compute. The following are CIS rules implemented:
The following are CIS rules implemented:
| 1.3.1 | Ensure AIDE is installed | Fail | Covered Elsewhere | | 1.3.2 | Ensure filesystem integrity is regularly checked | Fail | Covered Elsewhere | | 1.4 | Secure Boot Settings |||
-| 1.4.1 | Ensure permissions on bootloader config are not overridden | Fail | |
+| 1.4.1 | Ensure permissions on bootloader config aren't overridden | Fail | |
| 1.4.2 | Ensure bootloader password is set | Fail | Not Applicable| | 1.4.3 | Ensure permissions on bootloader config are configured | Fail | | | 1.4.4 | Ensure authentication required for single user mode | Fail | Not Applicable |
The following are CIS rules implemented:
| 1.8 | GNOME Display Manager ||| | 1.8.2 | Ensure GDM login banner is configured | Pass || | 1.8.3 | Ensure disable-user-list is enabled | Pass ||
-| 1.8.4 | Ensure XDCMP is not enabled | Pass ||
+| 1.8.4 | Ensure XDCMP isn't enabled | Pass ||
| 1.9 | Ensure updates, patches, and additional security software are installed | Pass || | 2 | Services ||| | 2.1 | Special Purpose Services |||
The following are CIS rules implemented:
| 2.1.1.2 | Ensure systemd-timesyncd is configured | Not Applicable | AKS uses ntpd for timesync | | 2.1.1.3 | Ensure chrony is configured | Fail | Covered Elsewhere | | 2.1.1.4 | Ensure ntp is configured | Pass ||
-| 2.1.2 | Ensure X Window System is not installed | Pass ||
-| 2.1.3 | Ensure Avahi Server is not installed | Pass ||
-| 2.1.4 | Ensure CUPS is not installed | Pass ||
-| 2.1.5 | Ensure DHCP Server is not installed | Pass ||
-| 2.1.6 | Ensure LDAP server is not installed | Pass ||
-| 2.1.7 | Ensure NFS is not installed | Pass ||
-| 2.1.8 | Ensure DNS Server is not installed | Pass ||
-| 2.1.9 | Ensure FTP Server is not installed | Pass ||
-| 2.1.10 | Ensure HTTP server is not installed | Pass ||
-| 2.1.11 | Ensure IMAP and POP3 server are not installed | Pass ||
-| 2.1.12 | Ensure Samba is not installed | Pass ||
-| 2.1.13 | Ensure HTTP Proxy Server is not installed | Pass ||
-| 2.1.14 | Ensure SNMP Server is not installed | Pass ||
+| 2.1.2 | Ensure X Window System isn't installed | Pass ||
+| 2.1.3 | Ensure Avahi Server isn't installed | Pass ||
+| 2.1.4 | Ensure CUPS isn't installed | Pass ||
+| 2.1.5 | Ensure DHCP Server isn't installed | Pass ||
+| 2.1.6 | Ensure LDAP server isn't installed | Pass ||
+| 2.1.7 | Ensure NFS isn't installed | Pass ||
+| 2.1.8 | Ensure DNS Server isn't installed | Pass ||
+| 2.1.9 | Ensure FTP Server isn't installed | Pass ||
+| 2.1.10 | Ensure HTTP server isn't installed | Pass ||
+| 2.1.11 | Ensure IMAP and POP3 server aren't installed | Pass ||
+| 2.1.12 | Ensure Samba isn't installed | Pass ||
+| 2.1.13 | Ensure HTTP Proxy Server isn't installed | Pass ||
+| 2.1.14 | Ensure SNMP Server isn't installed | Pass ||
| 2.1.15 | Ensure mail transfer agent is configured for local-only mode | Pass ||
-| 2.1.16 | Ensure rsync service is not installed | Fail | |
-| 2.1.17 | Ensure NIS Server is not installed | Pass ||
+| 2.1.16 | Ensure rsync service isn't installed | Fail | |
+| 2.1.17 | Ensure NIS Server isn't installed | Pass ||
| 2.2 | Service Clients |||
-| 2.2.1 | Ensure NIS Client is not installed | Pass ||
-| 2.2.2 | Ensure rsh client is not installed | Pass ||
-| 2.2.3 | Ensure talk client is not installed | Pass ||
-| 2.2.4 | Ensure telnet client is not installed | Fail | |
-| 2.2.5 | Ensure LDAP client is not installed | Pass ||
-| 2.2.6 | Ensure RPC is not installed | Fail | Potential Operational Impact |
+| 2.2.1 | Ensure NIS Client isn't installed | Pass ||
+| 2.2.2 | Ensure rsh client isn't installed | Pass ||
+| 2.2.3 | Ensure talk client isn't installed | Pass ||
+| 2.2.4 | Ensure telnet client isn't installed | Fail | |
+| 2.2.5 | Ensure LDAP client isn't installed | Pass ||
+| 2.2.6 | Ensure RPC isn't installed | Fail | Potential Operational Impact |
| 2.3 | Ensure nonessential services are removed or masked | Pass | | | 3 | Network Configuration ||| | 3.1 | Disable unused network protocols and devices |||
The following are CIS rules implemented:
| 3.2.1 | Ensure packet redirect sending is disabled | Pass || | 3.2.2 | Ensure IP forwarding is disabled | Fail | Not Applicable | | 3.3 | Network Parameters (Host and Router) |||
-| 3.3.1 | Ensure source routed packets are not accepted | Pass ||
-| 3.3.2 | Ensure ICMP redirects are not accepted | Pass ||
-| 3.3.3 | Ensure secure ICMP redirects are not accepted | Pass ||
+| 3.3.1 | Ensure source routed packets aren't accepted | Pass ||
+| 3.3.2 | Ensure ICMP redirects aren't accepted | Pass ||
+| 3.3.3 | Ensure secure ICMP redirects aren't accepted | Pass ||
| 3.3.4 | Ensure suspicious packets are logged | Pass || | 3.3.5 | Ensure broadcast ICMP requests are ignored | Pass || | 3.3.6 | Ensure bogus ICMP responses are ignored | Pass || | 3.3.7 | Ensure Reverse Path Filtering is enabled | Pass || | 3.3.8 | Ensure TCP SYN Cookies is enabled | Pass ||
-| 3.3.9 | Ensure IPv6 router advertisements are not accepted | Pass ||
+| 3.3.9 | Ensure IPv6 router advertisements aren't accepted | Pass ||
| 3.4 | Uncommon Network Protocols ||| | 3.5 | Firewall Configuration ||| | 3.5.1 | Configure UncomplicatedFirewall |||
The following are CIS rules implemented:
| 6.1.14 | Audit SGID executables | Not Applicable | | | 6.2 | User and Group Settings ||| | 6.2.1 | Ensure accounts in /etc/passwd use shadowed passwords | Pass ||
-| 6.2.2 | Ensure password fields are not empty | Pass ||
+| 6.2.2 | Ensure password fields aren't empty | Pass ||
| 6.2.3 | Ensure all groups in /etc/passwd exist in /etc/group | Pass || | 6.2.4 | Ensure all users' home directories exist | Pass || | 6.2.5 | Ensure users own their home directories | Pass || | 6.2.6 | Ensure users' home directories permissions are 750 or more restrictive | Pass ||
-| 6.2.7 | Ensure users' dot files are not group or world writable | Pass ||
+| 6.2.7 | Ensure users' dot files aren't group or world writable | Pass ||
| 6.2.8 | Ensure no users have .netrc files | Pass || | 6.2.9 | Ensure no users have .forward files | Pass || | 6.2.10 | Ensure no users have .rhosts files | Pass ||
For more information about AKS security, see the following articles:
[cis-benchmarks]: /compliance/regulatory/offering-CIS-Benchmark [cis-benchmark-aks]: https://www.cisecurity.org/benchmark/kubernetes/ [cis-benchmark-ubuntu]: https://www.cisecurity.org/benchmark/ubuntu/
-[linux-security-baseline]: ../governance/policy/samples/guest-configuration-baseline-linux.md
+[linux-security-baseline]: ../governance/policy/samples/guest-configuration-baseline-linux.md
aks Configure Azure Cni Dynamic Ip Allocation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/configure-azure-cni-dynamic-ip-allocation.md
Title: Configure Azure CNI networking for dynamic allocation of IPs and enhanced subnet support in Azure Kubernetes Service (AKS)
+ Title: Configure Azure CNI networking for dynamic allocation of IPs and enhanced subnet support
+ description: Learn how to configure Azure CNI (advanced) networking for dynamic allocation of IPs and enhanced subnet support in Azure Kubernetes Service (AKS)++++ Previously updated : 01/09/2023 Last updated : 04/20/2023
It offers the following benefits:
* **Better IP utilization**: IPs are dynamically allocated to cluster Pods from the Pod subnet. This leads to better utilization of IPs in the cluster compared to the traditional CNI solution, which does static allocation of IPs for every node. * **Scalable and flexible**: Node and pod subnets can be scaled independently. A single pod subnet can be shared across multiple node pools of a cluster or across multiple AKS clusters deployed in the same VNet. You can also configure a separate pod subnet for a node pool.
-* **High performance**: Since pod are assigned VNet IPs, they have direct connectivity to other cluster pod and resources in the VNet. The solution supports very large clusters without any degradation in performance.
-* **Separate VNet policies for pods**: Since pods have a separate subnet, you can configure separate VNet policies for them that are different from node policies. This enables many useful scenarios such as allowing internet connectivity only for pods and not for nodes, fixing the source IP for pod in a node pool using a VNet Network NAT, and using NSGs to filter traffic between node pools.
+* **High performance**: Since pods are assigned virtual network IPs, they have direct connectivity to other cluster pods and resources in the VNet. The solution supports very large clusters without any degradation in performance.
+* **Separate VNet policies for pods**: Since pods have a separate subnet, you can configure separate VNet policies for them that are different from node policies. This enables many useful scenarios such as allowing internet connectivity only for pods and not for nodes, fixing the source IP for pods in a node pool using an Azure NAT Gateway, and using NSGs to filter traffic between node pools.
* **Kubernetes network policies**: Both the Azure Network Policies and Calico work with this new solution. This article shows you how to use Azure CNI networking for dynamic allocation of IPs and enhanced subnet support in AKS.
az aks nodepool add --cluster-name $clusterName -g $resourceGroup -n newnodepoo
--no-wait ```
+## Monitor IP subnet usage
+
+Azure CNI provides the capability to monitor IP subnet usage. To enable IP subnet usage monitoring, follow the steps below:
+
+### Get the YAML file
+
+1. Download or grep the file named container-azm-ms-agentconfig.yaml from [GitHub][github].
+
+2. Find **`azure_subnet_ip_usage`** in integrations and set `enabled` to `true`, as shown in the sketch after these steps.
+
+3. Save the file.
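For orientation, the part of the file you're editing looks roughly like the following sketch. This is an assumption about the file's shape based on the steps above; the surrounding keys and other integration entries in the downloaded file may differ.

```yaml
# Rough sketch of the relevant portion of container-azm-ms-agentconfig.yaml (a ConfigMap for
# the monitoring agent). The azure_subnet_ip_usage setting is the one this article asks you
# to change; other keys and values are assumptions and may differ in the real file.
apiVersion: v1
kind: ConfigMap
metadata:
  name: container-azm-ms-agentconfig
  namespace: kube-system
data:
  integrations: |-
    [integrations.azure_subnet_ip_usage]
        enabled = true    # set to true to collect subnet IP usage metrics
```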
+
+### Get the AKS credentials
+
+Set the variables for subscription, resource group and cluster. Consider the following as examples:
+
+```azurecli
+# Set variables for your subscription ID, resource group, and cluster name.
+s="subscriptionId"
+rg="resourceGroup"
+c="ClusterName"
+
+# Select the subscription and download the cluster credentials for kubectl.
+az account set -s $s
+
+az aks get-credentials -n $c -g $rg
+```
+
+### Apply the config
+
+1. Open terminal in the folder the downloaded **container-azm-ms-agentconfig.yaml** file is saved.
+
+2. First, apply the config using the command: `kubectl apply -f container-azm-ms-agentconfig.yaml`
+
+3. This restarts the pod, and after 5-10 minutes the metrics will be visible.
+
+4. To view the metrics on the cluster, go to Workbooks on the cluster page in the Azure portal, and find the workbook named "Subnet IP Usage". Your view will look similar to the following:
+
+ :::image type="content" source="media/configure-azure-cni-dynamic-ip-allocation/ip-subnet-usage.png" alt-text="A diagram of the Azure portal's workbook blade is shown, and metrics for an AKS cluster's subnet IP usage are displayed.":::
+ ## Dynamic allocation of IP addresses and enhanced subnet support FAQs * **Can I assign multiple pod subnets to a cluster/node pool?**
Learn more about networking in AKS in the following articles:
* [Create an ingress controller with a dynamic public IP and configure Let's Encrypt to automatically generate TLS certificates][aks-ingress-tls] * [Create an ingress controller with a static public IP and configure Let's Encrypt to automatically generate TLS certificates][aks-ingress-static-tls]
+<!-- LINKS - External -->
+[github]: https://raw.githubusercontent.com/microsoft/Docker-Provider/ci_prod/kubernetes/container-azm-ms-agentconfig.yaml
+ <!-- LINKS - Internal --> [aks-ingress-basic]: ingress-basic.md [aks-ingress-tls]: ingress-tls.md
aks Configure Azure Cni https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/configure-azure-cni.md
description: Learn how to configure Azure CNI (advanced) networking in Azure Kubernetes Service (AKS), including deploying an AKS cluster into an existing virtual network and subnet. + Previously updated : 05/16/2022 Last updated : 04/20/2023
The following screenshot from the Azure portal shows an example of configuring t
:::image type="content" source="../aks/media/networking-overview/portal-01-networking-advanced.png" alt-text="Screenshot from the Azure portal showing an example of configuring these settings during AKS cluster creation.":::
-## Monitor IP subnet usage
-
-Azure CNI provides the capability to monitor IP subnet usage. To enable IP subnet usage monitoring, follow the steps below:
-
-### Get the YAML file
-
-1. Download or grep the file named container-azm-ms-agentconfig.yaml from [GitHub][github].
-2. Find azure_subnet_ip_usage in integrations. Set `enabled` to `true`.
-3. Save the file.
-
-### Get the AKS credentials
-
-Set the variables for subscription, resource group and cluster. Consider the following as examples:
-
-```azurepowershell
-
- $s="subscriptionId"
-
- $rg="resourceGroup"
-
- $c="ClusterName"
-
- az account set -s $s
-
- az aks get-credentials -n $c -g $rg
-
-```
-
-### Apply the config
-
-1. Open terminal in the folder the downloaded container-azm-ms-agentconfig.yaml file is saved.
-2. First, apply the config using the command: `kubectl apply -f container-azm-ms-agentconfig.yaml`
-3. This will restart the pod and after 5-10 minutes, the metrics will be visible.
-4. To view the metrics on the cluster, go to Workbooks on the cluster page in the Azure portal, and find the workbook named "Subnet IP Usage". Your view will look similar to the following:
-
- :::image type="content" source="media/Azure-cni/ip-subnet-usage.png" alt-text="A diagram of the Azure portal's workbook blade is shown, and metrics for an AKS cluster's subnet IP usage are displayed.":::
- ## Frequently asked questions * **Can I deploy VMs in my cluster subnet?**
aks Keda About https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/keda-about.md
Learn more about how KEDA works in the [official KEDA documentation][keda-archit
## Installation and version - KEDA can be added to your Azure Kubernetes Service (AKS) cluster by enabling the KEDA add-on using an [ARM template][keda-arm] or [Azure CLI][keda-cli]. The KEDA add-on provides a fully supported installation of KEDA that is integrated with AKS.
aks Keda Integrations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/keda-integrations.md
The Kubernetes Event-driven Autoscaling (KEDA) add-on integrates with features provided by Azure and open source projects. - [!INCLUDE [preview features callout](./includes/preview/preview-callout.md)] > [!IMPORTANT]
aks Manage Abort Operations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/manage-abort-operations.md
Last updated 3/23/2023
Sometimes deployment or other processes running within pods on nodes in a cluster can run for periods of time longer than expected due to various reasons. While it's important to allow those processes to gracefully terminate when they're no longer needed, there are circumstances where you need to release control of node pools and clusters with long running operations using an *abort* command.
-AKS now supports aborting a long running operation, which is now generally available. This feature allows you to take back control and run another operation seamlessly. This design is supported using the [Azure REST API](/rest/api/azure/) or the [Azure CLI](/cli/azure/).
+AKS support for aborting long running operations is now generally available. This feature allows you to take back control and run another operation seamlessly. This design is supported using the [Azure REST API](/rest/api/azure/) or the [Azure CLI](/cli/azure/).
The abort operation supports the following scenarios:
When you terminate an operation, it doesn't roll back to the previous state and
## Next steps
-Learn more about [Container insights](../azure-monitor/containers/container-insights-overview.md) to understand how it helps you monitor the performance and health of your Kubernetes cluster and container workloads.
+Learn more about [Container insights](../azure-monitor/containers/container-insights-overview.md) to understand how it helps you monitor the performance and health of your Kubernetes cluster and container workloads.
aks Node Updates Kured https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/node-updates-kured.md
Title: Handle Linux node reboots with kured
description: Learn how to update Linux nodes and automatically reboot them with kured in Azure Kubernetes Service (AKS) Previously updated : 02/28/2019+ Last updated : 04/19/2023 #Customer intent: As a cluster administrator, I want to know how to automatically apply Linux updates and reboot nodes in AKS for security and/or compliance
You need the Azure CLI version 2.0.59 or later installed and configured. Run `az
## Understand the AKS node update experience
-In an AKS cluster, your Kubernetes nodes run as Azure virtual machines (VMs). These Linux-based VMs use an Ubuntu or Mariner image, with the OS configured to automatically check for updates every day. If security or kernel updates are available, they are automatically downloaded and installed.
+In an AKS cluster, your Kubernetes nodes run as Azure virtual machines (VMs). These Linux-based VMs use an Ubuntu or Mariner image, with the OS configured to automatically check for updates every day. If security or kernel updates are available, they're automatically downloaded and installed.
![AKS node update and reboot process with kured](media/node-updates-kured/node-reboot-process.png)
You can use your own workflows and processes to handle node reboots, or use `kur
### Node image upgrades
-Unattended upgrades apply updates to the Linux node OS, but the image used to create nodes for your cluster remains unchanged. If a new Linux node is added to your cluster, the original image is used to create the node. This new node will receive all the security and kernel updates available during the automatic check every day but will remain unpatched until all checks and restarts are complete.
+Unattended upgrades apply updates to the Linux node OS, but the image used to create nodes for your cluster remains unchanged. If a new Linux node is added to your cluster, the original image is used to create the node. This new node receives all the security and kernel updates available during the automatic check every day but remains unpatched until all checks and restarts are complete.
-Alternatively, you can use node image upgrade to check for and update node images used by your cluster. For more details on node image upgrade, see [Azure Kubernetes Service (AKS) node image upgrade][node-image-upgrade].
+Alternatively, you can use node image upgrade to check for and update node images used by your cluster. For more information on node image upgrade, see [Azure Kubernetes Service (AKS) node image upgrade][node-image-upgrade].
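+
+The following is a minimal sketch of checking for and triggering a node image upgrade for a single node pool with the Azure CLI; the resource group, cluster, and node pool names are placeholders.
+
+```bash
+# Check whether a newer node image is available for the node pool
+az aks nodepool get-upgrades --resource-group myResourceGroup --cluster-name myAKSCluster --nodepool-name nodepool1
+
+# Upgrade only the node image, without changing the Kubernetes version
+az aks nodepool upgrade --resource-group myResourceGroup --cluster-name myAKSCluster --name nodepool1 --node-image-only
+```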
### Node upgrades
-There is an additional process in AKS that lets you *upgrade* a cluster. An upgrade is typically to move to a newer version of Kubernetes, not just apply node security updates. An AKS upgrade performs the following actions:
+There's another process in AKS that lets you *upgrade* a cluster. An upgrade is typically to move to a newer version of Kubernetes, not just apply node security updates. An AKS upgrade performs the following actions:
* A new node is deployed with the latest security updates and Kubernetes version applied. * An old node is cordoned and drained.
kubectl create namespace kured
helm install my-release kubereboot/kured --namespace kured --set controller.nodeSelector."kubernetes\.io/os"=linux ```
-You can also configure additional parameters for `kured`, such as integration with Prometheus or Slack. For more information about additional configuration parameters, see the [kured Helm chart][kured-install].
+You can also configure extra parameters for `kured`, such as integration with Prometheus or Slack. For more information about configuration parameters, see the [kured Helm chart][kured-install].
## Update cluster nodes
If updates were applied that require a node reboot, a file is written to */var/r
## Monitor and review reboot process
-When one of the replicas in the DaemonSet has detected that a node reboot is required, a lock is placed on the node through the Kubernetes API. This lock prevents additional pods being scheduled on the node. The lock also indicates that only one node should be rebooted at a time. With the node cordoned off, running pods are drained from the node, and the node is rebooted.
+When one of the replicas in the DaemonSet has detected that a node reboot is required, a lock is placed on the node through the Kubernetes API. This lock prevents more pods from being scheduled on the node. The lock also indicates that only one node should be rebooted at a time. With the node cordoned off, running pods are drained from the node, and the node is rebooted.
You can monitor the status of the nodes using the [kubectl get nodes][kubectl-get-nodes] command. The following example output shows a node with a status of *SchedulingDisabled* as the node prepares for the reboot process:
NAME STATUS ROLES AGE VERSIO
aks-nodepool1-28993262-0 Ready,SchedulingDisabled agent 1h v1.11.7 ```
-Once the update process is complete, you can view the status of the nodes using the [kubectl get nodes][kubectl-get-nodes] command with the `--output wide` parameter. This additional output lets you see a difference in *KERNEL-VERSION* of the underlying nodes, as shown in the following example output. The *aks-nodepool1-28993262-0* was updated in a previous step and shows kernel version *4.15.0-1039-azure*. The node *aks-nodepool1-28993262-1* that hasn't been updated shows kernel version *4.15.0-1037-azure*.
+Once the update process is complete, you can view the status of the nodes using the [kubectl get nodes][kubectl-get-nodes] command with the `--output wide` parameter. This output lets you see a difference in *KERNEL-VERSION* of the underlying nodes, as shown in the following example output. The *aks-nodepool1-28993262-0* was updated in a previous step and shows kernel version *4.15.0-1039-azure*. The node *aks-nodepool1-28993262-1* that hasn't been updated shows kernel version *4.15.0-1037-azure*.
```output NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
aks Use Mariner https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/use-mariner.md
Title: Use the Mariner container host on Azure Kubernetes Service (AKS)
description: Learn how to use the Mariner container host on Azure Kubernetes Service (AKS) Previously updated : 12/08/2022 Last updated : 04/19/2023 # Use the Mariner container host on Azure Kubernetes Service (AKS)
api-management Api Management Gateways Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-gateways-overview.md
The following table compares features available in the managed gateway versus th
| [TLS settings](api-management-howto-manage-protocols-ciphers.md) | ✔️ | ✔️ | ✔️ | | **HTTP/2** (Client-to-gateway) | ❌ | ❌ | ✔️ | | **HTTP/2** (Gateway-to-backend) | ❌ | ❌ | ✔️ |
+| API threat detection with [Defender for APIs](protect-with-defender-for-apis.md) | ✔️ | ❌ | ❌ |
<sup>1</sup> Depends on how the gateway is deployed, but is the responsibility of the customer.<br/> <sup>2</sup> Connectivity to the self-hosted gateway v2 [configuration endpoint](self-hosted-gateway-overview.md#fqdn-dependencies) requires DNS resolution of the default endpoint hostname; custom domain name is currently not supported.<br/>
The following table compares features available in the managed gateway versus th
| API | Managed (Dedicated) | Managed (Consumption) | Self-hosted | | | -- | -- | - | | [OpenAPI specification](import-api-from-oas.md) | ✔️ | ✔️ | ✔️ |
-| [WSDL specification)](import-soap-api.md) | ✔️ | ✔️ | ✔️ |
+| [WSDL specification](import-soap-api.md) | ✔️ | ✔️ | ✔️ |
| WADL specification | ✔️ | ✔️ | ✔️ | | [Logic App](import-logic-app-as-api.md) | ✔️ | ✔️ | ✔️ | | [App Service](import-app-service-as-api.md) | ✔️ | ✔️ | ✔️ |
The following table compares features available in the managed gateway versus th
| [Container App](import-container-app-with-oas.md) | ✔️ | ✔️ | ✔️ | | [Service Fabric](../service-fabric/service-fabric-api-management-overview.md) | Developer, Premium | ❌ | ❌ | | [Pass-through GraphQL](graphql-apis-overview.md) | ✔️ | ✔️ | ❌ |
-| [Synthetic GraphQL](graphql-apis-overview.md)| ✔️ | ✔️ | ❌ |
+| [Synthetic GraphQL](graphql-apis-overview.md)| ✔️ | ✔️<sup>1</sup> | ❌ |
| [Pass-through WebSocket](websocket-api.md) | ✔️ | ❌ | ✔️ |
+<sup>1</sup> Synthetic GraphQL subscriptions (preview) aren't supported in the Consumption tier.
+ ### Policies Managed and self-hosted gateways support all available [policies](api-management-policies.md) in policy definitions with the following exceptions.
api-management Api Management Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-policies.md
More information about policies:
- [Validate GraphQL request](validate-graphql-request-policy.md) - Validates and authorizes a request to a GraphQL API. - [Validate parameters](validate-parameters-policy.md) - Validates the request header, query, or path parameters against the API schema. - [Validate headers](validate-headers-policy.md) - Validates the response headers against the API schema.-- [Validate status code](validate-status-code-policy.md) - Validates the HTTP status codes in
+- [Validate status code](validate-status-code-policy.md) - Validates the HTTP status codes in responses against the API schema.
## Next steps For more information about working with policies, see:
api-management Api Management Subscriptions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-subscriptions.md
Previously updated : 12/16/2022 Last updated : 04/19/2023
In Azure API Management, *subscriptions* are the most common way for API consumers to access APIs published through an API Management instance. This article provides an overview of the concept.
+> [!NOTE]
+> An API Management subscription is used specifically to call APIs through API Management. It's not the same as an Azure subscription.
+ ## What are subscriptions? By publishing APIs through API Management, you can easily secure API access using subscription keys. Developers who need to consume the published APIs must include a valid subscription key in HTTP requests when calling those APIs. Without a valid subscription key, the calls are:
Each API Management instance comes with an immutable, all-APIs subscription (als
### Standalone subscriptions
-API Management also allows *standalone* subscriptions, which are not associated with a developer account. This feature proves useful in scenarios similar to several developers or teams sharing a subscription.
+API Management also allows *standalone* subscriptions, which aren't associated with a developer account. This feature is useful in scenarios such as several developers or teams sharing a subscription.
Creating a subscription without assigning an owner makes it a standalone subscription. To grant developers and the rest of your team access to the standalone subscription key, either: * Manually share the subscription key.
API publishers can [create subscriptions](api-management-howto-create-subscripti
When created in the portal, a subscription is in the **Active** state, meaning a subscriber can call an associated API using a valid subscription key. You can change the state of the subscription as needed - for example, you can suspend, cancel, or delete the subscription to prevent API access.
+## Use a subscription key
+
+A subscriber can use an API Management subscription key in one of two ways:
+
+* Add the **Ocp-Apim-Subscription-Key** HTTP header to the request, passing the value of a valid subscription key.
+
+* Include the **subscription-key** query parameter and a valid value in the URL. The query parameter is checked only if the header isn't present.
+
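+For example, the following calls show both options; the gateway URL, API path, and key value are placeholders.
+
+```bash
+# Pass the subscription key in the default header
+curl -H "Ocp-Apim-Subscription-Key: <your-subscription-key>" "https://<apim-instance>.azure-api.net/<api-path>"
+
+# Or pass it as the default query parameter (checked only if the header is absent)
+curl "https://<apim-instance>.azure-api.net/<api-path>?subscription-key=<your-subscription-key>"
+```
+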
+> [!TIP]
+> **Ocp-Apim-Subscription-Key** is the default name of the subscription key header, and **subscription-key** is the default name of the query parameter. If desired, you may modify these names in the settings for each API. For example, in the portal, update these names on the **Settings** tab of an API.
+ ## Enable or disable subscription requirement for API or product access By default when you create an API, a subscription key is required for API access. Similarly, when you create a product, by default a subscription key is required to access any API that's added to the product. Under certain scenarios, an API publisher might want to publish a product or a particular API to the public without the requirement of subscriptions. While a publisher could choose to enable unsecured (anonymous) access to certain APIs, configuring another mechanism to secure client access is recommended.
api-management Metrics Retirement Aug 2023 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/breaking-changes/metrics-retirement-aug-2023.md
+
+ Title: Azure API Management - Metrics retirement (August 2023)
+description: Azure API Management is retiring five legacy metrics as of August 2023. If you monitor your API Management instance using these metrics, you must update your monitoring settings and alert rules to use the Requests metric.
+
+documentationcenter: ''
+++ Last updated : 04/20/2023+++
+# Metrics retirements (August 2023)
+
+Azure API Management integrates natively with Azure Monitor and emits metrics every minute, giving customers visibility into the state and health of their APIs. The following five legacy metrics have been deprecated since May 2019 and will no longer be available after 31 August 2023:
+
+* Total Gateway Requests
+* Successful Gateway Requests
+* Unauthorized Gateway Requests
+* Failed Gateway Requests
+* Other Gateway Requests
+
+To enable a more granular view of API traffic and better performance, API Management provides a replacement metric named **Requests**. The Requests metric has dimensions that can be used for filtering to replace the legacy metrics and also support more monitoring scenarios.
+
+From now through 31 August 2023, you can continue to use the five legacy metrics without impact. You can transition to the Requests metric at any point prior to 31 August 2023.
+
+## Is my service affected by this?
+
+Your service isn't affected by this change. However, any tool, script, or program that uses the five retired metrics for monitoring or alert rules is affected, and it won't run successfully unless you update it.
+
+## What is the deadline for the change?
+
+The five legacy metrics will no longer be available after 31 August 2023.
+
+## Required action
+
+Update any tools that use the five legacy metrics to use equivalent functionality that is provided through the Requests metric filtered on one or more dimensions. For example, filter Requests on the **GatewayResponseCode** or **GatewayResponseCodeCategory** dimension.
+
+> [!NOTE]
+> Configure filters on the Requests metric to meet your monitoring and alerting needs. For available dimensions, see [Azure Monitor metrics for API Management](../../azure-monitor/essentials/metrics-supported.md#microsoftapimanagementservice).
++
+|Legacy metric |Example replacement with Requests metric|
+|||
+|Total Gateway Requests | Requests |
+|Successful Gateway Requests | Requests<br/> Filter: GatewayResponseCode = 0-301,304,307 |
+|Unauthorized Gateway Requests | Requests<br/> Filter: GatewayResponseCode = 401,403,429 |
+|Failed Gateway Requests | Requests<br/> Filter: GatewayResponseCode = 400,500-599 |
+|Other Gateway Requests | Requests<br/> Filter: GatewayResponseCode = (all other values) |
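+
+For example, the following is a minimal sketch of querying the Requests metric with a dimension filter from the Azure CLI; the resource ID and the `5xx` category value are placeholders, so adjust them to your instance and scenario.
+
+```bash
+# Query the Requests metric, filtered on a response-code category dimension
+az monitor metrics list \
+  --resource "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.ApiManagement/service/<apim-instance>" \
+  --metric "Requests" \
+  --filter "GatewayResponseCodeCategory eq '5xx'"
+```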
+
+## More information
+
+* [Tutorial: Monitor published APIs](../api-management-howto-use-azure-monitor.md)
+* [Get API analytics in Azure API Management](../howto-use-analytics.md)
+* [Observability in API Management](../observability.md)
+
+## Next steps
+
+See all [upcoming breaking changes and feature retirements](overview.md).
api-management Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/breaking-changes/overview.md
Previously updated : 09/07/2022 Last updated : 03/15/2023
The following table lists all the upcoming breaking changes and feature retireme
| Change Title | Effective Date | |:-|:| | [Resource provider source IP address updates][bc1] | March 31, 2023 |
+| [Metrics retirements][metrics2023] | August 31, 2023 |
| [Resource provider source IP address updates][rp2023] | September 30, 2023 | | [API version retirements][api2023] | September 30, 2023 | | [Deprecated (legacy) portal retirement][devportal2023] | October 31, 2023 |
The following table lists all the upcoming breaking changes and feature retireme
[stv12024]: ./stv1-platform-retirement-august-2024.md [msal2025]: ./identity-provider-adal-retirement-sep-2025.md [captcha2025]: ./captcha-endpoint-change-sep-2025.md
+[metrics2023]: ./metrics-retirement-aug-2023.md
api-management Rp Source Ip Address Change Mar 2023 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/breaking-changes/rp-source-ip-address-change-mar-2023.md
On 31 March, 2023 as part of our continuing work to increase the resiliency of A
This change will have NO effect on the availability of your API Management service. However, you **may** have to take steps described below to configure your API Management service beyond 31 March, 2023.
+> These changes were completed between April 1, 2023 and April 20, 2023. You can remove the IP addresses noted in the _Old IP Address_ column from your NSG.
+ ## Is my service affected by this change? Your service is impacted by this change if:
api-management Diagnostic Logs Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/diagnostic-logs-reference.md
This reference describes settings for API diagnostics logging from an API Manage
| Always log errors | boolean | If this setting is enabled, all failures are logged, regardless of the **Sampling** setting. | Log client IP address | boolean | If this setting is enabled, the client IP address for API requests is logged. | | Verbosity | | Specifies the verbosity of the logs and whether custom traces that are configured in [trace](trace-policy.md) policies are logged. <br/><br/>* Error - failed requests, and custom traces of severity `error`<br/>* Information - failed and successful requests, and custom traces of severity `error` and `information`<br/> * Verbose - failed and successful requests, and custom traces of severity `error`, `information`, and `verbose`<br/><br/>Default: Information |
-| Correlation protocol | | Specifies the protocol used to correlate telemetry sent by multiple components to Application Insights. Default: Legacy <br/><br/>For information, see [Telemetry correlation in Application Insights](../azure-monitor/app/correlation.md). |
+| Correlation protocol | | Specifies the protocol used to correlate telemetry sent by multiple components to Application Insights. Default: Legacy <br/><br/>For information, see [Telemetry correlation in Application Insights](../azure-monitor/app/distributed-tracing-telemetry-correlation.md). |
| Headers to log | list | Specifies the headers that are logged for requests and responses. Default: no headers are logged. | | Number of payload bytes to log | integer | Specifies the number of initial bytes of the body that are logged for requests and responses. Default: 0 | | Frontend Request | | Specifies whether and how *frontend requests* (requests incoming to the API Management gateway) are logged.<br/><br/> If this setting is enabled, specify **Headers to log**, **Number of payload bytes to log**, or both. |
api-management Graphql Apis Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/graphql-apis-overview.md
API Management helps you import, manage, protect, test, publish, and monitor Gra
* GraphQL APIs are supported in all API Management service tiers * Pass-through and synthetic GraphQL APIs currently aren't supported in a self-hosted gateway
-* GraphQL subscription support in synthetic GraphQL APIs is currently in preview
+* Support for GraphQL subscriptions in synthetic GraphQL APIs is currently in preview and isn't available in the Consumption tier
## What is GraphQL?
api-management Mitigate Owasp Api Threats https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/mitigate-owasp-api-threats.md
description: Learn how to protect against common API-based vulnerabilities, as i
Previously updated : 05/31/2022 Last updated : 04/13/2023
The Open Web Application Security Project ([OWASP](https://owasp.org/about/)) Fo
The OWASP [API Security Project](https://owasp.org/www-project-api-security/) focuses on strategies and solutions to understand and mitigate the unique *vulnerabilities and security risks of APIs*. In this article, we'll discuss recommendations to use Azure API Management to mitigate the top 10 API threats identified by OWASP.
+> [!NOTE]
+> In addition to following the recommendations in this article, you can enable Defender for APIs (preview), a capability of [Microsoft Defender for Cloud](/azure/defender-for-cloud/defender-for-cloud-introduction), for API security insights, recommendations, and threat detection. [Learn more about using Defender for APIs with API Management](protect-with-defender-for-apis.md).
+ ## Broken object level authorization API objects that aren't protected with the appropriate level of authorization may be vulnerable to data leaks and unauthorized data manipulation through weak object access identifiers. For example, an attacker could exploit an integer object identifier, which can be iterated.
More information about this threat: [API10:2019 Insufficient logging and monito
## Next steps
+Learn more about:
+ * [Authentication and authorization in API Management](authentication-authorization-overview.md) * [Security baseline for API Management](/security/benchmark/azure/baselines/api-management-security-baseline) * [Security controls by Azure policy](security-controls-policy.md) * [Landing zone accelerator for API Management](/azure/cloud-adoption-framework/scenarios/app-platform/api-management/landing-zone-accelerator)
+* [Microsoft Defender for Cloud](/azure/defender-for-cloud/defender-for-cloud-introduction)
api-management Index https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/policies/index.md
| [Get X-CSRF token from SAP gateway using send request policy](./get-x-csrf-token-from-sap-gateway.md) | Shows how to implement X-CSRF pattern used by many APIs. This example is specific to SAP Gateway. | | [Route the request based on the size of its body](./route-requests-based-on-size.md) | Demonstrates how to route requests based on the size of their bodies. | | [Send request context information to the backend service](./send-request-context-info-to-backend-service.md) | Shows how to send some context information to the backend service for logging or processing. |
-| [Set response cache duration](./set-cache-duration.md) | Demonstrates how to set response cache duration using maxAge value in Cache-Control header sent by the backend. |
| **Outbound policies** | **Description** | | [Filter response content](./filter-response-content.md) | Demonstrates how to filter data elements from the response payload based on the product associated with the request. |
+| [Set response cache duration](./set-cache-duration.md) | Demonstrates how to set response cache duration using maxAge value in Cache-Control header sent by the backend. |
| **On-error policies** | **Description** | | [Log errors to Stackify](./log-errors-to-stackify.md) | Shows how to add an error logging policy to send errors to Stackify for logging. |
api-management Protect With Defender For Apis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/protect-with-defender-for-apis.md
+
+ Title: Protect APIs in API Management with Defender for APIs
+description: Learn how to enable advanced API security features in Azure API Management by using Microsoft Defender for Cloud.
+++++ Last updated : 04/20/2023++
+# Enable advanced API security features using Microsoft Defender for Cloud
+<!-- Update links to D4APIs docs when available -->
+
+Defender for APIs, a capability of [Microsoft Defender for Cloud](/azure/defender-for-cloud/defender-for-cloud-introduction), offers full lifecycle protection, detection, and response coverage for APIs that are managed in Azure API Management. The service empowers security practitioners to gain visibility into their business-critical APIs, understand their security posture, prioritize vulnerability fixes, and detect active runtime threats within minutes.
+
+Capabilities of Defender for APIs include:
+
+* Identify external, unused, or unauthenticated APIs
+* Classify APIs that receive or respond with sensitive data
+* Apply configuration recommendations to strengthen the security posture of APIs and API Management services
+* Detect anomalous and suspicious API traffic patterns and exploits of OWASP API top 10 vulnerabilities
+* Prioritize threat remediation
+* Integrate with SIEM systems and Defender Cloud Security Posture Management
+
+This article shows how to use the Azure portal to enable Defender for APIs from your API Management instance and view a summary of security recommendations and alerts for onboarded APIs.
++
+## Preview limitations
+
+* Currently, Defender for APIs discovers and analyzes REST APIs only.
+* Defender for APIs currently doesn't onboard APIs that are exposed using the API Management [self-hosted gateway](self-hosted-gateway-overview.md) or managed using API Management [workspaces](workspaces-overview.md).
+* Some ML-based detections and security insights (data classification, authentication check, unused and external APIs) aren't supported in secondary regions in [multi-region](api-management-howto-deploy-multi-region.md) deployments. Defender for APIs relies on local data pipelines to ensure regional data residency and improved performance in such deployments.
+
+
+## Prerequisites
+
+* At least one API Management instance in an Azure subscription. Defender for APIs is enabled at the level of a subscription.
+* One or more supported APIs must be imported to the API Management instance.
+* Role assignment to [enable the Defender for APIs plan](/azure/defender-for-cloud/permissions).
+* Contributor or Owner role assignment on relevant Azure subscriptions, resource groups, or API Management instances that you want to secure.
+
+## Onboard to Defender for APIs
+
+Onboarding APIs to Defender for APIs is a two-step process: enabling the Defender for APIs plan for the subscription, and onboarding unprotected APIs in your API Management instances.
+
+> [!TIP]
+> You can also onboard to Defender for APIs directly in the Defender for Cloud interface, where more API security insights and inventory experiences are available.
++
+### Enable the Defender for APIs plan for a subscription
+
+1. Sign in to the [portal](https://portal.azure.com), and go to your API Management instance.
+
+1. In the left menu, select **Microsoft Defender for Cloud (preview)**.
+
+1. Select **Enable Defender on the subscription**.
+
+ :::image type="content" source="media/protect-with-defender-for-apis/enable-defender-for-apis.png" alt-text="Screenshot showing how to enable Defender for APIs in the portal." lightbox="media/protect-with-defender-for-apis/enable-defender-for-apis.png":::
+
+1. On the **Defender plan** page, select **On** for the **APIs** plan.
+
+1. Select **Save**.
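+
+If you prefer to enable the plan from the command line instead of the portal, a minimal sketch using the Azure CLI is shown below; the plan name `Api` is an assumption, so verify it against the current Defender for Cloud documentation before relying on it.
+
+```bash
+# Enable the Defender for APIs plan on the current subscription (plan name "Api" assumed)
+az security pricing create --name Api --tier Standard
+```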
+
+### Onboard unprotected APIs to Defender for APIs
+
+> [!CAUTION]
+> Onboarding APIs to Defender for APIs may increase compute, memory, and network utilization of your API Management instance, which in extreme cases may cause an outage of the API Management instance. Do not onboard all APIs at one time if your API Management instance is running at high utilization. Use caution by gradually onboarding APIs, while monitoring the utilization of your instance (for example, using [the capacity metric](api-management-capacity.md)) and scaling out as needed.
+
+1. In the portal, go back to your API Management instance.
+1. In the left menu, select **Microsoft Defender for Cloud (preview)**.
+1. Under **Recommendations**, select **Azure API Management APIs should be onboarded to Defender for APIs**.
+ :::image type="content" source="media/protect-with-defender-for-apis/defender-for-apis-recommendations.png" alt-text="Screenshot of Defender for APIs recommendations in the portal." lightbox="media/protect-with-defender-for-apis/defender-for-apis-recommendations.png":::
+1. On the next screen, review details about the recommendation:
+ * Severity
+ * Refresh interval for security findings
+ * Description and remediation steps
+ * Affected resources, classified as **Healthy** (onboarded to Defender for APIs), **Unhealthy** (not onboarded), or **Not applicable**, along with associated metadata from API Management
+
+ > [!NOTE]
+ > Affected resources include API collections (APIs) from all API Management instances under the subscription.
+
+1. From the list of **Unhealthy** resources, select the API(s) that you wish to onboard to Defender for APIs.
+1. Select **Fix**, and then select **Fix resources**.
+ :::image type="content" source="media/protect-with-defender-for-apis/fix-unhealthy-resources.png" alt-text="Screenshot of onboarding unhealthy APIs in the portal." lightbox="media/protect-with-defender-for-apis/fix-unhealthy-resources.png":::
+1. Track the status of onboarded resources under **Notifications**.
+
+> [!NOTE]
+> Defender for APIs takes 30 minutes to generate its first security insights after onboarding an API. Thereafter, security insights are refreshed every 30 minutes.
+>
+
+## View security coverage
+
+After you onboard the APIs from API Management, Defender for APIs receives API traffic that will be used to build security insights and monitor for threats. Defender for APIs generates security recommendations for risky and vulnerable APIs.
+
+You can view a summary of all security recommendations and alerts for onboarded APIs by selecting **Microsoft Defender for Cloud (preview)** in the menu for your API Management instance:
+
+1. In the portal, go to your API Management instance and select **Microsoft Defender for Cloud (preview)** from the left menu.
+1. Review **Recommendations** and **Security insights and alerts**.
+
+ :::image type="content" source="media/protect-with-defender-for-apis/view-security-insights.png" alt-text="Screenshot of API security insights in the portal." lightbox="media/protect-with-defender-for-apis/view-security-insights.png":::
+
+For the security alerts received, Defender for APIs suggests necessary steps to perform the required analysis and validate the potential exploit or anomaly associated with the APIs. Follow the steps in the security alert to fix and return the APIs to healthy status.
+
+## Offboard protected APIs from Defender for APIs
+
+You can remove APIs from Defender for APIs protection by using Defender for Cloud in the portal. For more information, see the Microsoft Defender for Cloud documentation.
+
+## Next steps
+
+* Learn more about [Defender for Cloud](/azure/defender-for-cloud/defender-for-cloud-introduction)
+* Learn how to [upgrade and scale](upgrade-and-scale.md) an API Management instance
app-service Nat Gateway Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/networking/nat-gateway-integration.md
Title: NAT gateway integration - Azure App Service | Microsoft Docs
+ Title: Azure NAT Gateway integration - Azure App Service | Microsoft Docs
description: Describes how NAT gateway integrates with Azure App Service.
ms.devlang: azurecli
-# Virtual Network NAT gateway integration
+# Azure NAT Gateway integration
-NAT gateway is a fully managed, highly resilient service, which can be associated with one or more subnets and ensures that all outbound Internet-facing traffic will be routed through the gateway. With App Service, there are two important scenarios that you can use NAT gateway for.
+Azure NAT Gateway is a fully managed, highly resilient service that can be associated with one or more subnets and ensures that all outbound internet-facing traffic is routed through the gateway. With App Service, there are two important scenarios where you can use a NAT gateway.
The NAT gateway gives you a static predictable public IP for outbound Internet-facing traffic. It also significantly increases the available [SNAT ports](../troubleshoot-intermittent-outbound-connection-errors.md) in scenarios where you have a high number of concurrent connections to the same public address/port combination.
-For more information and pricing. Go to the [NAT gateway overview](../../virtual-network/nat-gateway/nat-overview.md).
+For more information and pricing, see the [Azure NAT Gateway overview](../../virtual-network/nat-gateway/nat-overview.md).
:::image type="content" source="./media/nat-gateway-integration/nat-gateway-overview.png" alt-text="Diagram shows Internet traffic flowing to a NAT gateway in an Azure Virtual Network."::: > [!Note]
-> * Using NAT gateway with App Service is dependent on virtual network integration, and therefore a supported App Service plan pricing tier is required.
-> * When using NAT gateway together with App Service, all traffic to Azure Storage must be using private endpoint or service endpoint.
-> * NAT gateway cannot be used together with App Service Environment v1 or v2.
+> * Using a NAT gateway with App Service is dependent on virtual network integration, and therefore a supported App Service plan pricing tier is required.
+> * When using a NAT gateway together with App Service, all traffic to Azure Storage must be using private endpoint or service endpoint.
+> * A NAT gateway cannot be used together with App Service Environment v1 or v2.
## Configuring NAT gateway integration
To configure NAT gateway integration with App Service, you need to complete the
* Ensure [Route All](../overview-vnet-integration.md#routes) is enabled for your virtual network integration so the Internet bound traffic will be affected by routes in your virtual network. * Provision a NAT gateway with a public IP and associate it with the virtual network integration subnet.
-Set up NAT gateway through the portal:
+Set up Azure NAT Gateway through the portal:
1. Go to the **Networking** UI in the App Service portal and select virtual network integration in the Outbound Traffic section. Ensure that your app is integrated with a subnet and **Route All** has been enabled. :::image type="content" source="./media/nat-gateway-integration/nat-gateway-route-all-enabled.png" alt-text="Screenshot of Route All enabled for virtual network integration.":::
Associate the NAT gateway with the virtual network integration subnet:
az network vnet subnet update --resource-group [myResourceGroup] --vnet-name [myVnet] --name [myIntegrationSubnet] --nat-gateway myNATgateway ```
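If you haven't yet created the NAT gateway and public IP referenced in the command above, the following is a minimal sketch with placeholder resource names:

```bash
# Create a standard public IP for the NAT gateway
az network public-ip create --resource-group myResourceGroup --name myPublicIP --sku Standard

# Create the NAT gateway and attach the public IP
az network nat gateway create --resource-group myResourceGroup --name myNATgateway --public-ip-addresses myPublicIP --idle-timeout 10
```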
-## Scaling NAT gateway
+## Scaling a NAT gateway
-The same NAT gateway can be used across multiple subnets in the same Virtual Network allowing a NAT gateway to be used across multiple apps and App Service plans.
+The same NAT gateway can be used across multiple subnets in the same virtual network allowing a NAT gateway to be used across multiple apps and App Service plans.
-NAT gateway supports both public IP addresses and public IP prefixes. A NAT gateway can support up to 16 IP addresses across individual IP addresses and prefixes. Each IP address allocates 64,512 ports (SNAT ports) allowing up to 1M available ports. Learn more in the [Scaling section](../../virtual-network/nat-gateway/nat-gateway-resource.md#scalability) of NAT gateway.
+Azure NAT Gateway supports both public IP addresses and public IP prefixes. A NAT gateway can support up to 16 IP addresses across individual IP addresses and prefixes. Each IP address allocates 64,512 ports (SNAT ports) allowing up to 1M available ports. Learn more in the [Scaling section](../../virtual-network/nat-gateway/nat-gateway-resource.md#scalability) of Azure NAT Gateway.
## Next steps
-For more information on the NAT gateway, see [NAT gateway documentation](../../virtual-network/nat-gateway/nat-overview.md).
+For more information on Azure NAT Gateway, see [Azure NAT Gateway documentation](../../virtual-network/nat-gateway/nat-overview.md).
For more information on virtual network integration, see [Virtual network integration documentation](../overview-vnet-integration.md).
app-service Quickstart Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/quickstart-java.md
adobe-target-content: ./quickstart-java-uiex
## Next steps > [!div class="nextstepaction"]
-> [Connect to Azure DB for PostgreSQL with Java](../postgresql/connect-java.md)
+> [Connect to Azure Database for PostgreSQL with Java](../postgresql/connect-java.md)
> [!div class="nextstepaction"] > [Set up CI/CD](deploy-continuous-deployment.md)
automation Automation Hybrid Runbook Worker https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-hybrid-runbook-worker.md
Title: Azure Automation Hybrid Runbook Worker overview
description: Know about Hybrid Runbook Worker. How to install and run the runbooks on machines in your local datacenter or cloud provider. Previously updated : 03/21/2023 Last updated : 04/20/2023
For machines hosting the system Hybrid Runbook worker managed by Update Manageme
:::image type="content" source="./media/automation-hybrid-runbook-worker/system-hybrid-runbook-worker.png" alt-text="System Hybrid Runbook Worker technical diagram":::
-When you start a runbook on a user Hybrid Runbook Worker, you specify the group that it runs on. Each worker in the group polls Azure Automation to see if any jobs are available. If a job is available, the first worker to get the job takes it. The processing time of the jobs queue depends on the hybrid worker hardware profile and load. You can't specify a particular worker. Hybrid worker works on a polling mechanism (every 30 secs) and follows an order of first-come, first-serve. Depending on when a job was pushed, whichever hybrid worker pings the Automation service picks up the job. A single hybrid worker can generally pick up four jobs per ping (that is, every 30 seconds). If your rate of pushing jobs is higher than four per 30 seconds, then there's a high possibility another hybrid worker in the Hybrid Runbook Worker group picked up the job.
+A Hybrid Worker group with Hybrid Runbook Workers is designed for high availability and load balancing by allocating jobs across multiple workers. To run runbooks successfully, Hybrid Workers must be healthy and send a heartbeat. Hybrid Workers use a polling mechanism to pick up jobs. If none of the workers in the Hybrid Worker group has pinged the Automation service in the last 30 minutes, the group is considered to have no active workers. In this scenario, jobs are suspended after three retry attempts.
+
+When you start a runbook on a user Hybrid Runbook Worker, you specify the group it runs on; you can't specify a particular worker. Each active Hybrid Worker in the group polls for available jobs every 30 seconds and picks up jobs on a first-come, first-served basis. Depending on when a job was pushed, whichever Hybrid Worker in the group pings the Automation service first picks up the job. The processing time of the jobs queue also depends on the Hybrid Worker hardware profile and load.
+
+A single hybrid worker can generally pick up 4 jobs per ping (that is, every 30 seconds). If your rate of pushing jobs is higher than 4 per 30 seconds and no other Worker picks up the job, the job might get suspended with an error.
A Hybrid Runbook Worker doesn't have many of the [Azure sandbox](automation-runbook-execution.md#runbook-execution-environment) resource [limits](../azure-resource-manager/management/azure-subscription-service-limits.md#automation-limits) on disk space, memory, or network sockets. The limits on a hybrid worker are only related to the worker's own resources, and they aren't constrained by the [fair share](automation-runbook-execution.md#fair-share) time limit that Azure sandboxes have.
automation Automation Secure Asset Encryption https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-secure-asset-encryption.md
Previously updated : 07/27/2021 Last updated : 04/20/2023 # Encryption of secure assets in Azure Automation
-Secure assets in Azure Automation include credentials, certificates, connections, and encrypted variables. These assets are protected in Azure Automation using multiple levels of encryption. Based on the top-level key used for the encryption, there are two models for encryption:
+Azure Automation secures assets such as credentials, certificates, connections, and encrypted variables by using multiple levels of encryption, which helps enhance the security of these assets. Additionally, to ensure greater security and privacy for customer code, runbooks and DSC scripts are also encrypted. Encryption in Azure Automation follows two models, depending on the top-level key used for encryption:
- Using Microsoft-managed keys - Using keys that you manage + ## Microsoft-managed Keys By default, your Azure Automation account uses Microsoft-managed keys.
azure-arc Automated Integration Testing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/automated-integration-testing.md
At a high-level, the launcher performs the following sequence of steps:
12. Attempt to use the SAS token `LOGS_STORAGE_ACCOUNT_SAS` provided to create a new Storage Account container named based on `LOGS_STORAGE_CONTAINER`, in the **pre-existing** Storage Account `LOGS_STORAGE_ACCOUNT`. If Storage Account container already exists, use it. Upload all local test results and logs to this storage container as a tarball (see below). 13. Exit.
+## Tests performed per test suite
+
+There are approximately **375** unique integration tests available, across **27** test suites - each testing a separate functionality.
+
+| Suite # | Test suite name | Description of test |
+| - | | |
+| 1 | `ad-connector` | Tests the deployment and update of an Active Directory Connector (AD Connector). |
+| 2 | `billing` | Testing various Business Critical license types are reflected in resource table in controller, used for Billing upload. |
+| 3 | `ci-billing` | Similar as `billing`, but with more CPU/Memory permutations. |
+| 4 | `ci-sqlinstance` | Long running tests for multi-replica creation, updates, GP -> BC Update, Backup validation and SQL Server Agent. |
+| 5 | `controldb` | Tests Control database - SA secret check, system login verification, audit creation, and sanity checks for SQL build version. |
+| 6 | `dc-export` | Indirect Mode billing and usage upload. |
+| 7 | `direct-crud` | Creates a SQL instance using ARM calls, validates in both Kubernetes and ARM. |
+| 8 | `direct-fog` | Creates multiple SQL instances and creates a Failover Group between them using ARM calls. |
+| 9 | `direct-hydration` | Creates SQL Instance with Kubernetes API, validates presence in ARM. |
+| 10 | `direct-upload` | Validates billing upload in Direct Mode |
+| 11 | `kube-rbac` | Ensures Kubernetes Service Account permissions for Arc Data Services matches least-privilege expectations. |
+| 12 | `nonroot` | Ensures containers run as non-root user |
+| 13 | `postgres` | Completes various Postgres creation, scaling, backup/restore tests. |
+| 14 | `release-sanitychecks` | Sanity checks for month-to-month releases, such as SQL Server Build versions. |
+| 15 | `sqlinstance` | Shorter version of `ci-sqlinstance`, for fast validations. |
+| 16 | `sqlinstance-ad` | Tests creation of SQL Instances with Active Directory Connector. |
+| 17 | `sqlinstance-credentialrotation` | Tests automated Credential Rotation for both General Purpose and Business Critical. |
+| 18 | `sqlinstance-ha` | Various High Availability Stress tests, including pod reboots, forced failovers and suspensions. |
+| 19 | `sqlinstance-tde` | Various Transparent Data Encryption tests. |
+| 20 | `telemetry-elasticsearch` | Validates Log ingestion into Elasticsearch. |
+| 21 | `telemetry-grafana` | Validates Grafana is reachable. |
+| 22 | `telemetry-influxdb` | Validates Metric ingestion into InfluxDB. |
+| 23 | `telemetry-kafka` | Various tests for Kafka using SSL, single/multi-broker setup. |
+| 24 | `telemetry-monitorstack` | Tests Monitoring components, such as `Fluentbit` and `Collectd` are functional. |
+| 25 | `telemetry-telemetryrouter` | Tests Open Telemetry. |
+| 26 | `telemetry-webhook` | Tests Data Services Webhooks with valid and invalid calls. |
+| 27 | `upgrade-arcdata` | Upgrades a full suite of SQL Instances (GP, BC 2 replica, BC 3 replica, with Active Directory) and upgrades from last month's release to latest build. |
+
+As an example, for `sqlinstance-ha`, the following tests are performed:
+
+- `test_critical_configmaps_present`: Ensures the ConfigMaps and relevant fields are present for a SQL Instance.
+- `test_suspended_system_dbs_auto_heal_by_orchestrator`: Ensures that if `master` and `msdb` are suspended by any means (in this case, by the user), Orchestrator maintenance reconcile auto-heals them.
+- `test_suspended_user_db_does_not_auto_heal_by_orchestrator`: Ensures that if a user database is deliberately suspended by the user, Orchestrator maintenance reconcile does not auto-heal it.
+- `test_delete_active_orchestrator_twice_and_delete_primary_pod`: Deletes orchestrator pod multiple times, followed by the primary replica, and verifies all replicas are synchronized. Failover time expectations for 2 replica are relaxed.
+- `test_delete_primary_pod`: Deletes primary replica and verifies all replicas are synchronized. Failover time expectations for 2 replica are relaxed.
+- `test_delete_primary_and_orchestrator_pod`: Deletes primary replica and orchestrator pod and verifies all replicas are synchronized.
+- `test_delete_primary_and_controller`: Deletes primary replica and data controller pod and verifies primary endpoint is accessible and the new primary replica is synchronized. Failover time expectations for 2 replica are relaxed.
+- `test_delete_one_secondary_pod`: Deletes secondary replica and data controller pod and verifies all replicas are synchronized.
+- `test_delete_two_secondaries_pods`: Deletes secondary replicas and data controller pod and verifies all replicas are synchronized.
+- `test_delete_controller_orchestrator_secondary_replica_pods`:
+- `test_failaway`: Forces AG failover away from current primary, ensures the new primary is not the same as the old primary. Verifies all replicas are synchronized.
+- `test_update_while_rebooting_all_non_primary_replicas`: Tests Controller-driven updates are resilient with retries despite various turbulent circumstances.
+
+> [!NOTE]
+> Certain tests may require specific hardware, such as privileged access to Domain Controllers for `ad` tests for account and DNS entry creation, which may not be available in all environments looking to use the `arc-ci-launcher`.
+ ## Examining Test Results A sample storage container and file uploaded by the launcher:
azure-arc Tutorial Akv Secrets Provider https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/tutorial-akv-secrets-provider.md
Title: Use Azure Key Vault Secrets Provider extension to fetch secrets into Azure Arc-enabled Kubernetes clusters description: Learn how to set up the Azure Key Vault Provider for Secrets Store CSI Driver interface as an extension on Azure Arc enabled Kubernetes cluster Previously updated : 03/06/2023--- Last updated : 04/21/2023+ # Use the Azure Key Vault Secrets Provider extension to fetch secrets into Azure Arc-enabled Kubernetes clusters
Capabilities of the Azure Key Vault Secrets Provider extension include:
## Install the Azure Key Vault Secrets Provider extension on an Arc-enabled Kubernetes cluster
-You can install the Azure Key Vault Secrets Provider extension on your connected cluster in the Azure portal, by using Azure CLI, or by deploying ARM template.
+You can install the Azure Key Vault Secrets Provider extension on your connected cluster in the Azure portal, by using Azure CLI, or by deploying an ARM template.
> [!TIP] > If the cluster is behind an outbound proxy server, ensure that you connect it to Azure Arc using the [proxy configuration](quickstart-connect-cluster.md#connect-using-an-outbound-proxy-server) option before installing the extension.
You can install the Azure Key Vault Secrets Provider extension on your connected
az k8s-extension create --cluster-name $CLUSTER_NAME --resource-group $RESOURCE_GROUP --cluster-type connectedClusters --extension-type Microsoft.AzureKeyVaultSecretsProvider --name akvsecretsprovider ```
-You should see output similar to this example. Note that it may take several minutes before the secrets provider Helm chart is deployed to the cluster.
+You should see output similar to this example. It may take several minutes before the secrets provider Helm chart is deployed to the cluster.
```json {
You should see output similar to this example.
## Create or select an Azure Key Vault
-Next, specify the Azure Key Vault to use with your connected cluster. If you don't already have one, create a new Key Vault by using the following commands. Keep in mind that the name of your Key Vault must be globally unique.
+Next, specify the Azure Key Vault to use with your connected cluster. If you don't already have one, create a new Key Vault by using the following commands. Keep in mind that the name of your key vault must be globally unique.
Set the following environment variables:
export AZUREKEYVAULT_NAME=<AKV-name>
export AZUREKEYVAULT_LOCATION=<AKV-location> ```
-Next, run the following command
+Next, run the following command:
```azurecli az keyvault create -n $AZUREKEYVAULT_NAME -g $AKV_RESOURCE_GROUP -l $AZUREKEYVAULT_LOCATION
Currently, the Secrets Store CSI Driver on Arc-enabled clusters can be accessed
After the pod starts, the mounted content at the volume path specified in your deployment YAML is available.
-```Bash
+```bash
## show secrets held in secrets-store kubectl exec busybox-secrets-store-inline -- ls /mnt/secrets-store/
The following configuration settings are frequently used with the Azure Key Vaul
| Configuration Setting | Default | Description | | | -- | -- | | enableSecretRotation | false | Boolean type. If `true`, periodically updates the pod mount and Kubernetes Secret with the latest content from external secrets store |
-| rotationPollInterval | 2m | If `enableSecretRotation` is `true`, specifies the secret rotation poll interval duration. This duration can be adjusted based on how frequently the mounted contents for all pods and Kubernetes secrets need to be resynced to the latest. |
+| rotationPollInterval | 2m | If `enableSecretRotation` is `true`, specifies the secret rotation poll interval duration. This duration can be adjusted based on how frequently the mounted contents for all pods and Kubernetes secrets need to be resynced to the latest. |
| syncSecret.enabled | false | Boolean input. In some cases, you may want to create a Kubernetes Secret to mirror the mounted content. If `true`, `SecretProviderClass` allows the `secretObjects` field to define the desired state of the synced Kubernetes Secret objects. | These settings can be specified when the extension is installed by using the `az k8s-extension create` command:
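A minimal sketch is shown below, using the setting names from the preceding table; the `secrets-store-csi-driver.` prefix on the key names and the 3-minute interval are assumptions, so verify the exact key names for your extension version.

```bash
# Example: enable secret rotation with a 3-minute poll interval when installing the extension
az k8s-extension create --cluster-name $CLUSTER_NAME --resource-group $RESOURCE_GROUP \
  --cluster-type connectedClusters --extension-type Microsoft.AzureKeyVaultSecretsProvider \
  --name akvsecretsprovider \
  --configuration-settings secrets-store-csi-driver.enableSecretRotation=true secrets-store-csi-driver.rotationPollInterval=3m
```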
You can use other configuration settings as needed for your deployment. For exam
az k8s-extension create --cluster-name $CLUSTER_NAME --resource-group $RESOURCE_GROUP --cluster-type connectedClusters --extension-type Microsoft.AzureKeyVaultSecretsProvider --name akvsecretsprovider --configuration-settings linux.kubeletRootDir=/path/to/kubelet secrets-store-csi-driver.linux.kubeletRootDir=/path/to/kubelet ``` - ## Uninstall the Azure Key Vault Secrets Provider extension To uninstall the extension, run the following command:
az k8s-extension list --cluster-type connectedClusters --cluster-name $CLUSTER_N
If the extension was successfully removed, you won't see the Azure Key Vault Secrets Provider extension listed in the output. If you don't have any other extensions installed on your cluster, you'll see an empty array.
+If you no longer need it, be sure to delete the Kubernetes secret associated with the service principal by running the following command:
+
+```bash
+kubectl delete secret secrets-store-creds
+```
+ ## Reconciliation and troubleshooting The Azure Key Vault Secrets Provider extension is self-healing. If somebody tries to change or delete an extension component that was deployed when the extension was installed, that component will be reconciled to its original state. The only exceptions are for Custom Resource Definitions (CRDs). If CRDs are deleted, they won't be reconciled. To restore deleted CRDs, use the `az k8s-extension create` command again with the existing extension instance name.
azure-functions Functions Bindings Azure Sql Trigger https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-azure-sql-trigger.md
def main(changes):
## Attributes
-The [C# library](functions-dotnet-class-library.md) uses the [SqlTrigger](https://github.com/Azure/azure-functions-sql-extension/blob/main/src/TriggerBinding/SqlTriggerAttribute.cs) attribute to declare the SQL trigger on the function, which has the following properties:
+The [C# library](functions-dotnet-class-library.md) uses the [SqlTrigger](https://github.com/Azure/azure-functions-sql-extension/blob/release/trigger/src/TriggerBinding/SqlTriggerAttribute.cs) attribute to declare the SQL trigger on the function, which has the following properties:
| Attribute property |Description| |||
azure-monitor Data Collection Text Log https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/data-collection-text-log.md
Invoke-AzRestMethod -Path "/subscriptions/{subscription}/resourcegroups/{resourc
``` Press return to execute the code. You should see a 200 response, and details about the table you just created will show up. To validate that the table was created go to your workspace and select Tables on the left blade. You should see your table in the list.
+> [!NOTE]
+> The column names are case-sensitive. For example, `Rawdata` won't correctly collect the event data; the column must be named `RawData`.
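
As an illustrative sketch only (the table name, resource path, and API version below are placeholders rather than values from this article), the column definitions in the table-creation request must use the exact casing shown:

```azurecli
# Hypothetical custom table with a correctly cased RawData column.
az rest --method put \
  --url "https://management.azure.com/subscriptions/<subscription>/resourceGroups/<resource-group>/providers/Microsoft.OperationalInsights/workspaces/<workspace>/tables/MyTable_CL?api-version=2021-12-01-preview" \
  --body '{"properties": {"schema": {"name": "MyTable_CL", "columns": [{"name": "TimeGenerated", "type": "datetime"}, {"name": "RawData", "type": "string"}]}}}'
```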
## Create data collection rule to collect text logs
azure-monitor Java Get Started Supplemental https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-get-started-supplemental.md
+
+ Title: Application Insights with containers
+description: This article shows you how to set up Application Insights
+ Last updated : 04/06/2023
+ms.devlang: java
++++
+# Get Started (Supplemental)
+
+In the following sections, you'll find information on how to enable Java auto-instrumentation for specific technical environments.
+
+## Azure App Service
+
+For more information, see [Application monitoring for Azure App Service and Java](./azure-web-apps-java.md).
+
+## Azure Functions
+
+For more information, see [Monitoring Azure Functions with Azure Monitor Application Insights](./monitor-functions.md#distributed-tracing-for-java-applications-preview).
+
+## Containers
+
+### Docker entry point
+
+If you're using the *exec* form, add the parameter `-javaagent:"path/to/applicationinsights-agent-3.4.11.jar"` to the parameter list somewhere before the `"-jar"` parameter, for example:
+
+```
+ENTRYPOINT ["java", "-javaagent:path/to/applicationinsights-agent-3.4.11.jar", "-jar", "<myapp.jar>"]
+```
+
+If you're using the *shell* form, add the JVM arg `-javaagent:"path/to/applicationinsights-agent-3.4.11.jar"` somewhere before `-jar`, for example:
+
+```
+ENTRYPOINT java -javaagent:"path/to/applicationinsights-agent-3.4.11.jar" -jar <myapp.jar>
+```
++
+### Docker file
+
+A Dockerfile example:
+
+```
+FROM ...
+
+COPY target/*.jar app.jar
+
+COPY agent/applicationinsights-agent-3.4.11.jar applicationinsights-agent-3.4.11.jar
+
+COPY agent/applicationinsights.json applicationinsights.json
+
+ENV APPLICATIONINSIGHTS_CONNECTION_STRING="CONNECTION-STRING"
+
+ENTRYPOINT ["java", "-javaagent:applicationinsights-agent-3.4.11.jar", "-jar", "app.jar"]
+```
+
+### Third-party container images
+
+If you're using a third-party container image that you can't modify, mount the Application Insights Java agent jar into the container from outside. Then set the following environment variable for the container:
+`JAVA_TOOL_OPTIONS=-javaagent:/path/to/applicationinsights-agent.jar`
+
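For example, a hedged sketch with Docker (the host path, image name, and connection string are placeholders):

```bash
# Mount a host directory containing the agent jar (read-only) and point the JVM
# at it through JAVA_TOOL_OPTIONS, which the JVM appends to its startup arguments.
docker run -d \
  -v /opt/appinsights:/appinsights:ro \
  -e JAVA_TOOL_OPTIONS="-javaagent:/appinsights/applicationinsights-agent.jar" \
  -e APPLICATIONINSIGHTS_CONNECTION_STRING="<your-connection-string>" \
  example.azurecr.io/third-party-app:latest
```
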
+## Spring Boot
+
+For more information, see [Using Azure Monitor Application Insights with Spring Boot](./java-spring-boot.md).
+
+## Java Application servers
+
+### Tomcat 8 (Linux)
+
+#### Tomcat installed via apt-get or yum
+
+If you installed Tomcat via `apt-get` or `yum`, you should have a file `/etc/tomcat8/tomcat8.conf`. Add this line to the end of that file:
+
+```
+JAVA_OPTS="$JAVA_OPTS -javaagent:path/to/applicationinsights-agent-3.4.11.jar"
+```
+
+#### Tomcat installed via download and unzip
+
+If you installed Tomcat via download and unzip from [https://tomcat.apache.org](https://tomcat.apache.org), you should have a file `<tomcat>/bin/catalina.sh`. Create a new file in the same directory named `<tomcat>/bin/setenv.sh` with the following content:
+
+```
+CATALINA_OPTS="$CATALINA_OPTS -javaagent:path/to/applicationinsights-agent-3.4.11.jar"
+```
+
+If the file `<tomcat>/bin/setenv.sh` already exists, modify that file and add `-javaagent:path/to/applicationinsights-agent-3.4.11.jar` to `CATALINA_OPTS`.
+
+### Tomcat 8 (Windows)
+
+#### Run Tomcat from the command line
+
+Locate the file `<tomcat>/bin/catalina.bat`. Create a new file in the same directory named `<tomcat>/bin/setenv.bat` with the following content:
+
+```
+set CATALINA_OPTS=%CATALINA_OPTS% -javaagent:path/to/applicationinsights-agent-3.4.11.jar
+```
+
+Quotes aren't necessary, but if you want to include them, the proper placement is:
+
+```
+set "CATALINA_OPTS=%CATALINA_OPTS% -javaagent:path/to/applicationinsights-agent-3.4.11.jar"
+```
+
+If the file `<tomcat>/bin/setenv.bat` already exists, modify that file and add `-javaagent:path/to/applicationinsights-agent-3.4.11.jar` to `CATALINA_OPTS`.
+
+#### Run Tomcat as a Windows service
+
+Locate the file `<tomcat>/bin/tomcat8w.exe`. Run that executable and add `-javaagent:path/to/applicationinsights-agent-3.4.11.jar` to the `Java Options` under the `Java` tab.
+
+### JBoss EAP 7
+
+#### Standalone server
+
+Add `-javaagent:path/to/applicationinsights-agent-3.4.11.jar` to the existing `JAVA_OPTS` environment variable in the file `JBOSS_HOME/bin/standalone.conf` (Linux) or `JBOSS_HOME/bin/standalone.conf.bat` (Windows):
+
+```java
+ ...
+ JAVA_OPTS="-javaagent:path/to/applicationinsights-agent-3.4.11.jar -Xms1303m -Xmx1303m ..."
+ ...
+```
+
+#### Domain server
+
+Add `-javaagent:path/to/applicationinsights-agent-3.4.11.jar` to the existing `jvm-options` in `JBOSS_HOME/domain/configuration/host.xml`:
+
+```xml
+...
+<jvms>
+ <jvm name="default">
+ <heap size="64m" max-size="256m"/>
+ <jvm-options>
+ <option value="-server"/>
+ <!--Add Java agent jar file here-->
+ <option value="-javaagent:path/to/applicationinsights-agent-3.4.11.jar"/>
+ <option value="-XX:MetaspaceSize=96m"/>
+ <option value="-XX:MaxMetaspaceSize=256m"/>
+ </jvm-options>
+ </jvm>
+</jvms>
+...
+```
+
+If you're running multiple managed servers on a single host, you'll need to add `applicationinsights.agent.id` to the `system-properties` for each `server`:
+
+```xml
+...
+<servers>
+ <server name="server-one" group="main-server-group">
+ <!--Edit system properties for server-one-->
+ <system-properties>
+ <property name="applicationinsights.agent.id" value="..."/>
+ </system-properties>
+ </server>
+ <server name="server-two" group="main-server-group">
+ <socket-bindings port-offset="150"/>
+ <!--Edit system properties for server-two-->
+ <system-properties>
+ <property name="applicationinsights.agent.id" value="..."/>
+ </system-properties>
+ </server>
+</servers>
+...
+```
+
+The specified `applicationinsights.agent.id` value must be unique. It's used to create a subdirectory under the Application Insights directory. Each JVM process needs its own local Application Insights config and local Application Insights log file. Also, if reporting to the central collector, the `applicationinsights.properties` file is shared by the multiple managed servers, so the specified `applicationinsights.agent.id` is needed to override the `agent.id` setting in that shared file. The `applicationinsights.agent.rollup.id` can be similarly specified in the server's `system-properties` if you need to override the `agent.rollup.id` setting per managed server.
+
+### Jetty 9
+
+Add these lines to `start.ini`:
+
+```
+--exec
+-javaagent:path/to/applicationinsights-agent-3.4.11.jar
+```
+
+### Payara 5
+
+Add `-javaagent:path/to/applicationinsights-agent-3.4.11.jar` to the existing `jvm-options` in `glassfish/domains/domain1/config/domain.xml`:
+
+```xml
+...
+<java-config ...>
+ <!--Edit the JVM options here-->
+ <jvm-options>
+    -javaagent:path/to/applicationinsights-agent-3.4.11.jar
+ </jvm-options>
+ ...
+</java-config>
+...
+```
+
+### WebSphere 8
+
+1. Open Management Console.
+1. Go to **Servers** > **WebSphere application servers** > **Application servers**. Choose the appropriate application servers and select:
+
+ ```
+ Java and Process Management > Process definition > Java Virtual Machine
+ ```
+
+1. In `Generic JVM arguments`, add the following JVM argument:
+
+ ```
+ -javaagent:path/to/applicationinsights-agent-3.4.11.jar
+ ```
+
+1. Save and restart the application server.
+
+### OpenLiberty 18
+
+Create a new file `jvm.options` in the server directory (for example, `<openliberty>/usr/servers/defaultServer`), and add this line:
+
+```
+-javaagent:path/to/applicationinsights-agent-3.4.11.jar
+```
+
+### Others
+
+See your application server documentation on how to add JVM args.
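
If you can't edit the server's startup scripts at all, one generic, non-authoritative fallback is the `JAVA_TOOL_OPTIONS` environment variable, which most JVMs read at startup (the paths and the startup script name are placeholders):

```bash
# The JVM appends JAVA_TOOL_OPTIONS to its arguments at startup.
export JAVA_TOOL_OPTIONS="-javaagent:/path/to/applicationinsights-agent-3.4.11.jar"
# Then start the application server as usual, for example:
./bin/startup.sh
```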
+
azure-monitor Javascript Feature Extensions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/javascript-feature-extensions.md
In this article, we cover the Click Analytics plug-in that automatically tracks
## Get started
-Users can set up the Click Analytics Autocollection plug-in via npm.
+Users can set up the Click Analytics Auto-Collection plug-in via snippet or npm.
-### npm setup
-
-Install the npm package:
-
-```bash
-npm install --save @microsoft/applicationinsights-clickanalytics-js @microsoft/applicationinsights-web
-```
-
-```js
-
-import { ApplicationInsights } from '@microsoft/applicationinsights-web';
-import { ClickAnalyticsPlugin } from '@microsoft/applicationinsights-clickanalytics-js';
-
-const clickPluginInstance = new ClickAnalyticsPlugin();
-// Click Analytics configuration
-const clickPluginConfig = {
- autoCapture: true
-};
-// Application Insights Configuration
-const configObj = {
- connectionString: "YOUR CONNECTION STRING",
- extensions: [clickPluginInstance],
- extensionConfig: {
- [clickPluginInstance.identifier]: clickPluginConfig
- },
-};
-
-const appInsights = new ApplicationInsights({ config: configObj });
-appInsights.loadAppInsights();
-```
-
-## Snippet setup
+### Snippet setup
Ignore this setup if you use the npm setup.
</script> ```
+### npm setup
+
+Install the npm package:
+
+```bash
+npm install --save @microsoft/applicationinsights-clickanalytics-js @microsoft/applicationinsights-web
+```
+
+```js
+
+import { ApplicationInsights } from '@microsoft/applicationinsights-web';
+import { ClickAnalyticsPlugin } from '@microsoft/applicationinsights-clickanalytics-js';
+
+const clickPluginInstance = new ClickAnalyticsPlugin();
+// Click Analytics configuration
+const clickPluginConfig = {
+ autoCapture: true
+};
+// Application Insights Configuration
+const configObj = {
+ connectionString: "YOUR CONNECTION STRING",
+ extensions: [clickPluginInstance],
+ extensionConfig: {
+ [clickPluginInstance.identifier]: clickPluginConfig
+ },
+};
+
+const appInsights = new ApplicationInsights({ config: configObj });
+appInsights.loadAppInsights();
+```
+ ## Use the plug-in 1. Telemetry data generated from the click events are stored as `customEvents` in the Application Insights section of the Azure portal.
The following key properties are captured by default when the plug-in is enabled
| | |--| | timeToAction | Time taken in milliseconds for the user to click the element since the initial page load. | 87407 |
-## Configuration
+## Advanced configuration
| Name | Type | Default | Description | | | --| --| - | | auto-Capture | Boolean | True | Automatic capture configuration. | | callback | [IValueCallback](#ivaluecallback) | Null | Callbacks configuration. |
-| pageTags | String | Null | Page tags. |
+| pageTags | Object | Null | Page tags. |
| dataTags | [ICustomDataTags](#icustomdatatags)| Null | Custom Data Tags provided to override default tags used to capture click data. | | urlCollectHash | Boolean | False | Enables the logging of values after a "#" character of the URL. | | urlCollectQuery | Boolean | False | Enables the logging of the query string of the URL. |
var appInsights = new Microsoft.ApplicationInsights.ApplicationInsights({
appInsights.loadAppInsights(); ```
-## Enable correlation
-
-Correlation generates and sends data that enables distributed tracing and powers the [application map](../app/app-map.md), [end-to-end transaction view](../app/app-map.md#go-to-details), and other diagnostic tools.
-
-JavaScript correlation is turned off by default to minimize the telemetry we send by default. To enable correlation, see the [JavaScript client-side correlation documentation](./javascript.md#enable-distributed-tracing).
- ## Sample app [Simple web app with the Click Analytics Autocollection Plug-in enabled](https://go.microsoft.com/fwlink/?linkid=2152871)
azure-monitor Opentelemetry Enable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/opentelemetry-enable.md
Java auto-instrumentation is enabled through configuration changes; no code chan
Point the JVM to the jar file by adding `-javaagent:"path/to/applicationinsights-agent-3.4.11.jar"` to your application's JVM args. > [!TIP]
-> For help with configuring your application's JVM args, see [Tips for updating your JVM args](./java-standalone-arguments.md).
-
-If you develop a Spring Boot application, you can optionally replace the JVM argument by a programmatic configuration. For more information, see [Using Azure Monitor Application Insights with Spring Boot](./java-spring-boot.md).
+> For scenario-specific guidance, see [Get Started (Supplemental)](./java-get-started-supplemental.md).
+
+> [!TIP]
+> If you develop a Spring Boot application, you can optionally replace the JVM argument by a programmatic configuration. For more information, see [Using Azure Monitor Application Insights with Spring Boot](./java-spring-boot.md).
##### [Node.js](#tab/nodejs)
azure-monitor Container Insights Reports https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-reports.md
To create a custom workbook based on any of these workbooks, select the **View W
- IPs assigned to a pod. >[!NOTE]
-> By default 16 IP's are allocated from subnet to each node. This cannot be modified to be less than 16. For instructions on how to enable subnet IP usage metrics, see [Monitor IP Subnet Usage](../../aks/configure-azure-cni.md#monitor-ip-subnet-usage).
+> By default, 16 IPs are allocated from the subnet to each node. This number can't be modified to be less than 16. For instructions on how to enable subnet IP usage metrics, see [Monitor IP Subnet Usage](../../aks/configure-azure-cni-dynamic-ip-allocation.md#monitor-ip-subnet-usage).
## Resource Monitoring workbooks
azure-monitor App Expression https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/app-expression.md
description: The app expression is used in an Azure Monitor log query to retriev
Previously updated : 08/06/2022 Last updated : 04/20/2023
The `app` expression is used in an Azure Monitor query to retrieve data from a s
| Identifier | Description | Example |:|:|:|
-| Resource Name | Human readable name of the app (Also known as "component name") | app("fabrikamapp") |
-| Qualified Name | Full name of the app in the form: "subscriptionName/resourceGroup/componentName" | app('AI-Prototype/Fabrikam/fabrikamapp') |
-| ID | GUID of the app | app("988ba129-363e-4415-8fe7-8cbab5447518") |
-| Azure Resource ID | Identifier for the Azure resource |app("/subscriptions/7293b69-db12-44fc-9a66-9c2005c3051d/resourcegroups/Fabrikam/providers/microsoft.insights/components/fabrikamapp") |
+| ID | GUID of the app | app("00000000-0000-0000-0000-000000000000") |
+| Azure Resource ID | Identifier for the Azure resource |app("/subscriptions/00000000-0000-0000-0000-000000000000/resourcegroups/Fabrikam/providers/microsoft.insights/components/fabrikamapp") |
## Notes * You must have read access to the application.
-* Identifying an application by its name assumes that it is unique across all accessible subscriptions. If you have multiple applications with the specified name, the query will fail because of the ambiguity. In this case you must use one of the other identifiers.
+* Identifying an application by its ID or Azure Resource ID is strongly recommended because these identifiers are unique, remove ambiguity, and make queries more performant.
* Use the related expression [workspace](../logs/workspace-expression.md) to query across Log Analytics workspaces. ## Examples ```Kusto
-app("fabrikamapp").requests | count
+app("00000000-0000-0000-0000-000000000000").requests | count
``` ```Kusto
-app("AI-Prototype/Fabrikam/fabrikamapp").requests | count
-```
-```Kusto
-app("b438b4f6-912a-46d5-9cb1-b44069212ab4").requests | count
-```
-```Kusto
-app("/subscriptions/7293b69-db12-44fc-9a66-9c2005c3051d/resourcegroups/Fabrikam/providers/microsoft.insights/components/fabrikamapp").requests | count
+app("/subscriptions/00000000-0000-0000-0000-000000000000/resourcegroups/Fabrikam/providers/microsoft.insights/components/fabrikamapp").requests | count
``` ```Kusto union
-(workspace("myworkspace").Heartbeat | where Computer contains "Con"),
-(app("myapplication").requests | where cloud_RoleInstance contains "Con")
+(workspace("00000000-0000-0000-0000-000000000000").Heartbeat | where Computer == "myComputer"),
+(app("00000000-0000-0000-0000-000000000000").requests | where cloud_RoleInstance == "myRoleInstance")
| count ``` ```Kusto union
-(workspace("myworkspace").Heartbeat), (app("myapplication").requests)
-| where TimeGenerated between(todatetime("2018-02-08 15:00:00") .. todatetime("2018-12-08 15:05:00"))
+(workspace("00000000-0000-0000-0000-000000000000").Heartbeat), (app("00000000-0000-0000-0000-000000000000").requests)
+| where TimeGenerated between(todatetime("2023-03-08 15:00:00") .. todatetime("2023-04-08 15:05:00"))
``` ## Next steps
azure-monitor Basic Logs Configure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/basic-logs-configure.md
Previously updated : 11/09/2022 Last updated : 04/17/2023 # Set a table's log data plan to Basic or Analytics
Configure a table for Basic logs if:
| Container Insights | [ContainerLogV2](/azure/azure-monitor/reference/tables/containerlogv2) | | Communication Services | [ACSCallAutomationIncomingOperations](/azure/azure-monitor/reference/tables/ACSCallAutomationIncomingOperations)<br>[ACSCallRecordingSummary](/azure/azure-monitor/reference/tables/acscallrecordingsummary)<br>[ACSRoomsIncomingOperations](/azure/azure-monitor/reference/tables/acsroomsincomingoperations) | | Confidential Ledgers | [CCFApplicationLogs](/azure/azure-monitor/reference/tables/CCFApplicationLogs) |
+ | Dedicated SQL Pool | [SynapseSqlPoolSqlRequests](/azure/azure-monitor/reference/tables/synapsesqlpoolsqlrequests)<br>[SynapseSqlPoolRequestSteps](/azure/azure-monitor/reference/tables/synapsesqlpoolrequeststeps)<br>[SynapseSqlPoolExecRequests](/azure/azure-monitor/reference/tables/synapsesqlpoolexecrequests)<br>[SynapseSqlPoolDmsWorkers](/azure/azure-monitor/reference/tables/synapsesqlpooldmsworkers)<br>[SynapseSqlPoolWaits](/azure/azure-monitor/reference/tables/synapsesqlpoolwaits) |
| Dev Center | [DevCenterDiagnosticLogs](/azure/azure-monitor/reference/tables/DevCenterDiagnosticLogs) | | Firewalls | [AZFWFlowTrace](/azure/azure-monitor/reference/tables/AZFWFlowTrace) | | Health Data | [AHDSMedTechDiagnosticLogs](/azure/azure-monitor/reference/tables/AHDSMedTechDiagnosticLogs) |
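
Once a table supports Basic logs, a hedged sketch of switching its plan with the Azure CLI (the workspace and resource group names are placeholders, and the `--plan` parameter is assumed to be available in your CLI version):

```azurecli
az monitor log-analytics workspace table update \
  --resource-group <resource-group> \
  --workspace-name <workspace-name> \
  --name ContainerLogV2 \
  --plan Basic
```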
azure-monitor Cross Workspace Query https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/cross-workspace-query.md
You can identify a workspace in one of several ways:
### Identify an application The following examples return a summarized count of requests made against an app named *fabrikamapp* in Application Insights.
-You can identify an application in Application Insights with the `app(Identifier)` expression. The `Identifier` argument specifies the app by using one of the following names or IDs:
+You can identify an application in Application Insights with the `app(Identifier)` expression. The `Identifier` argument specifies the app by using one of the following IDs:
* **ID**: This ID is the app GUID of the application.
azure-monitor Query Optimization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/query-optimization.md
Optimized queries will:
- Run faster and reduce overall duration of the query execution. - Have smaller chance of being throttled or rejected.
-Pay particular attention to queries that are used for recurrent and bursty usage, such as dashboards, alerts, Azure Logic Apps, and Power BI. The impact of an ineffective query in these cases is substantial.
+Pay particular attention to queries that are used for recurrent and simultaneous usage, such as dashboards, alerts, Azure Logic Apps, and Power BI. The impact of an ineffective query in these cases is substantial.
Here's a detailed video walkthrough on optimizing queries.
A query that spans more than five workspaces is considered a query that consumes
> [!IMPORTANT] > - In some multi-workspace scenarios, the CPU and data measurements won't be accurate and will represent the measurement of only a few of the workspaces.
-> - Cross workspace queries having an explicit identifier: workspace ID, or workspace Resource Manager resource ID, consume less resources and are more performant. See [Create a log query across multiple workspaces](./cross-workspace-query.md#identify-workspace-resources)
+> - Cross workspace queries having an explicit identifier: workspace ID, or workspace Azure Resource ID, consume less resources and are more performant. See [Create a log query across multiple workspaces](./cross-workspace-query.md#identify-workspace-resources)
## Parallelism Azure Monitor Logs uses large clusters of Azure Data Explorer to run queries. These clusters vary in scale and potentially get up to dozens of compute nodes. The system automatically scales the clusters according to workspace placement logic and capacity.
azure-monitor Save Query https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/save-query.md
Title: Save a query in Azure Monitor Log Analytics (preview)
+ Title: Save a query in Azure Monitor Log Analytics
description: This article describes how to save a query in Log Analytics.
Last updated 06/22/2022
-# Save a query in Azure Monitor Log Analytics (preview)
+# Save a query in Azure Monitor Log Analytics
[Log queries](log-query-overview.md) are requests in Azure Monitor that you can use to process and retrieve data in a Log Analytics workspace. Saving a log query allows you to: - Use the query in all Log Analytics contexts, including workspace and resource centric.
azure-monitor Workspace Expression https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/workspace-expression.md
description: The workspace expression is used in an Azure Monitor log query to r
Previously updated : 08/06/2022 Last updated : 04/20/2023
The `workspace` expression is used in an Azure Monitor query to retrieve data fr
| Identifier | Description | Example |:|:|:|
-| Resource Name | Human readable name of the workspace (also known as "component name") | workspace("contosoretail") |
-| Qualified Name | Full name of the workspace in the form: "subscriptionName/resourceGroup/componentName" | workspace('Contoso/ContosoResource/ContosoWorkspace') |
-| ID | GUID of the workspace | workspace("b438b3f6-912a-46d5-9db1-b42069242ab4") |
-| Azure Resource ID | Identifier for the Azure resource | workspace("/subscriptions/e4227-645-44e-9c67-3b84b5982/resourcegroups/ContosoAzureHQ/providers/Microsoft.OperationalInsights/workspaces/contosoretail") |
+| ID | GUID of the workspace | workspace("00000000-0000-0000-0000-000000000000") |
+| Azure Resource ID | Identifier for the Azure resource | workspace("/subscriptions/00000000-0000-0000-0000-000000000000/resourcegroups/Contoso/providers/Microsoft.OperationalInsights/workspaces/contosoretail") |
## Notes * You must have read access to the workspace.
+* Identifying a workspace by its ID or Azure Resource ID is strongly recommended because these identifiers are unique, remove ambiguity, and make queries more performant.
* A related expression is `app` that allows you to query across Application Insights applications. ## Examples ```Kusto
-workspace("contosoretail").Update | count
+workspace("00000000-0000-0000-0000-000000000000").Update | count
``` ```Kusto
-workspace("b438b4f6-912a-46d5-9cb1-b44069212ab4").Update | count
-```
-```Kusto
-workspace("/subscriptions/e427267-5645-4c4e-9c67-3b84b59a6982/resourcegroups/ContosoAzureHQ/providers/Microsoft.OperationalInsights/workspaces/contosoretail").Event | count
+workspace("/subscriptions/00000000-0000-0000-0000-000000000000/resourcegroups/Contoso/providers/Microsoft.OperationalInsights/workspaces/contosoretail").Event | count
``` ```Kusto union
-(workspace("myworkspace").Heartbeat | where Computer contains "Con"),
-(app("myapplication").requests | where cloud_RoleInstance contains "Con")
+(workspace("00000000-0000-0000-0000-000000000000").Heartbeat | where Computer == "myComputer"),
+(app("00000000-0000-0000-0000-000000000000").requests | where cloud_RoleInstance == "myRoleInstance")
| count ``` ```Kusto union
-(workspace("myworkspace").Heartbeat), (app("myapplication").requests)
-| where TimeGenerated between(todatetime("2018-02-08 15:00:00") .. todatetime("2018-12-08 15:05:00"))
+(workspace("00000000-0000-0000-0000-000000000000").Heartbeat), (app("00000000-0000-0000-0000-000000000000").requests) | where TimeGenerated between(todatetime("2023-03-08 15:00:00") .. todatetime("2023-04-08 15:05:00"))
``` ## Next steps
azure-monitor Profiler Bring Your Own Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/profiler/profiler-bring-your-own-storage.md
To configure BYOS for code-level diagnostics (Profiler/Debugger), there are thre
For general Profiler troubleshooting, refer to the [Profiler Troubleshoot documentation](profiler-troubleshooting.md).
-For general Snapshot Debugger troubleshooting, refer to the [Snapshot Debugger Troubleshoot documentation](/troubleshoot/azure/azure-monitor/app-insights/snapshot-debugger-troubleshoot.md).
+For general Snapshot Debugger troubleshooting, refer to the [Snapshot Debugger Troubleshoot documentation](/troubleshoot/azure/azure-monitor/app-insights/snapshot-debugger-troubleshoot).
## Frequently asked questions
azure-netapp-files Create Active Directory Connections https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/create-active-directory-connections.md
Several features of Azure NetApp Files require that you have an Active Directory
For more information, refer to [Network security: Configure encryption types allowed for Kerberos](/windows/security/threat-protection/security-policy-settings/network-security-configure-encryption-types-allowed-for-kerberos) or [Windows Configurations for Kerberos Supported Encryption Types](/archive/blogs/openspecification/windows-configurations-for-kerberos-supported-encryption-type)
+* LDAP queries take effect only in the domain specified in the Active Directory connections (the **AD DNS Domain Name** field). This behavior applies to NFS, SMB, and dual-protocol volumes.
+ ## Create an Active Directory connection 1. From your NetApp account, select **Active Directory connections**, then select **Join**.
azure-netapp-files Faq Data Migration Protection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/faq-data-migration-protection.md
You can also use a wide array of free tools to copy data. For NFS, you can use w
The requirements for replicating an Azure NetApp Files volume to another Azure region are as follows: - Ensure Azure NetApp Files is available in the target Azure region.-- Validate network connectivity between VNets in each region. Currently, global peering between VNets is not supported. You can establish connectivity between VNets by linking with an ExpressRoute circuit or using a S2S VPN connection.
+- Validate network connectivity between the source and the Azure NetApp Files target volume IP address. Data transfer between on premises and Azure NetApp Files volumes, or across Azure regions, is supported via [site-to-site VPN and ExpressRoute](azure-netapp-files-network-topologies.md#hybrid-environments), [Global VNet peering](azure-netapp-files-network-topologies.md#global-or-cross-region-vnet-peering), or [Azure Virtual WAN connections](configure-virtual-wan.md).
- Create the target Azure NetApp Files volume. - Transfer the source data to the target volume by using your preferred file copy tool.
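
For instance, a hedged sketch of an NFS copy with `rsync` (the mount points are placeholders, and rsync is only one of several suitable tools):

```bash
# Copy the source export to the Azure NetApp Files volume, preserving
# permissions and timestamps (-a), with verbose output and progress reporting.
rsync -av --progress /mnt/source-volume/ /mnt/anf-target-volume/
```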
azure-resource-manager Bicep Functions Scope https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/bicep-functions-scope.md
A common use of the resourceGroup function is to create resources in the same lo
param location string = resourceGroup().location ```
-You can also use the resourceGroup function to apply tags from the resource group to a resource. For more information, see [Apply tags from resource group](../management/tag-resources.md#apply-tags-from-resource-group).
+You can also use the resourceGroup function to apply tags from the resource group to a resource. For more information, see [Apply tags from resource group](../management/tag-resources-bicep.md#apply-tags-from-resource-group).
## subscription
azure-resource-manager Resource Declaration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/resource-declaration.md
az provider show \
## Tags
-You can apply tags to a resource during deployment. Tags help you logically organize your deployed resources. For examples of the different ways you can specify the tags, see [ARM template tags](../management/tag-resources.md#arm-templates).
+You can apply tags to a resource during deployment. Tags help you logically organize your deployed resources. For examples of the different ways you can specify the tags, see [ARM template tags](../management/tag-resources-bicep.md).
## Managed identities for Azure resources
azure-resource-manager Deploy Service Catalog Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/managed-applications/deploy-service-catalog-quickstart.md
mrgpath="/subscriptions/$subid/resourceGroups/$mrgname"
The `mrgprefix` and `mrgtimestamp` variables are concatenated to create a managed resource group name like _mrg-sampleManagedApplication-20230310100148_ that's stored in the `mrgname` variable. The name's format:`mrg-{definitionName}-{dateTime}` is the same format as the portal's default value. The `mrgname` and `subid` variable's are concatenated to create the `mrgpath` variable value that creates the managed resource group during the deployment.
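
A sketch of how those variables might be assembled in Bash (the timestamp format is an assumption chosen to match the example name above):

```azurecli
mrgprefix="mrg-sampleManagedApplication-"
mrgtimestamp=$(date +%Y%m%d%H%M%S)
mrgname="${mrgprefix}${mrgtimestamp}"
mrgpath="/subscriptions/$subid/resourceGroups/$mrgname"
```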
-You need to provide several parameters to the deployment command for the managed application. You can use a JSON formatted string or create a JSON file. In this example, we use a JSON formatted string. The PowerShell escape character for the quote marks is the backslash (`\`) character. The backslash is also used for line continuation so that commands can use multiple lines.
+You need to provide several parameters to the deployment command for the managed application. You can use a JSON formatted string or create a JSON file. In this example, we use a JSON formatted string. In Bash, the escape character for the quote marks is the backslash (`\`) character. The backslash is also used for line continuation so that commands can use multiple lines.
The JSON formatted string's syntax is as follows:
The JSON formatted string's syntax is as follows:
"{ \"parameterName\": {\"value\":\"parameterValue\"}, \"parameterName\": {\"value\":\"parameterValue\"} }" ```
-For readability, the completed JSON string uses the backtick for line continuation. The values are stored in the `params` variable that's used in the deployment command. The parameters in the JSON string are required to deploy the managed resources.
+For readability, the completed JSON string uses the backslash for line continuation. The values are stored in the `params` variable that's used in the deployment command. The parameters in the JSON string are required to deploy the managed resources.
```azurecli params="{ \"appServicePlanName\": {\"value\":\"demoAppServicePlan\"}, \
azure-resource-manager Delete Resource Group https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/delete-resource-group.md
Title: Delete resource group and resources
description: Describes how to delete resource groups and resources. It describes how Azure Resource Manager orders the deletion of resources when a deleting a resource group. It describes the response codes and how Resource Manager handles them to determine if the deletion succeeded. Last updated 04/10/2023-+ # Azure Resource Manager resource group and resource deletion
azure-resource-manager Lock Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/lock-resources.md
Title: Protect your Azure resources with a lock
description: You can safeguard Azure resources from updates or deletions by locking all users and roles. Last updated 04/06/2023-+ # Lock your resources to protect your infrastructure
azure-resource-manager Manage Resource Groups Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/manage-resource-groups-cli.md
For more information, see [Lock resources with Azure Resource Manager](lock-reso
## Tag resource groups
-You can apply tags to resource groups and resources to logically organize your assets. For information, see [Using tags to organize your Azure resources](tag-resources.md#azure-cli).
+You can apply tags to resource groups and resources to logically organize your assets. For information, see [Using tags to organize your Azure resources](tag-resources-cli.md).
## Export resource groups to templates
azure-resource-manager Manage Resource Groups Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/manage-resource-groups-portal.md
For more information, see [Lock resources to prevent unexpected changes](lock-re
## Tag resource groups
-You can apply tags to resource groups and resources to logically organize your assets. For information, see [Using tags to organize your Azure resources](tag-resources.md#portal).
+You can apply tags to resource groups and resources to logically organize your assets. For information, see [Using tags to organize your Azure resources](tag-resources-portal.md).
## Export resource groups to templates
azure-resource-manager Manage Resource Groups Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/manage-resource-groups-powershell.md
For more information, see [Lock resources with Azure Resource Manager](lock-reso
## Tag resource groups
-You can apply tags to resource groups and resources to logically organize your assets. For information, see [Using tags to organize your Azure resources](tag-resources.md#powershell).
+You can apply tags to resource groups and resources to logically organize your assets. For information, see [Using tags to organize your Azure resources](tag-resources-powershell.md).
## Export resource groups to templates
azure-resource-manager Manage Resource Groups Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/manage-resource-groups-python.md
Title: Manage resource groups - Python
description: Use Python to manage your resource groups through Azure Resource Manager. Shows how to create, list, and delete resource groups. -+ Last updated 02/27/2023
azure-resource-manager Manage Resources Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/manage-resources-cli.md
For more information, see [Lock resources with Azure Resource Manager](lock-reso
## Tag resources
-Tagging helps organizing your resource group and resources logically. For information, see [Using tags to organize your Azure resources](tag-resources.md#azure-cli).
+Tagging helps organizing your resource group and resources logically. For information, see [Using tags to organize your Azure resources](tag-resources-cli.md).
## Manage access to resources
azure-resource-manager Manage Resources Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/manage-resources-portal.md
Tagging helps organizing your resource group and resources logically.
![tag azure resource](./media/manage-resources-portal/manage-azure-resources-portal-tag-resource.png) 3. Specify the tag properties, and then select **Save**.
-For information, see [Using tags to organize your Azure resources](tag-resources.md#portal).
+For information, see [Using tags to organize your Azure resources](tag-resources-portal.md).
## Monitor resources
azure-resource-manager Manage Resources Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/manage-resources-powershell.md
For more information, see [Lock resources with Azure Resource Manager](lock-reso
## Tag resources
-Tagging helps organizing your resource group and resources logically. For information, see [Using tags to organize your Azure resources](tag-resources.md#powershell).
+Tagging helps organizing your resource group and resources logically. For information, see [Using tags to organize your Azure resources](tag-resources-powershell.md).
## Manage access to resources
azure-resource-manager Tag Resources Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/tag-resources-bicep.md
+
+ Title: Tag resources, resource groups, and subscriptions with Bicep
+description: Shows how to use Bicep to apply tags to Azure resources.
+ Last updated : 04/19/2023++
+# Apply tags with Bicep
+
+This article describes how to use Bicep to tag resources, resource groups, and subscriptions during deployment. For tag recommendations and limitations, see [Use tags to organize your Azure resources and management hierarchy](tag-resources.md).
+
+> [!NOTE]
+> The tags you apply through a Bicep file overwrite any existing tags.
+
+## Apply values
+
+The following example deploys a storage account with three tags. Two of the tags (`Dept` and `Environment`) are set to literal values. One tag (`LastDeployed`) is set to a parameter that defaults to the current date.
+
+```Bicep
+param location string = resourceGroup().location
+param utcShort string = utcNow('d')
+
+resource stgAccount 'Microsoft.Storage/storageAccounts@2021-04-01' = {
+ name: 'storage${uniqueString(resourceGroup().id)}'
+ location: location
+ sku: {
+ name: 'Standard_LRS'
+ }
+ kind: 'Storage'
+ tags: {
+ Dept: 'Finance'
+ Environment: 'Production'
+ LastDeployed: utcShort
+ }
+}
+```
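
To try a file like this, you might deploy it to a resource group with a command such as the following (the resource group and file names are placeholders):

```azurecli
az deployment group create \
  --resource-group demoRG \
  --template-file tags-example.bicep
```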
+
+## Apply an object
+
+You can define an object parameter that stores several tags and apply that object to the tag element. This approach provides more flexibility than the previous example because the object can have different properties. Each property in the object becomes a separate tag for the resource. The following example has a parameter named `tagValues` that's applied to the tag element.
+
+```Bicep
+param location string = resourceGroup().location
+param tagValues object = {
+ Dept: 'Finance'
+ Environment: 'Production'
+}
+
+resource stgAccount 'Microsoft.Storage/storageAccounts@2021-04-01' = {
+ name: 'storage${uniqueString(resourceGroup().id)}'
+ location: location
+ sku: {
+ name: 'Standard_LRS'
+ }
+ kind: 'Storage'
+ tags: tagValues
+}
+```
+
+## Apply a JSON string
+
+To store many values in a single tag, apply a JSON string that represents the values. The entire JSON string is stored as one tag that can't exceed 256 characters. The following example has a single tag named `CostCenter` that contains several values from a JSON string:
+
+```Bicep
+param location string = resourceGroup().location
+
+resource stgAccount 'Microsoft.Storage/storageAccounts@2021-04-01' = {
+ name: 'storage${uniqueString(resourceGroup().id)}'
+ location: location
+ sku: {
+ name: 'Standard_LRS'
+ }
+ kind: 'Storage'
+ tags: {
+ CostCenter: '{"Dept":"Finance","Environment":"Production"}'
+ }
+}
+```
+
+## Apply tags from resource group
+
+To apply tags from a resource group to a resource, use the [resourceGroup()](../templates/template-functions-resource.md#resourcegroup) function. When you get the tag value, use the `tags[tag-name]` syntax instead of the `tags.tag-name` syntax, because some characters aren't parsed correctly in the dot notation.
+
+```Bicep
+param location string = resourceGroup().location
+
+resource stgAccount 'Microsoft.Storage/storageAccounts@2021-04-01' = {
+ name: 'storage${uniqueString(resourceGroup().id)}'
+ location: location
+ sku: {
+ name: 'Standard_LRS'
+ }
+ kind: 'Storage'
+ tags: {
+ Dept: resourceGroup().tags['Dept']
+ Environment: resourceGroup().tags['Environment']
+ }
+}
+```
+
+## Apply tags to resource groups or subscriptions
+
+You can add tags to a resource group or subscription by deploying the `Microsoft.Resources/tags` resource type. The tags are applied to the resource group or subscription you deploy to. Each time you deploy the template, you replace any previous tags.
+
+```Bicep
+param tagName string = 'TeamName'
+param tagValue string = 'AppTeam1'
+
+resource applyTags 'Microsoft.Resources/tags@2021-04-01' = {
+ name: 'default'
+ properties: {
+ tags: {
+ '${tagName}': tagValue
+ }
+ }
+}
+```
+
+The following Bicep adds the tags from an object to the subscription it's deployed to. For more information about subscription deployments, see [Create resource groups and resources at the subscription level](../bicep/deploy-to-subscription.md).
+
+```Bicep
+targetScope = 'subscription'
+
+param tagObject object = {
+ TeamName: 'AppTeam1'
+ Dept: 'Finance'
+ Environment: 'Production'
+}
+
+resource applyTags 'Microsoft.Resources/tags@2021-04-01' = {
+ name: 'default'
+ properties: {
+ tags: tagObject
+ }
+}
+```
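
A subscription-scoped file like this one is deployed with `az deployment sub create`, for example (the location and file name are placeholders):

```azurecli
az deployment sub create \
  --location eastus \
  --template-file subscription-tags.bicep
```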
+
+## Next steps
+
+* Not all resource types support tags. To determine if you can apply a tag to a resource type, see [Tag support for Azure resources](tag-support.md).
+* For recommendations on how to implement a tagging strategy, see [Resource naming and tagging decision guide](/azure/cloud-adoption-framework/decision-guides/resource-tagging/?toc=/azure/azure-resource-manager/management/toc.json).
+* For tag recommendations and limitations, see [Use tags to organize your Azure resources and management hierarchy](tag-resources.md).
azure-resource-manager Tag Resources Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/tag-resources-cli.md
+
+ Title: Tag resources, resource groups, and subscriptions with Azure CLI
+description: Shows how to use Azure CLI to apply tags to Azure resources.
+ Last updated : 04/19/2023++
+# Apply tags with Azure CLI
+
+This article describes how to use Azure CLI to tag resources, resource groups, and subscriptions. For tag recommendations and limitations, see [Use tags to organize your Azure resources and management hierarchy](tag-resources.md).
+
+## Apply tags
+
+Azure CLI offers two commands to apply tags: [az tag create](/cli/azure/tag#az-tag-create) and [az tag update](/cli/azure/tag#az-tag-update). You need to have the Azure CLI 2.10.0 version or later. You can check your version with `az version`. To update or install it, see [Install the Azure CLI](/cli/azure/install-azure-cli).
+
+The `az tag create` command replaces all tags on the resource, resource group, or subscription. When you call the command, pass the resource ID of the entity you want to tag.
+
+The following example applies a set of tags to a storage account:
+
+```azurecli-interactive
+resource=$(az resource show -g demoGroup -n demostorage --resource-type Microsoft.Storage/storageAccounts --query "id" --output tsv)
+az tag create --resource-id $resource --tags Dept=Finance Status=Normal
+```
+
+When the command completes, notice that the resource has two tags.
+
+```output
+"properties": {
+ "tags": {
+ "Dept": "Finance",
+ "Status": "Normal"
+ }
+},
+```
+
+If you run the command again, but this time with different tags, notice that the earlier tags disappear.
+
+```azurecli-interactive
+az tag create --resource-id $resource --tags Team=Compliance Environment=Production
+```
+
+```output
+"properties": {
+ "tags": {
+ "Environment": "Production",
+ "Team": "Compliance"
+ }
+},
+```
+
+To add tags to a resource that already has tags, use `az tag update`. Set the `--operation` parameter to `Merge`.
+
+```azurecli-interactive
+az tag update --resource-id $resource --operation Merge --tags Dept=Finance Status=Normal
+```
+
+Notice that the existing tags grow with the addition of the two new tags.
+
+```output
+"properties": {
+ "tags": {
+ "Dept": "Finance",
+ "Environment": "Production",
+ "Status": "Normal",
+ "Team": "Compliance"
+ }
+},
+```
+
+Each tag name can have only one value. If you provide a new value for a tag, the new value replaces the old one, even if you use the merge operation. The following example changes the `Status` tag from _Normal_ to _Green_.
+
+```azurecli-interactive
+az tag update --resource-id $resource --operation Merge --tags Status=Green
+```
+
+```output
+"properties": {
+ "tags": {
+ "Dept": "Finance",
+ "Environment": "Production",
+ "Status": "Green",
+ "Team": "Compliance"
+ }
+},
+```
+
+When you set the `--operation` parameter to `Replace`, the new set of tags replaces the existing tags.
+
+```azurecli-interactive
+az tag update --resource-id $resource --operation Replace --tags Project=ECommerce CostCenter=00123 Team=Web
+```
+
+Only the new tags remain on the resource.
+
+```output
+"properties": {
+ "tags": {
+ "CostCenter": "00123",
+ "Project": "ECommerce",
+ "Team": "Web"
+ }
+},
+```
+
+The same commands also work with resource groups or subscriptions. Pass in the identifier of the resource group or subscription you want to tag.
+
+To add a new set of tags to a resource group, use:
+
+```azurecli-interactive
+group=$(az group show -n demoGroup --query id --output tsv)
+az tag create --resource-id $group --tags Dept=Finance Status=Normal
+```
+
+To update the tags for a resource group, use:
+
+```azurecli-interactive
+az tag update --resource-id $group --operation Merge --tags CostCenter=00123 Environment=Production
+```
+
+To add a new set of tags to a subscription, use:
+
+```azurecli-interactive
+sub=$(az account show --subscription "Demo Subscription" --query id --output tsv)
+az tag create --resource-id /subscriptions/$sub --tags CostCenter=00123 Environment=Dev
+```
+
+To update the tags for a subscription, use:
+
+```azurecli-interactive
+az tag update --resource-id /subscriptions/$sub --operation Merge --tags Team="Web Apps"
+```
+
+## List tags
+
+To get the tags for a resource, resource group, or subscription, use the [az tag list](/cli/azure/tag#az-tag-list) command and pass the resource ID of the entity.
+
+To see the tags for a resource, use:
+
+```azurecli-interactive
+resource=$(az resource show -g demoGroup -n demostorage --resource-type Microsoft.Storage/storageAccounts --query "id" --output tsv)
+az tag list --resource-id $resource
+```
+
+To see the tags for a resource group, use:
+
+```azurecli-interactive
+group=$(az group show -n demoGroup --query id --output tsv)
+az tag list --resource-id $group
+```
+
+To see the tags for a subscription, use:
+
+```azurecli-interactive
+sub=$(az account show --subscription "Demo Subscription" --query id --output tsv)
+az tag list --resource-id /subscriptions/$sub
+```
+
+## List by tag
+
+To get resources that have a specific tag name and value, use:
+
+```azurecli-interactive
+az resource list --tag CostCenter=00123 --query [].name
+```
+
+To get resources that have a specific tag name with any tag value, use:
+
+```azurecli-interactive
+az resource list --tag Team --query [].name
+```
+
+To get resource groups that have a specific tag name and value, use:
+
+```azurecli-interactive
+az group list --tag Dept=Finance
+```
+
+## Remove tags
+
+To remove specific tags, use `az tag update` and set `--operation` to `Delete`. Pass the resource ID of the entity and the tags you want to delete.
+
+```azurecli-interactive
+az tag update --resource-id $resource --operation Delete --tags Project=ECommerce Team=Web
+```
+
+You've removed the specified tags.
+
+```output
+"properties": {
+ "tags": {
+ "CostCenter": "00123"
+ }
+},
+```
+
+To remove all tags, use the [az tag delete](/cli/azure/tag#az-tag-delete) command.
+
+```azurecli-interactive
+az tag delete --resource-id $resource
+```
+
+## Handling spaces
+
+If your tag names or values include spaces, enclose them in quotation marks.
+
+```azurecli-interactive
+az tag update --resource-id $group --operation Merge --tags "Cost Center"=Finance-1222 Location="West US"
+```
+
+## Next steps
+
+* Not all resource types support tags. To determine if you can apply a tag to a resource type, see [Tag support for Azure resources](tag-support.md).
+* For recommendations on how to implement a tagging strategy, see [Resource naming and tagging decision guide](/azure/cloud-adoption-framework/decision-guides/resource-tagging/?toc=/azure/azure-resource-manager/management/toc.json).
+* For tag recommendations and limitations, see [Use tags to organize your Azure resources and management hierarchy](tag-resources.md).
azure-resource-manager Tag Resources Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/tag-resources-portal.md
+
+ Title: Tag resources, resource groups, and subscriptions with Azure portal
+description: Shows how to use Azure portal to apply tags to Azure resources.
+ Last updated : 04/19/2023++
+# Apply tags with Azure portal
+
+This article describes how to use the Azure portal to tag resources. For tag recommendations and limitations, see [Use tags to organize your Azure resources and management hierarchy](tag-resources.md).
+
+## Add tags
+
+If a user doesn't have the required access for adding tags, you can assign the **Tag Contributor** role to the user. For more information, see [Tutorial: Grant a user access to Azure resources using RBAC and the Azure portal](../../role-based-access-control/quickstart-assign-role-user-portal.md).
+
+1. To view the tags for a resource or a resource group, look for existing tags in the overview. If you have not previously applied tags, the list is empty.
+
+ ![View tags for resource or resource group](./media/tag-resources-portal/view-tags.png)
+
+1. To add a tag, select **Click here to add tags**.
+
+1. Provide a name and value.
+
+ ![Add tag](./media/tag-resources-portal/add-tag.png)
+
+1. Continue adding tags as needed. When done, select **Save**.
+
+ ![Save tags](./media/tag-resources-portal/save-tags.png)
+
+1. The tags are now displayed in the overview.
+
+ ![Show tags](./media/tag-resources-portal/view-new-tags.png)
+
+## Edit tags
+
+1. To add or delete a tag, select **change**.
+
+1. To delete a tag, select the trash icon. Then, select **Save**.
+
+ ![Delete tag](./media/tag-resources-portal/delete-tag.png)
+
+## Add tags to multiple resources
+
+To bulk assign tags to multiple resources:
+
+1. From any list of resources, select the checkbox for the resources you want to assign the tag. Then, select **Assign tags**.
+
+ ![Select multiple resources](./media/tag-resources-portal/select-multiple-resources.png)
+
+1. Add names and values. When done, select **Save**.
+
+ ![Select assign](./media/tag-resources-portal/select-assign.png)
+
+## View resources by tag
+
+To view all resources with a tag:
+
+1. On the Azure portal menu, search for **tags**. Select it from the available options.
+
+ ![Find by tag](./media/tag-resources-portal/find-tags-general.png)
+
+1. Select the tag for viewing resources.
+
+ ![Select tag](./media/tag-resources-portal/select-tag.png)
+
+1. All resources with that tag are displayed.
+
+ ![View resources by tag](./media/tag-resources-portal/view-resources-by-tag.png)
+
+## Next steps
+
+* Not all resource types support tags. To determine if you can apply a tag to a resource type, see [Tag support for Azure resources](tag-support.md).
+* For recommendations on how to implement a tagging strategy, see [Resource naming and tagging decision guide](/azure/cloud-adoption-framework/decision-guides/resource-tagging/?toc=/azure/azure-resource-manager/management/toc.json).
+* For tag recommendations and limitations, see [Use tags to organize your Azure resources and management hierarchy](tag-resources.md).
azure-resource-manager Tag Resources Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/tag-resources-powershell.md
+
+ Title: Tag resources, resource groups, and subscriptions with Azure PowerShell
+description: Shows how to use Azure PowerShell to apply tags to Azure resources.
+ Last updated : 04/19/2023++
+# Apply tags with Azure PowerShell
+
+This article describes how to use Azure PowerShell to tag resources, resource groups, and subscriptions. For tag recommendations and limitations, see [Use tags to organize your Azure resources and management hierarchy](tag-resources.md).
+
+## Apply tags
+
+Azure PowerShell offers two commands to apply tags: [New-AzTag](/powershell/module/az.resources/new-aztag) and [Update-AzTag](/powershell/module/az.resources/update-aztag). You need to have the `Az.Resources` module 1.12.0 version or later. You can check your version with `Get-InstalledModule -Name Az.Resources`. You can install that module or [install Azure PowerShell](/powershell/azure/install-az-ps) version 3.6.1 or later.
+
+The `New-AzTag` command replaces all tags on the resource, resource group, or subscription. When you call the command, pass the resource ID of the entity you want to tag.
+
+The following example applies a set of tags to a storage account:
+
+```azurepowershell-interactive
+$tags = @{"Dept"="Finance"; "Status"="Normal"}
+$resource = Get-AzResource -Name demostorage -ResourceGroup demoGroup
+New-AzTag -ResourceId $resource.id -Tag $tags
+```
+
+When the command completes, notice that the resource has two tags.
+
+```output
+Properties :
+ Name Value
+ ====== =======
+ Dept Finance
+ Status Normal
+```
+
+If you run the command again, but this time with different tags, notice that the earlier tags disappear.
+
+```azurepowershell-interactive
+$tags = @{"Team"="Compliance"; "Environment"="Production"}
+$resource = Get-AzResource -Name demostorage -ResourceGroup demoGroup
+New-AzTag -ResourceId $resource.id -Tag $tags
+```
+
+```output
+Properties :
+ Name Value
+ =========== ==========
+ Environment Production
+ Team Compliance
+```
+
+To add tags to a resource that already has tags, use `Update-AzTag`. Set the `-Operation` parameter to `Merge`.
+
+```azurepowershell-interactive
+$tags = @{"Dept"="Finance"; "Status"="Normal"}
+$resource = Get-AzResource -Name demostorage -ResourceGroup demoGroup
+Update-AzTag -ResourceId $resource.id -Tag $tags -Operation Merge
+```
+
+Notice that the existing tags grow with the addition of the two new tags.
+
+```output
+Properties :
+ Name Value
+ =========== ==========
+ Status Normal
+ Dept Finance
+ Team Compliance
+ Environment Production
+```
+
+Each tag name can have only one value. If you provide a new value for a tag, it replaces the old value even if you use the merge operation. The following example changes the `Status` tag from _Normal_ to _Green_.
+
+```azurepowershell-interactive
+$tags = @{"Status"="Green"}
+$resource = Get-AzResource -Name demostorage -ResourceGroup demoGroup
+Update-AzTag -ResourceId $resource.id -Tag $tags -Operation Merge
+```
+
+```output
+Properties :
+ Name Value
+ =========== ==========
+ Status Green
+ Dept Finance
+ Team Compliance
+ Environment Production
+```
+
+When you set the `-Operation` parameter to `Replace`, the new set of tags replaces the existing tags.
+
+```azurepowershell-interactive
+$tags = @{"Project"="ECommerce"; "CostCenter"="00123"; "Team"="Web"}
+$resource = Get-AzResource -Name demostorage -ResourceGroup demoGroup
+Update-AzTag -ResourceId $resource.id -Tag $tags -Operation Replace
+```
+
+Only the new tags remain on the resource.
+
+```output
+Properties :
+ Name Value
+ ========== =========
+ CostCenter 00123
+ Team Web
+ Project ECommerce
+```
+
+The same commands also work with resource groups or subscriptions. Pass in the identifier of the resource group or subscription you want to tag.
+
+To add a new set of tags to a resource group, use:
+
+```azurepowershell-interactive
+$tags = @{"Dept"="Finance"; "Status"="Normal"}
+$resourceGroup = Get-AzResourceGroup -Name demoGroup
+New-AzTag -ResourceId $resourceGroup.ResourceId -tag $tags
+```
+
+To update the tags for a resource group, use:
+
+```azurepowershell-interactive
+$tags = @{"CostCenter"="00123"; "Environment"="Production"}
+$resourceGroup = Get-AzResourceGroup -Name demoGroup
+Update-AzTag -ResourceId $resourceGroup.ResourceId -Tag $tags -Operation Merge
+```
+
+To add a new set of tags to a subscription, use:
+
+```azurepowershell-interactive
+$tags = @{"CostCenter"="00123"; "Environment"="Dev"}
+$subscription = (Get-AzSubscription -SubscriptionName "Example Subscription").Id
+New-AzTag -ResourceId "/subscriptions/$subscription" -Tag $tags
+```
+
+To update the tags for a subscription, use:
+
+```azurepowershell-interactive
+$tags = @{"Team"="Web Apps"}
+$subscription = (Get-AzSubscription -SubscriptionName "Example Subscription").Id
+Update-AzTag -ResourceId "/subscriptions/$subscription" -Tag $tags -Operation Merge
+```
+
+You may have more than one resource with the same name in a resource group. In that case, you can tag each of those resources with the following commands:
+
+```azurepowershell-interactive
+$resource = Get-AzResource -ResourceName sqlDatabase1 -ResourceGroupName examplegroup
+$resource | ForEach-Object { Update-AzTag -Tag @{ "Dept"="IT"; "Environment"="Test" } -ResourceId $_.ResourceId -Operation Merge }
+```
+
+## List tags
+
+To get the tags for a resource, resource group, or subscription, use the [Get-AzTag](/powershell/module/az.resources/get-aztag) command and pass the resource ID of the entity.
+
+To see the tags for a resource, use:
+
+```azurepowershell-interactive
+$resource = Get-AzResource -Name demostorage -ResourceGroup demoGroup
+Get-AzTag -ResourceId $resource.id
+```
+
+To see the tags for a resource group, use:
+
+```azurepowershell-interactive
+$resourceGroup = Get-AzResourceGroup -Name demoGroup
+Get-AzTag -ResourceId $resourceGroup.ResourceId
+```
+
+To see the tags for a subscription, use:
+
+```azurepowershell-interactive
+$subscription = (Get-AzSubscription -SubscriptionName "Example Subscription").Id
+Get-AzTag -ResourceId "/subscriptions/$subscription"
+```
+
+## List by tag
+
+To get resources that have a specific tag name and value, use:
+
+```azurepowershell-interactive
+(Get-AzResource -Tag @{ "CostCenter"="00123"}).Name
+```
+
+To get resources that have a specific tag name with any tag value, use:
+
+```azurepowershell-interactive
+(Get-AzResource -TagName "Dept").Name
+```
+
+To get resource groups that have a specific tag name and value, use:
+
+```azurepowershell-interactive
+(Get-AzResourceGroup -Tag @{ "CostCenter"="00123" }).ResourceGroupName
+```
+
+## Remove tags
+
+To remove specific tags, use `Update-AzTag` and set `-Operation` to `Delete`. Pass the resource ID of the entity and the tags you want to delete.
+
+```azurepowershell-interactive
+$removeTags = @{"Project"="ECommerce"; "Team"="Web"}
+$resource = Get-AzResource -Name demostorage -ResourceGroup demoGroup
+Update-AzTag -ResourceId $resource.id -Tag $removeTags -Operation Delete
+```
+
+The specified tags are removed.
+
+```output
+Properties :
+ Name Value
+ ========== =====
+ CostCenter 00123
+```
+
+To remove all tags, use the [Remove-AzTag](/powershell/module/az.resources/remove-aztag) command.
+
+```azurepowershell-interactive
+$subscription = (Get-AzSubscription -SubscriptionName "Example Subscription").Id
+Remove-AzTag -ResourceId "/subscriptions/$subscription"
+```
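+
+`Remove-AzTag` also works at the resource scope. For example, to clear every tag from a single resource (a sketch that reuses the storage account from the earlier examples):
+
+```azurepowershell-interactive
+$resource = Get-AzResource -Name demostorage -ResourceGroup demoGroup
+Remove-AzTag -ResourceId $resource.id
+```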
+
+## Next steps
+
+* Not all resource types support tags. To determine if you can apply a tag to a resource type, see [Tag support for Azure resources](tag-support.md).
+* For recommendations on how to implement a tagging strategy, see [Resource naming and tagging decision guide](/azure/cloud-adoption-framework/decision-guides/resource-tagging/?toc=/azure/azure-resource-manager/management/toc.json).
+* For tag recommendations and limitations, see [Use tags to organize your Azure resources and management hierarchy](tag-resources.md).
azure-resource-manager Tag Resources Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/tag-resources-python.md
+
+ Title: Tag resources, resource groups, and subscriptions with Python
+description: Shows how to use Python to apply tags to Azure resources.
+ Last updated : 04/19/2023+++
+# Apply tags with Python
+
+This article describes how to use Python to tag resources, resource groups, and subscriptions. For tag recommendations and limitations, see [Use tags to organize your Azure resources and management hierarchy](tag-resources.md).
++
+## Prerequisites
+
+* Python 3.7 or later installed. To install the latest, see [Python.org](https://www.python.org/downloads/)
+
+* The following Azure library packages for Python installed in your virtual environment. To install any of the packages, use `pip install {package-name}`
+ * azure-identity
+ * azure-mgmt-resource
+
+ If you have older versions of these packages already installed in your virtual environment, you may need to update them with `pip install --upgrade {package-name}`
+
+* The examples in this article use CLI-based authentication (`AzureCliCredential`). Depending on your environment, you may need to run `az login` first to authenticate.
+
+* An environment variable with your Azure subscription ID. To get your Azure subscription ID, use:
+
+ ```azurecli-interactive
+ az account show --name 'your subscription name' --query id -o tsv
+ ```
+
+ To set the value, use the option for your environment.
+
+ #### [Windows](#tab/windows)
+
+ ```console
+ setx AZURE_SUBSCRIPTION_ID your-subscription-id
+ ```
+
+ > [!NOTE]
+ > If you only need to access the environment variable in the current running console, you can set the environment variable with `set` instead of `setx`.
+
+ After you add the environment variables, you may need to restart any running programs that will need to read the environment variable, including the console window. For example, if you're using Visual Studio as your editor, restart Visual Studio before running the example.
+
+ #### [Linux](#tab/linux)
+
+ ```bash
+ export AZURE_SUBSCRIPTION_ID=your-subscription-id
+ ```
+
+ After you add the environment variables, run `source ~/.bashrc` from your console window to make the changes effective.
+
+ #### [macOS](#tab/macos)
+
+ ##### Bash
+
+ Edit your .bash_profile, and add the environment variables:
+
+ ```bash
+ export AZURE_SUBSCRIPTION_ID=your-subscription-id
+ ```
+
+ After you add the environment variables, run `source ~/.bash_profile` from your console window to make the changes effective.
+
+## Apply tags
+
+The Azure libraries for Python offer the [ResourceManagementClient.tags.begin_create_or_update_at_scope](/python/api/azure-mgmt-resource/azure.mgmt.resource.resources.v2022_09_01.operations.tagsoperations#azure-mgmt-resource-resources-v2022-09-01-operations-tagsoperations-begin-create-or-update-at-scope) method to apply tags. It replaces all tags on the resource, resource group, or subscription. When you call the method, pass the resource ID of the entity you want to tag.
+
+The following example applies a set of tags to a storage account:
+
+```python
+import os
+from azure.identity import AzureCliCredential
+from azure.mgmt.resource import ResourceManagementClient
+from azure.mgmt.resource.resources.models import TagsResource
+
+credential = AzureCliCredential()
+subscription_id = os.environ["AZURE_SUBSCRIPTION_ID"]
+
+resource_client = ResourceManagementClient(credential, subscription_id)
+
+resource_group_name = "demoGroup"
+storage_account_name = "demostore"
+
+tags = {
+ "Dept": "Finance",
+ "Status": "Normal"
+}
+
+tag_resource = TagsResource(
+ properties={'tags': tags}
+)
+
+resource = resource_client.resources.get_by_id(
+ f"/subscriptions/{subscription_id}/resourceGroups/{resource_group_name}/providers/Microsoft.Storage/storageAccounts/{storage_account_name}",
+ "2022-09-01"
+)
+
+resource_client.tags.begin_create_or_update_at_scope(resource.id, tag_resource)
+
+print(f"Tags {tag_resource.properties.tags} were added to resource with ID: {resource.id}")
+```
+
+If you run the script again, but this time with the following tags, notice that the earlier tags disappear.
+
+```python
+tags = {
+ "Team": "Compliance",
+ "Environment": "Production"
+}
+```
+
+To add tags to a resource that already has tags, use [ResourceManagementClient.tags.begin_update_at_scope](/python/api/azure-mgmt-resource/azure.mgmt.resource.resources.v2022_09_01.operations.tagsoperations#azure-mgmt-resource-resources-v2022-09-01-operations-tagsoperations-begin-update-at-scope). On the [TagsPatchResource](/python/api/azure-mgmt-resource/azure.mgmt.resource.resources.v2022_09_01.models.tagspatchresource) object, set the `operation` parameter to `Merge`.
+
+```python
+import os
+from azure.identity import AzureCliCredential
+from azure.mgmt.resource import ResourceManagementClient
+from azure.mgmt.resource.resources.models import TagsPatchResource
+
+credential = AzureCliCredential()
+subscription_id = os.environ["AZURE_SUBSCRIPTION_ID"]
+
+resource_client = ResourceManagementClient(credential, subscription_id)
+
+resource_group_name = "demoGroup"
+storage_account_name = "demostore"
+
+tags = {
+ "Dept": "Finance",
+ "Status": "Normal"
+}
+
+tag_patch_resource = TagsPatchResource(
+ operation="Merge",
+ properties={'tags': tags}
+)
+
+resource = resource_client.resources.get_by_id(
+ f"/subscriptions/{subscription_id}/resourceGroups/{resource_group_name}/providers/Microsoft.Storage/storageAccounts/{storage_account_name}",
+ "2022-09-01"
+)
+
+resource_client.tags.begin_update_at_scope(resource.id, tag_patch_resource)
+
+print(f"Tags {tag_patch_resource.properties.tags} were added to existing tags on resource with ID: {resource.id}")
+```
+
+Notice that the existing tags grow with the addition of the two new tags.
+
+Each tag name can have only one value. If you provide a new value for a tag, it replaces the old value even if you use the merge operation. The following example changes the `Status` tag from _Normal_ to _Green_.
+
+```python
+import os
+from azure.identity import AzureCliCredential
+from azure.mgmt.resource import ResourceManagementClient
+from azure.mgmt.resource.resources.models import TagsPatchResource
+
+credential = AzureCliCredential()
+subscription_id = os.environ["AZURE_SUBSCRIPTION_ID"]
+
+resource_client = ResourceManagementClient(credential, subscription_id)
+
+resource_group_name = "demoGroup"
+storage_account_name = "demostore"
+
+tags = {
+ "Status": "Green"
+}
+
+tag_patch_resource = TagsPatchResource(
+ operation="Merge",
+ properties={'tags': tags}
+)
+
+resource = resource_client.resources.get_by_id(
+ f"/subscriptions/{subscription_id}/resourceGroups/{resource_group_name}/providers/Microsoft.Storage/storageAccounts/{storage_account_name}",
+ "2022-09-01"
+)
+
+resource_client.tags.begin_update_at_scope(resource.id, tag_patch_resource)
+
+print(f"Tags {tag_patch_resource.properties.tags} were added to existing tags on resource with ID: {resource.id}")
+```
+
+When you set the `operation` parameter to `Replace`, the new set of tags replaces the existing tags.
+
+```python
+import os
+from azure.identity import AzureCliCredential
+from azure.mgmt.resource import ResourceManagementClient
+from azure.mgmt.resource.resources.models import TagsPatchResource
+
+credential = AzureCliCredential()
+subscription_id = os.environ["AZURE_SUBSCRIPTION_ID"]
+
+resource_client = ResourceManagementClient(credential, subscription_id)
+
+resource_group_name = "demoGroup"
+storage_account_name = "demostore"
+
+tags = {
+ "Project": "ECommerce",
+ "CostCenter": "00123",
+ "Team": "Web"
+}
+
+tag_patch_resource = TagsPatchResource(
+ operation="Replace",
+ properties={'tags': tags}
+)
+
+resource = resource_client.resources.get_by_id(
+ f"/subscriptions/{subscription_id}/resourceGroups/{resource_group_name}/providers/Microsoft.Storage/storageAccounts/{storage_account_name}",
+ "2022-09-01"
+)
+
+resource_client.tags.begin_update_at_scope(resource.id, tag_patch_resource)
+
+print(f"Tags {tag_patch_resource.properties.tags} replaced tags on resource with ID: {resource.id}")
+```
+
+Only the new tags remain on the resource.
+
+The same methods also work with resource groups or subscriptions. Pass in the identifier of the resource group or subscription you want to tag. To add a new set of tags to a resource group, use:
+
+```python
+import os
+from azure.identity import AzureCliCredential
+from azure.mgmt.resource import ResourceManagementClient
+from azure.mgmt.resource.resources.models import TagsResource
+
+credential = AzureCliCredential()
+subscription_id = os.environ["AZURE_SUBSCRIPTION_ID"]
+
+resource_client = ResourceManagementClient(credential, subscription_id)
+
+resource_group_name = "demoGroup"
+
+tags = {
+ "Dept": "Finance",
+ "Status": "Normal"
+}
+
+tag_resource = TagsResource(
+ properties={'tags': tags}
+)
+
+resource_group = resource_client.resource_groups.get(resource_group_name)
+
+resource_client.tags.begin_create_or_update_at_scope(resource_group.id, tag_resource)
+
+print(f"Tags {tag_resource.properties.tags} were added to resource group: {resource_group.id}")
+```
+
+To update the tags for a resource group, use:
+
+```python
+import os
+from azure.identity import AzureCliCredential
+from azure.mgmt.resource import ResourceManagementClient
+from azure.mgmt.resource.resources.models import TagsPatchResource
+
+credential = AzureCliCredential()
+subscription_id = os.environ["AZURE_SUBSCRIPTION_ID"]
+
+resource_client = ResourceManagementClient(credential, subscription_id)
+
+resource_group_name = "demoGroup"
+
+tags = {
+ "CostCenter": "00123",
+ "Environment": "Production"
+}
+
+tag_patch_resource = TagsPatchResource(
+ operation="Merge",
+ properties={'tags': tags}
+)
+
+resource_group = resource_client.resource_groups.get(resource_group_name)
+
+resource_client.tags.begin_update_at_scope(resource_group.id, tag_patch_resource)
+
+print(f"Tags {tag_patch_resource.properties.tags} were added to existing tags on resource group: {resource_group.id}")
+```
+
+To update the tags for a subscription, use:
+
+```python
+import os
+from azure.identity import AzureCliCredential
+from azure.mgmt.resource import ResourceManagementClient
+from azure.mgmt.resource.resources.models import TagsPatchResource
+
+credential = AzureCliCredential()
+subscription_id = os.environ["AZURE_SUBSCRIPTION_ID"]
+
+resource_client = ResourceManagementClient(credential, subscription_id)
+
+tags = {
+ "Team": "Web Apps"
+}
+
+tag_patch_resource = TagsPatchResource(
+ operation="Merge",
+ properties={'tags': tags}
+)
+
+resource_client.tags.begin_update_at_scope(f"/subscriptions/{subscription_id}", tag_patch_resource)
+
+print(f"Tags {tag_patch_resource.properties.tags} were added to subscription: {subscription_id}")
+
+```
+
+You may have more than one resource with the same name in a resource group. In that case, you can tag each of those resources with the following code:
+
+```python
+import os
+from azure.identity import AzureCliCredential
+from azure.mgmt.resource import ResourceManagementClient
+from azure.mgmt.resource.resources.models import TagsPatchResource
+
+credential = AzureCliCredential()
+subscription_id = os.environ["AZURE_SUBSCRIPTION_ID"]
+
+resource_client = ResourceManagementClient(credential, subscription_id)
+
+resource_group_name = "demoGroup"
+
+tags = {
+ "Dept": "IT",
+ "Environment": "Test"
+}
+
+tag_patch_resource = TagsPatchResource(
+ operation="Merge",
+ properties={'tags': tags}
+)
+
+resources = resource_client.resources.list_by_resource_group(resource_group_name, filter="name eq 'sqlDatabase1'")
+
+for resource in resources:
+ resource_client.tags.begin_update_at_scope(resource.id, tag_patch_resource)
+ print(f"Tags {tag_patch_resource.properties.tags} were added to resource: {resource.id}")
+```
+
+## List tags
+
+To get the tags for a resource, resource group, or subscription, use the [ResourceManagementClient.tags.get_at_scope](/python/api/azure-mgmt-resource/azure.mgmt.resource.resources.v2022_09_01.operations.tagsoperations#azure-mgmt-resource-resources-v2022-09-01-operations-tagsoperations-get-at-scope) method and pass the resource ID of the entity.
+
+To see the tags for a resource, use:
+
+```python
+import os
+from azure.identity import AzureCliCredential
+from azure.mgmt.resource import ResourceManagementClient
+
+credential = AzureCliCredential()
+subscription_id = os.environ["AZURE_SUBSCRIPTION_ID"]
+
+resource_group_name = "demoGroup"
+storage_account_name = "demostorage"
+
+resource_client = ResourceManagementClient(credential, subscription_id)
+
+resource = resource_client.resources.get_by_id(
+ f"/subscriptions/{subscription_id}/resourceGroups/{resource_group_name}/providers/Microsoft.Storage/storageAccounts/{storage_account_name}",
+ "2022-09-01"
+)
+
+resource_tags = resource_client.tags.get_at_scope(resource.id)
+print(resource_tags.properties.tags)
+```
+
+To see the tags for a resource group, use:
+
+```python
+import os
+from azure.identity import AzureCliCredential
+from azure.mgmt.resource import ResourceManagementClient
+
+credential = AzureCliCredential()
+subscription_id = os.environ["AZURE_SUBSCRIPTION_ID"]
+
+resource_client = ResourceManagementClient(credential, subscription_id)
+
+resource_group = resource_client.resource_groups.get("demoGroup")
+
+resource_group_tags = resource_client.tags.get_at_scope(resource_group.id)
+print(resource_group_tags.properties.tags)
+```
+
+To see the tags for a subscription, use:
+
+```python
+import os
+from azure.identity import AzureCliCredential
+from azure.mgmt.resource import ResourceManagementClient
+
+credential = AzureCliCredential()
+subscription_id = os.environ["AZURE_SUBSCRIPTION_ID"]
+
+resource_client = ResourceManagementClient(credential, subscription_id)
+
+subscription_tags = resource_client.tags.get_at_scope(f"/subscriptions/{subscription_id}")
+print(subscription_tags.properties.tags)
+```
+
+## List by tag
+
+To get resources that have a specific tag name and value, use:
+
+```python
+import os
+from azure.identity import AzureCliCredential
+from azure.mgmt.resource import ResourceManagementClient
+
+credential = AzureCliCredential()
+subscription_id = os.environ["AZURE_SUBSCRIPTION_ID"]
+
+resource_client = ResourceManagementClient(credential, subscription_id)
+
+resources = resource_client.resources.list(filter="tagName eq 'CostCenter' and tagValue eq '00123'")
+
+for resource in resources:
+ print(resource.name)
+```
+
+To get resources that have a specific tag name with any tag value, use:
+
+```python
+import os
+from azure.identity import AzureCliCredential
+from azure.mgmt.resource import ResourceManagementClient
+
+credential = AzureCliCredential()
+subscription_id = os.environ["AZURE_SUBSCRIPTION_ID"]
+
+resource_client = ResourceManagementClient(credential, subscription_id)
+
+resources = resource_client.resources.list(filter="tagName eq 'Dept'")
+
+for resource in resources:
+ print(resource.name)
+```
+
+To get resource groups that have a specific tag name and value, use:
+
+```python
+import os
+from azure.identity import AzureCliCredential
+from azure.mgmt.resource import ResourceManagementClient
+
+credential = AzureCliCredential()
+subscription_id = os.environ["AZURE_SUBSCRIPTION_ID"]
+
+resource_client = ResourceManagementClient(credential, subscription_id)
+
+resource_groups = resource_client.resource_groups.list(filter="tagName eq 'CostCenter' and tagValue eq '00123'")
+
+for resource_group in resource_groups:
+ print(resource_group.name)
+```
+
+## Remove tags
+
+To remove specific tags, set `operation` to `Delete`. Pass the resource ID of the entity and the tags you want to delete.
+
+```python
+import os
+from azure.identity import AzureCliCredential
+from azure.mgmt.resource import ResourceManagementClient
+from azure.mgmt.resource.resources.models import TagsPatchResource
+
+credential = AzureCliCredential()
+subscription_id = os.environ["AZURE_SUBSCRIPTION_ID"]
+
+resource_client = ResourceManagementClient(credential, subscription_id)
+
+resource_group_name = "demoGroup"
+storage_account_name = "demostore"
+
+tags = {
+ "Dept": "IT",
+ "Environment": "Test"
+}
+
+tag_patch_resource = TagsPatchResource(
+ operation="Delete",
+ properties={'tags': tags}
+)
+
+resource = resource_client.resources.get_by_id(
+ f"/subscriptions/{subscription_id}/resourceGroups/{resource_group_name}/providers/Microsoft.Storage/storageAccounts/{storage_account_name}",
+ "2022-09-01"
+)
+
+resource_client.tags.begin_update_at_scope(resource.id, tag_patch_resource)
+
+print(f"Tags {tag_patch_resource.properties.tags} were removed from resource: {resource.id}")
+```
+
+The specified tags are removed.
+
+To remove all tags, use the [ResourceManagementClient.tags.begin_delete_at_scope](/python/api/azure-mgmt-resource/azure.mgmt.resource.resources.v2022_09_01.operations.tagsoperations#azure-mgmt-resource-resources-v2022-09-01-operations-tagsoperations-begin-delete-at-scope) method.
+
+```python
+import os
+from azure.identity import AzureCliCredential
+from azure.mgmt.resource import ResourceManagementClient
+
+credential = AzureCliCredential()
+subscription_id = os.environ["AZURE_SUBSCRIPTION_ID"]
+
+resource_client = ResourceManagementClient(credential, subscription_id)
+
+# Delete all tags at the subscription scope (the scope string is built from the subscription ID)
+resource_client.tags.begin_delete_at_scope(f"/subscriptions/{subscription_id}")
+```
+
+## Next steps
+
+* Not all resource types support tags. To determine if you can apply a tag to a resource type, see [Tag support for Azure resources](tag-support.md).
+* For recommendations on how to implement a tagging strategy, see [Resource naming and tagging decision guide](/azure/cloud-adoption-framework/decision-guides/resource-tagging/?toc=/azure/azure-resource-manager/management/toc.json).
+* For tag recommendations and limitations, see [Use tags to organize your Azure resources and management hierarchy](tag-resources.md).
azure-resource-manager Tag Resources Templates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/tag-resources-templates.md
+
+ Title: Tag resources, resource groups, and subscriptions with ARM templates
+description: Shows how to use ARM templates to apply tags to Azure resources.
+ Last updated : 04/19/2023++
+# Apply tags with ARM templates
+
+This article describes how to use Azure Resource Manager templates (ARM templates) to tag resources, resource groups, and subscriptions during deployment. For tag recommendations and limitations, see [Use tags to organize your Azure resources and management hierarchy](tag-resources.md).
+
+> [!NOTE]
+> The tags you apply through an ARM template or Bicep file overwrite any existing tags.
+
+## Apply values
+
+The following example deploys a storage account with three tags. Two of the tags (`Dept` and `Environment`) are set to literal values. One tag (`LastDeployed`) is set to a parameter that defaults to the current date.
+
+```json
+{
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
+ "contentVersion": "1.0.0.0",
+ "parameters": {
+ "utcShort": {
+ "type": "string",
+ "defaultValue": "[utcNow('d')]"
+ },
+ "location": {
+ "type": "string",
+ "defaultValue": "[resourceGroup().location]"
+ }
+ },
+ "resources": [
+ {
+ "type": "Microsoft.Storage/storageAccounts",
+ "apiVersion": "2021-04-01",
+ "name": "[concat('storage', uniqueString(resourceGroup().id))]",
+ "location": "[parameters('location')]",
+ "sku": {
+ "name": "Standard_LRS"
+ },
+ "kind": "Storage",
+ "tags": {
+ "Dept": "Finance",
+ "Environment": "Production",
+ "LastDeployed": "[parameters('utcShort')]"
+ },
+ "properties": {}
+ }
+ ]
+}
+```
+
+## Apply an object
+
+You can define an object parameter that stores several tags and apply that object to the tag element. This approach provides more flexibility than the previous example because the object can have different properties. Each property in the object becomes a separate tag for the resource. The following example has a parameter named `tagValues` that's applied to the tag element.
+
+```json
+{
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
+ "contentVersion": "1.0.0.0",
+ "parameters": {
+ "location": {
+ "type": "string",
+ "defaultValue": "[resourceGroup().location]"
+ },
+ "tagValues": {
+ "type": "object",
+ "defaultValue": {
+ "Dept": "Finance",
+ "Environment": "Production"
+ }
+ }
+ },
+ "resources": [
+ {
+ "type": "Microsoft.Storage/storageAccounts",
+ "apiVersion": "2021-04-01",
+ "name": "[concat('storage', uniqueString(resourceGroup().id))]",
+ "location": "[parameters('location')]",
+ "sku": {
+ "name": "Standard_LRS"
+ },
+ "kind": "Storage",
+ "tags": "[parameters('tagValues')]",
+ "properties": {}
+ }
+ ]
+}
+```
+
+## Apply a JSON string
+
+To store many values in a single tag, apply a JSON string that represents the values. The entire JSON string is stored as one tag that can't exceed 256 characters. The following example has a single tag named `CostCenter` that contains several values from a JSON string:
+
+```json
+{
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
+ "contentVersion": "1.0.0.0",
+ "parameters": {
+ "location": {
+ "type": "string",
+ "defaultValue": "[resourceGroup().location]"
+ }
+ },
+ "resources": [
+ {
+ "type": "Microsoft.Storage/storageAccounts",
+ "apiVersion": "2021-04-01",
+ "name": "[concat('storage', uniqueString(resourceGroup().id))]",
+ "location": "[parameters('location')]",
+ "sku": {
+ "name": "Standard_LRS"
+ },
+ "kind": "Storage",
+ "tags": {
+ "CostCenter": "{\"Dept\":\"Finance\",\"Environment\":\"Production\"}"
+ },
+ "properties": {}
+ }
+ ]
+}
+```
+
+## Apply tags from resource group
+
+To apply tags from a resource group to a resource, use the [resourceGroup()](../templates/template-functions-resource.md#resourcegroup) function. When you get the tag value, use the `tags[tag-name]` syntax instead of the `tags.tag-name` syntax, because some characters aren't parsed correctly in the dot notation.
+
+```json
+{
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
+ "contentVersion": "1.0.0.0",
+ "parameters": {
+ "location": {
+ "type": "string",
+ "defaultValue": "[resourceGroup().location]"
+ }
+ },
+ "resources": [
+ {
+ "type": "Microsoft.Storage/storageAccounts",
+ "apiVersion": "2021-04-01",
+ "name": "[concat('storage', uniqueString(resourceGroup().id))]",
+ "location": "[parameters('location')]",
+ "sku": {
+ "name": "Standard_LRS"
+ },
+ "kind": "Storage",
+ "tags": {
+ "Dept": "[resourceGroup().tags['Dept']]",
+ "Environment": "[resourceGroup().tags['Environment']]"
+ },
+ "properties": {}
+ }
+ ]
+}
+```
+
+## Apply tags to resource groups or subscriptions
+
+You can add tags to a resource group or subscription by deploying the `Microsoft.Resources/tags` resource type. The tags are applied to the target resource group or subscription for the deployment. Each time you deploy the template, you replace any previous tags.
+
+```json
+{
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
+ "contentVersion": "1.0.0.0",
+ "parameters": {
+ "tagName": {
+ "type": "string",
+ "defaultValue": "TeamName"
+ },
+ "tagValue": {
+ "type": "string",
+ "defaultValue": "AppTeam1"
+ }
+ },
+ "resources": [
+ {
+ "type": "Microsoft.Resources/tags",
+ "name": "default",
+ "apiVersion": "2021-04-01",
+ "properties": {
+ "tags": {
+ "[parameters('tagName')]": "[parameters('tagValue')]"
+ }
+ }
+ }
+ ]
+}
+```
+
+To apply the tags to a resource group, use either Azure PowerShell or Azure CLI. Deploy to the resource group that you want to tag.
+
+```azurepowershell-interactive
+New-AzResourceGroupDeployment -ResourceGroupName exampleGroup -TemplateFile https://raw.githubusercontent.com/Azure/azure-docs-json-samples/master/azure-resource-manager/tags.json
+```
+
+```azurecli-interactive
+az deployment group create --resource-group exampleGroup --template-uri https://raw.githubusercontent.com/Azure/azure-docs-json-samples/master/azure-resource-manager/tags.json
+```
+
+To apply the tags to a subscription, use either PowerShell or Azure CLI. Deploy to the subscription that you want to tag.
+
+```azurepowershell-interactive
+New-AzSubscriptionDeployment -name tagresourcegroup -Location westus2 -TemplateUri https://raw.githubusercontent.com/Azure/azure-docs-json-samples/master/azure-resource-manager/tags.json
+```
+
+```azurecli-interactive
+az deployment sub create --name tagresourcegroup --location westus2 --template-uri https://raw.githubusercontent.com/Azure/azure-docs-json-samples/master/azure-resource-manager/tags.json
+```
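+
+To confirm the result, one option is to list the tags at the subscription scope after the deployment finishes. This is a minimal sketch that assumes your Azure PowerShell context is set to the subscription you tagged:
+
+```azurepowershell-interactive
+# List the tags now applied at the subscription scope
+$subscriptionId = (Get-AzContext).Subscription.Id
+Get-AzTag -ResourceId "/subscriptions/$subscriptionId"
+```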
+
+For more information about subscription deployments, see [Create resource groups and resources at the subscription level](../templates/deploy-to-subscription.md).
+
+The following template adds the tags from an object to either a resource group or subscription.
+
+```json
+{
+ "$schema": "https://schema.management.azure.com/schemas/2018-05-01/subscriptionDeploymentTemplate.json#",
+ "contentVersion": "1.0.0.0",
+ "parameters": {
+ "tags": {
+ "type": "object",
+ "defaultValue": {
+ "TeamName": "AppTeam1",
+ "Dept": "Finance",
+ "Environment": "Production"
+ }
+ }
+ },
+ "resources": [
+ {
+ "type": "Microsoft.Resources/tags",
+ "apiVersion": "2021-04-01",
+ "name": "default",
+ "properties": {
+ "tags": "[parameters('tags')]"
+ }
+ }
+ ]
+}
+```
+
+## Next steps
+
+* Not all resource types support tags. To determine if you can apply a tag to a resource type, see [Tag support for Azure resources](tag-support.md).
+* For recommendations on how to implement a tagging strategy, see [Resource naming and tagging decision guide](/azure/cloud-adoption-framework/decision-guides/resource-tagging/?toc=/azure/azure-resource-manager/management/toc.json).
+* For tag recommendations and limitations, see [Use tags to organize your Azure resources and management hierarchy](tag-resources.md).
azure-resource-manager Tag Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/tag-resources.md
Title: Tag resources, resource groups, and subscriptions for logical organization
-description: Shows how to apply tags to organize Azure resources for billing and managing.
+description: Describes the conditions and limitations for using tags with Azure resources.
Previously updated : 05/25/2022- Last updated : 04/19/2023 # Use tags to organize your Azure resources and management hierarchy
-Tags are metadata elements that you apply to your Azure resources. They're key-value pairs that help you identify resources based on settings that are relevant to your organization. If you want to track the deployment environment for your resources, add a key named Environment. To identify the resources deployed to production, give them a value of Production. Fully formed, the key-value pair becomes, Environment = Production.
+Tags are metadata elements that you apply to your Azure resources. They're key-value pairs that help you identify resources based on settings that are relevant to your organization. If you want to track the deployment environment for your resources, add a key named `Environment`. To identify the resources deployed to production, give them a value of `Production`. The fully formed key-value pair is `Environment = Production`.
+
+This article describes the conditions and limitations for using tags. For steps on how to work with tags, see:
+
+* [Portal](tag-resources-portal.md)
+* [Azure CLI](tag-resources-cli.md)
+* [Azure PowerShell](tag-resources-powershell.md)
+* [Python](tag-resources-python.md)
+* [ARM templates](tag-resources-templates.md)
+* [Bicep](tag-resources-bicep.md)
+
+## Tag usage and recommendations
You can apply tags to your Azure resources, resource groups, and subscriptions.
There are two ways to get the required access to tag resources.
- You can have write access to the resource itself. The [Contributor](../../role-based-access-control/built-in-roles.md#contributor) role grants the required access to apply tags to any entity. To apply tags to only one resource type, use the contributor role for that resource. To apply tags to virtual machines, for example, use the [Virtual Machine Contributor](../../role-based-access-control/built-in-roles.md#virtual-machine-contributor).
-## PowerShell
-
-### Apply tags
-
-Azure PowerShell offers two commands to apply tags: [New-AzTag](/powershell/module/az.resources/new-aztag) and [Update-AzTag](/powershell/module/az.resources/update-aztag). You need to have the `Az.Resources` module 1.12.0 version or later. You can check your version with `Get-InstalledModule -Name Az.Resources`. You can install that module or [install Azure PowerShell](/powershell/azure/install-az-ps) version 3.6.1 or later.
-
-The `New-AzTag` replaces all tags on the resource, resource group, or subscription. When you call the command, pass the resource ID of the entity you want to tag.
-
-The following example applies a set of tags to a storage account:
-
-```azurepowershell-interactive
-$tags = @{"Dept"="Finance"; "Status"="Normal"}
-$resource = Get-AzResource -Name demoStorage -ResourceGroup demoGroup
-New-AzTag -ResourceId $resource.id -Tag $tags
-```
-
-When the command completes, notice that the resource has two tags.
-
-```output
-Properties :
- Name Value
- ====== =======
- Dept Finance
- Status Normal
-```
-
-If you run the command again, but this time with different tags, notice that the earlier tags disappear.
-
-```azurepowershell-interactive
-$tags = @{"Team"="Compliance"; "Environment"="Production"}
-New-AzTag -ResourceId $resource.id -Tag $tags
-```
-
-```output
-Properties :
- Name Value
- =========== ==========
- Environment Production
- Team Compliance
-```
-
-To add tags to a resource that already has tags, use `Update-AzTag`. Set the `-Operation` parameter to `Merge`.
-
-```azurepowershell-interactive
-$tags = @{"Dept"="Finance"; "Status"="Normal"}
-Update-AzTag -ResourceId $resource.id -Tag $tags -Operation Merge
-```
-
-Notice that the existing tags grow with the addition of the two new tags.
-
-```output
-Properties :
- Name Value
- =========== ==========
- Status Normal
- Dept Finance
- Team Compliance
- Environment Production
-```
-
-Each tag name can have only one value. If you provide a new value for a tag, it replaces the old value even if you use the merge operation. The following example changes the `Status` tag from _Normal_ to _Green_.
-
-```azurepowershell-interactive
-$tags = @{"Status"="Green"}
-Update-AzTag -ResourceId $resource.id -Tag $tags -Operation Merge
-```
-
-```output
-Properties :
- Name Value
- =========== ==========
- Status Green
- Dept Finance
- Team Compliance
- Environment Production
-```
-
-When you set the `-Operation` parameter to `Replace`, the new set of tags replaces the existing tags.
-
-```azurepowershell-interactive
-$tags = @{"Project"="ECommerce"; "CostCenter"="00123"; "Team"="Web"}
-Update-AzTag -ResourceId $resource.id -Tag $tags -Operation Replace
-```
-
-Only the new tags remain on the resource.
-
-```output
-Properties :
- Name Value
- ========== =========
- CostCenter 00123
- Team Web
- Project ECommerce
-```
-
-The same commands also work with resource groups or subscriptions. Pass them in the identifier of the resource group or subscription you want to tag.
-
-To add a new set of tags to a resource group, use:
-
-```azurepowershell-interactive
-$tags = @{"Dept"="Finance"; "Status"="Normal"}
-$resourceGroup = Get-AzResourceGroup -Name demoGroup
-New-AzTag -ResourceId $resourceGroup.ResourceId -tag $tags
-```
-
-To update the tags for a resource group, use:
-
-```azurepowershell-interactive
-$tags = @{"CostCenter"="00123"; "Environment"="Production"}
-$resourceGroup = Get-AzResourceGroup -Name demoGroup
-Update-AzTag -ResourceId $resourceGroup.ResourceId -Tag $tags -Operation Merge
-```
-
-To add a new set of tags to a subscription, use:
-
-```azurepowershell-interactive
-$tags = @{"CostCenter"="00123"; "Environment"="Dev"}
-$subscription = (Get-AzSubscription -SubscriptionName "Example Subscription").Id
-New-AzTag -ResourceId "/subscriptions/$subscription" -Tag $tags
-```
-
-To update the tags for a subscription, use:
-
-```azurepowershell-interactive
-$tags = @{"Team"="Web Apps"}
-$subscription = (Get-AzSubscription -SubscriptionName "Example Subscription").Id
-Update-AzTag -ResourceId "/subscriptions/$subscription" -Tag $tags -Operation Merge
-```
-
-You may have more than one resource with the same name in a resource group. In that case, you can set each resource with the following commands:
-
-```azurepowershell-interactive
-$resource = Get-AzResource -ResourceName sqlDatabase1 -ResourceGroupName examplegroup
-$resource | ForEach-Object { Update-AzTag -Tag @{ "Dept"="IT"; "Environment"="Test" } -ResourceId $_.ResourceId -Operation Merge }
-```
-
-### List tags
-
-To get the tags for a resource, resource group, or subscription, use the [Get-AzTag](/powershell/module/az.resources/get-aztag) command and pass the resource ID of the entity.
-
-To see the tags for a resource, use:
-
-```azurepowershell-interactive
-$resource = Get-AzResource -Name demoStorage -ResourceGroup demoGroup
-Get-AzTag -ResourceId $resource.id
-```
-
-To see the tags for a resource group, use:
-
-```azurepowershell-interactive
-$resourceGroup = Get-AzResourceGroup -Name demoGroup
-Get-AzTag -ResourceId $resourceGroup.ResourceId
-```
-
-To see the tags for a subscription, use:
-
-```azurepowershell-interactive
-$subscription = (Get-AzSubscription -SubscriptionName "Example Subscription").Id
-Get-AzTag -ResourceId "/subscriptions/$subscription"
-```
-
-### List by tag
-
-To get resources that have a specific tag name and value, use:
-
-```azurepowershell-interactive
-(Get-AzResource -Tag @{ "CostCenter"="00123"}).Name
-```
-
-To get resources that have a specific tag name with any tag value, use:
-
-```azurepowershell-interactive
-(Get-AzResource -TagName "Dept").Name
-```
-
-To get resource groups that have a specific tag name and value, use:
-
-```azurepowershell-interactive
-(Get-AzResourceGroup -Tag @{ "CostCenter"="00123" }).ResourceGroupName
-```
-
-### Remove tags
-
-To remove specific tags, use `Update-AzTag` and set `-Operation` to `Delete`. Pass the resource IDs of the tags you want to delete.
-
-```azurepowershell-interactive
-$removeTags = @{"Project"="ECommerce"; "Team"="Web"}
-Update-AzTag -ResourceId $resource.id -Tag $removeTags -Operation Delete
-```
-
-The specified tags are removed.
-
-```output
-Properties :
- Name Value
- ========== =====
- CostCenter 00123
-```
-
-To remove all tags, use the [Remove-AzTag](/powershell/module/az.resources/remove-aztag) command.
-
-```azurepowershell-interactive
-$subscription = (Get-AzSubscription -SubscriptionName "Example Subscription").Id
-Remove-AzTag -ResourceId "/subscriptions/$subscription"
-```
-
-## Azure CLI
-
-### Apply tags
-
-Azure CLI offers two commands to apply tags: [az tag create](/cli/azure/tag#az-tag-create) and [az tag update](/cli/azure/tag#az-tag-update). You need to have the Azure CLI 2.10.0 version or later. You can check your version with `az version`. To update or install it, see [Install the Azure CLI](/cli/azure/install-azure-cli).
-
-The `az tag create` replaces all tags on the resource, resource group, or subscription. When you call the command, pass the resource ID of the entity you want to tag.
-
-The following example applies a set of tags to a storage account:
-
-```azurecli-interactive
-resource=$(az resource show -g demoGroup -n demoStorage --resource-type Microsoft.Storage/storageAccounts --query "id" --output tsv)
-az tag create --resource-id $resource --tags Dept=Finance Status=Normal
-```
-
-When the command completes, notice that the resource has two tags.
-
-```output
-"properties": {
- "tags": {
- "Dept": "Finance",
- "Status": "Normal"
- }
-},
-```
-
-If you run the command again, but this time with different tags, notice that the earlier tags disappear.
-
-```azurecli-interactive
-az tag create --resource-id $resource --tags Team=Compliance Environment=Production
-```
-
-```output
-"properties": {
- "tags": {
- "Environment": "Production",
- "Team": "Compliance"
- }
-},
-```
-
-To add tags to a resource that already has tags, use `az tag update`. Set the `--operation` parameter to `Merge`.
-
-```azurecli-interactive
-az tag update --resource-id $resource --operation Merge --tags Dept=Finance Status=Normal
-```
-
-Notice that the existing tags grow with the addition of the two new tags.
-
-```output
-"properties": {
- "tags": {
- "Dept": "Finance",
- "Environment": "Production",
- "Status": "Normal",
- "Team": "Compliance"
- }
-},
-```
-
-Each tag name can have only one value. If you provide a new value for a tag, the new tag replaces the old value, even if you use the merge operation. The following example changes the `Status` tag from _Normal_ to _Green_.
-
-```azurecli-interactive
-az tag update --resource-id $resource --operation Merge --tags Status=Green
-```
-
-```output
-"properties": {
- "tags": {
- "Dept": "Finance",
- "Environment": "Production",
- "Status": "Green",
- "Team": "Compliance"
- }
-},
-```
-
-When you set the `--operation` parameter to `Replace`, the new set of tags replaces the existing tags.
-
-```azurecli-interactive
-az tag update --resource-id $resource --operation Replace --tags Project=ECommerce CostCenter=00123 Team=Web
-```
-
-Only the new tags remain on the resource.
-
-```output
-"properties": {
- "tags": {
- "CostCenter": "00123",
- "Project": "ECommerce",
- "Team": "Web"
- }
-},
-```
-
-The same commands also work with resource groups or subscriptions. Pass them in the identifier of the resource group or subscription you want to tag.
-
-To add a new set of tags to a resource group, use:
-
-```azurecli-interactive
-group=$(az group show -n demoGroup --query id --output tsv)
-az tag create --resource-id $group --tags Dept=Finance Status=Normal
-```
-
-To update the tags for a resource group, use:
-
-```azurecli-interactive
-az tag update --resource-id $group --operation Merge --tags CostCenter=00123 Environment=Production
-```
-
-To add a new set of tags to a subscription, use:
-
-```azurecli-interactive
-sub=$(az account show --subscription "Demo Subscription" --query id --output tsv)
-az tag create --resource-id /subscriptions/$sub --tags CostCenter=00123 Environment=Dev
-```
-
-To update the tags for a subscription, use:
-
-```azurecli-interactive
-az tag update --resource-id /subscriptions/$sub --operation Merge --tags Team="Web Apps"
-```
-
-### List tags
-
-To get the tags for a resource, resource group, or subscription, use the [az tag list](/cli/azure/tag#az-tag-list) command and pass the resource ID of the entity.
-
-To see the tags for a resource, use:
-
-```azurecli-interactive
-resource=$(az resource show -g demoGroup -n demoStorage --resource-type Microsoft.Storage/storageAccounts --query "id" --output tsv)
-az tag list --resource-id $resource
-```
-
-To see the tags for a resource group, use:
-
-```azurecli-interactive
-group=$(az group show -n demoGroup --query id --output tsv)
-az tag list --resource-id $group
-```
-
-To see the tags for a subscription, use:
-
-```azurecli-interactive
-sub=$(az account show --subscription "Demo Subscription" --query id --output tsv)
-az tag list --resource-id /subscriptions/$sub
-```
-
-### List by tag
-
-To get resources that have a specific tag name and value, use:
-
-```azurecli-interactive
-az resource list --tag CostCenter=00123 --query [].name
-```
-
-To get resources that have a specific tag name with any tag value, use:
-
-```azurecli-interactive
-az resource list --tag Team --query [].name
-```
-
-To get resource groups that have a specific tag name and value, use:
-
-```azurecli-interactive
-az group list --tag Dept=Finance
-```
-
-### Remove tags
-
-To remove specific tags, use `az tag update` and set `--operation` to `Delete`. Pass the resource ID of the tags you want to delete.
-
-```azurecli-interactive
-az tag update --resource-id $resource --operation Delete --tags Project=ECommerce Team=Web
-```
-
-You've removed the specified tags.
-
-```output
-"properties": {
- "tags": {
- "CostCenter": "00123"
- }
-},
-```
-
-To remove all tags, use the [az tag delete](/cli/azure/tag#az-tag-delete) command.
-
-```azurecli-interactive
-az tag delete --resource-id $resource
-```
-
-### Handling spaces
-
-If your tag names or values include spaces, enclose them in quotation marks.
-
-```azurecli-interactive
-az tag update --resource-id $group --operation Merge --tags "Cost Center"=Finance-1222 Location="West US"
-```
-
-## ARM templates
-
-You can tag resources, resource groups, and subscriptions during deployment with an ARM template.
-
-> [!NOTE]
-> The tags you apply through an ARM template or Bicep file overwrite any existing tags.
-
-### Apply values
-
-The following example deploys a storage account with three tags. Two of the tags (`Dept` and `Environment`) are set to literal values. One tag (`LastDeployed`) is set to a parameter that defaults to the current date.
-
-# [JSON](#tab/json)
-
-```json
-{
- "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
- "contentVersion": "1.0.0.0",
- "parameters": {
- "utcShort": {
- "type": "string",
- "defaultValue": "[utcNow('d')]"
- },
- "location": {
- "type": "string",
- "defaultValue": "[resourceGroup().location]"
- }
- },
- "resources": [
- {
- "type": "Microsoft.Storage/storageAccounts",
- "apiVersion": "2021-04-01",
- "name": "[concat('storage', uniqueString(resourceGroup().id))]",
- "location": "[parameters('location')]",
- "sku": {
- "name": "Standard_LRS"
- },
- "kind": "Storage",
- "tags": {
- "Dept": "Finance",
- "Environment": "Production",
- "LastDeployed": "[parameters('utcShort')]"
- },
- "properties": {}
- }
- ]
-}
-```
-
-# [Bicep](#tab/bicep)
-
-```Bicep
-param location string = resourceGroup().location
-param utcShort string = utcNow('d')
-
-resource stgAccount 'Microsoft.Storage/storageAccounts@2021-04-01' = {
- name: 'storage${uniqueString(resourceGroup().id)}'
- location: location
- sku: {
- name: 'Standard_LRS'
- }
- kind: 'Storage'
- tags: {
- Dept: 'Finance'
- Environment: 'Production'
- LastDeployed: utcShort
- }
-}
-```
---
-### Apply an object
-
-You can define an object parameter that stores several tags and apply that object to the tag element. This approach provides more flexibility than the previous example because the object can have different properties. Each property in the object becomes a separate tag for the resource. The following example has a parameter named `tagValues` that's applied to the tag element.
-
-# [JSON](#tab/json)
-
-```json
-{
- "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
- "contentVersion": "1.0.0.0",
- "parameters": {
- "location": {
- "type": "string",
- "defaultValue": "[resourceGroup().location]"
- },
- "tagValues": {
- "type": "object",
- "defaultValue": {
- "Dept": "Finance",
- "Environment": "Production"
- }
- }
- },
- "resources": [
- {
- "type": "Microsoft.Storage/storageAccounts",
- "apiVersion": "2021-04-01",
- "name": "[concat('storage', uniqueString(resourceGroup().id))]",
- "location": "[parameters('location')]",
- "sku": {
- "name": "Standard_LRS"
- },
- "kind": "Storage",
- "tags": "[parameters('tagValues')]",
- "properties": {}
- }
- ]
-}
-```
-
-# [Bicep](#tab/bicep)
-
-```Bicep
-param location string = resourceGroup().location
-param tagValues object = {
- Dept: 'Finance'
- Environment: 'Production'
-}
-
-resource stgAccount 'Microsoft.Storage/storageAccounts@2021-04-01' = {
- name: 'storage${uniqueString(resourceGroup().id)}'
- location: location
- sku: {
- name: 'Standard_LRS'
- }
- kind: 'Storage'
- tags: tagValues
-}
-```
---
-### Apply a JSON string
-
-To store many values in a single tag, apply a JSON string that represents the values. The entire JSON string is stored as one tag that can't exceed 256 characters. The following example has a single tag named `CostCenter` that contains several values from a JSON string:
-
-# [JSON](#tab/json)
-
-```json
-{
- "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
- "contentVersion": "1.0.0.0",
- "parameters": {
- "location": {
- "type": "string",
- "defaultValue": "[resourceGroup().location]"
- }
- },
- "resources": [
- {
- "type": "Microsoft.Storage/storageAccounts",
- "apiVersion": "2021-04-01",
- "name": "[concat('storage', uniqueString(resourceGroup().id))]",
- "location": "[parameters('location')]",
- "sku": {
- "name": "Standard_LRS"
- },
- "kind": "Storage",
- "tags": {
- "CostCenter": "{\"Dept\":\"Finance\",\"Environment\":\"Production\"}"
- },
- "properties": {}
- }
- ]
-}
-```
-
-# [Bicep](#tab/bicep)
-
-```Bicep
-param location string = resourceGroup().location
-
-resource stgAccount 'Microsoft.Storage/storageAccounts@2021-04-01' = {
- name: 'storage${uniqueString(resourceGroup().id)}'
- location: location
- sku: {
- name: 'Standard_LRS'
- }
- kind: 'Storage'
- tags: {
- CostCenter: '{"Dept":"Finance","Environment":"Production"}'
- }
-}
-```
---
-### Apply tags from resource group
-
-To apply tags from a resource group to a resource, use the [resourceGroup()](../templates/template-functions-resource.md#resourcegroup) function. When you get the tag value, use the `tags[tag-name]` syntax instead of the `tags.tag-name` syntax, because some characters aren't parsed correctly in the dot notation.
-
-# [JSON](#tab/json)
-
-```json
-{
- "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
- "contentVersion": "1.0.0.0",
- "parameters": {
- "location": {
- "type": "string",
- "defaultValue": "[resourceGroup().location]"
- }
- },
- "resources": [
- {
- "type": "Microsoft.Storage/storageAccounts",
- "apiVersion": "2021-04-01",
- "name": "[concat('storage', uniqueString(resourceGroup().id))]",
- "location": "[parameters('location')]",
- "sku": {
- "name": "Standard_LRS"
- },
- "kind": "Storage",
- "tags": {
- "Dept": "[resourceGroup().tags['Dept']]",
- "Environment": "[resourceGroup().tags['Environment']]"
- },
- "properties": {}
- }
- ]
-}
-```
-
-# [Bicep](#tab/bicep)
-
-```Bicep
-param location string = resourceGroup().location
-
-resource stgAccount 'Microsoft.Storage/storageAccounts@2021-04-01' = {
- name: 'storage${uniqueString(resourceGroup().id)}'
- location: location
- sku: {
- name: 'Standard_LRS'
- }
- kind: 'Storage'
- tags: {
- Dept: resourceGroup().tags['Dept']
- Environment: resourceGroup().tags['Environment']
- }
-}
-```
---
-### Apply tags to resource groups or subscriptions
-
-You can add tags to a resource group or subscription by deploying the `Microsoft.Resources/tags` resource type. You can apply the tags to the target resource group or subscription you want to deploy. Each time you deploy the template you replace any previous tags.
-
-# [JSON](#tab/json)
-
-```json
-{
- "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
- "contentVersion": "1.0.0.0",
- "parameters": {
- "tagName": {
- "type": "string",
- "defaultValue": "TeamName"
- },
- "tagValue": {
- "type": "string",
- "defaultValue": "AppTeam1"
- }
- },
- "resources": [
- {
- "type": "Microsoft.Resources/tags",
- "name": "default",
- "apiVersion": "2021-04-01",
- "properties": {
- "tags": {
- "[parameters('tagName')]": "[parameters('tagValue')]"
- }
- }
- }
- ]
-}
-```
-
-# [Bicep](#tab/bicep)
-
-```Bicep
-param tagName string = 'TeamName'
-param tagValue string = 'AppTeam1'
-
-resource applyTags 'Microsoft.Resources/tags@2021-04-01' = {
- name: 'default'
- properties: {
- tags: {
- '${tagName}': tagValue
- }
- }
-}
-```
---
-To apply the tags to a resource group, use either Azure PowerShell or Azure CLI. Deploy to the resource group that you want to tag.
-
-```azurepowershell-interactive
-New-AzResourceGroupDeployment -ResourceGroupName exampleGroup -TemplateFile https://raw.githubusercontent.com/Azure/azure-docs-json-samples/master/azure-resource-manager/tags.json
-```
-
-```azurecli-interactive
-az deployment group create --resource-group exampleGroup --template-uri https://raw.githubusercontent.com/Azure/azure-docs-json-samples/master/azure-resource-manager/tags.json
-```
-
-To apply the tags to a subscription, use either PowerShell or Azure CLI. Deploy to the subscription that you want to tag.
-
-```azurepowershell-interactive
-New-AzSubscriptionDeployment -name tagresourcegroup -Location westus2 -TemplateUri https://raw.githubusercontent.com/Azure/azure-docs-json-samples/master/azure-resource-manager/tags.json
-```
-
-```azurecli-interactive
-az deployment sub create --name tagresourcegroup --location westus2 --template-uri https://raw.githubusercontent.com/Azure/azure-docs-json-samples/master/azure-resource-manager/tags.json
-```
-
-For more information about subscription deployments, see [Create resource groups and resources at the subscription level](../templates/deploy-to-subscription.md).
-
-The following template adds the tags from an object to either a resource group or subscription.
-
-# [JSON](#tab/json)
-
-```json
-{
- "$schema": "https://schema.management.azure.com/schemas/2018-05-01/subscriptionDeploymentTemplate.json#",
- "contentVersion": "1.0.0.0",
- "parameters": {
- "tags": {
- "type": "object",
- "defaultValue": {
- "TeamName": "AppTeam1",
- "Dept": "Finance",
- "Environment": "Production"
- }
- }
- },
- "resources": [
- {
- "type": "Microsoft.Resources/tags",
- "apiVersion": "2021-04-01",
- "name": "default",
- "properties": {
- "tags": "[parameters('tags')]"
- }
- }
- ]
-}
-```
-
-# [Bicep](#tab/bicep)
-
-```Bicep
-targetScope = 'subscription'
-
-param tagObject object = {
- TeamName: 'AppTeam1'
- Dept: 'Finance'
- Environment: 'Production'
-}
-
-resource applyTags 'Microsoft.Resources/tags@2021-04-01' = {
- name: 'default'
- properties: {
- tags: tagObject
- }
-}
-```
---
-## Portal
--
-## REST API
-
-To work with tags through the Azure REST API, use:
-
-* [Tags - Create Or Update At Scope](/rest/api/resources/tags/createorupdateatscope) (PUT operation)
-* [Tags - Update At Scope](/rest/api/resources/tags/updateatscope) (PATCH operation)
-* [Tags - Get At Scope](/rest/api/resources/tags/getatscope) (GET operation)
-* [Tags - Delete At Scope](/rest/api/resources/tags/deleteatscope) (DELETE operation)
-
-## SDKs
-
-For examples of applying tags with SDKs, see:
-
-* [.NET](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/resourcemanager/Azure.ResourceManager/samples/Sample2_ManagingResourceGroups.md)
-* [Java](https://github.com/Azure-Samples/resources-java-manage-resource-group/blob/master/src/main/java/com/azure/resourcemanager/resources/samples/ManageResourceGroup.java)
-* [JavaScript](https://github.com/Azure-Samples/azure-sdk-for-js-samples/blob/main/samples/resources/resources_example.ts)
-* [Python](https://github.com/MicrosoftDocs/samples/tree/main/Azure-Samples/azure-samples-python-management/resources)
## Inherit tags

Resources don't inherit the tags you apply to a resource group or a subscription. To apply tags from a subscription or resource group to the resources, see [Azure Policies - tags](tag-policies.md).
The following limitations apply to tags:
* Not all resource types support tags. To determine if you can apply a tag to a resource type, see [Tag support for Azure resources](tag-support.md).
* For recommendations on how to implement a tagging strategy, see [Resource naming and tagging decision guide](/azure/cloud-adoption-framework/decision-guides/resource-tagging/?toc=/azure/azure-resource-manager/management/toc.json).
+* For steps on how to work with tags, see:
+
+ * [Portal](tag-resources-portal.md)
+ * [Azure CLI](tag-resources-cli.md)
+ * [Azure PowerShell](tag-resources-powershell.md)
+ * [Python](tag-resources-python.md)
+ * [ARM templates](tag-resources-templates.md)
+ * [Bicep](tag-resources-bicep.md)
azure-resource-manager Resource Declaration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/resource-declaration.md
For more information, see [Set resource location in ARM template](resource-locat
## Set tags
-You can apply tags to a resource during deployment. Tags help you logically organize your deployed resources. For examples of the different ways you can specify the tags, see [ARM template tags](../management/tag-resources.md#arm-templates).
+You can apply tags to a resource during deployment. Tags help you logically organize your deployed resources. For examples of the different ways you can specify the tags, see [ARM template tags](../management/tag-resources-templates.md).
## Set resource-specific properties
azure-resource-manager Template Functions Scope https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/template-functions-scope.md
A common use of the resourceGroup function is to create resources in the same lo
} ```
-You can also use the `resourceGroup` function to apply tags from the resource group to a resource. For more information, see [Apply tags from resource group](../management/tag-resources.md#apply-tags-from-resource-group).
+You can also use the `resourceGroup` function to apply tags from the resource group to a resource. For more information, see [Apply tags from resource group](../management/tag-resources-templates.md#apply-tags-from-resource-group).
When using nested templates to deploy to multiple resource groups, you can specify the scope for evaluating the `resourceGroup` function. For more information, see [Deploy Azure resources to more than one subscription or resource group](./deploy-to-resource-group.md).
azure-sql-edge Data Retention Cleanup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql-edge/data-retention-cleanup.md
Data retention can be enabled on the database and on any of the underlying tables individually, allowing users to create flexible aging policies for their tables and databases. Applying data retention is simple: it requires only one parameter to be set during table creation or as part of an ALTER TABLE operation.
-After data retention policy is defiend for a database and the underlying table, a background time timer task runs to remove any obsolete records from the table enabled for data retention. Identification of matching rows and their removal from the table occur transparently, in the background task that is scheduled and run by the system. Age condition for the table rows is checked based on the column used as the `filter_column` in the table definition. If retention period, for example, is set to one week, table rows eligible for cleanup satisfy either of the following condition:
+After a data retention policy is defined for a database and the underlying table, a background timer task runs to remove any obsolete records from the table enabled for data retention. Identification of matching rows and their removal from the table occur transparently, in the background task that is scheduled and run by the system. The age condition for the table rows is checked based on the column used as the `filter_column` in the table definition. If the retention period, for example, is set to one week, table rows eligible for cleanup satisfy either of the following conditions:
- If the filter column uses the DATETIMEOFFSET data type, the condition is `filter_column < DATEADD(WEEK, -1, SYSUTCDATETIME())`.
- Otherwise, the condition is `filter_column < DATEADD(WEEK, -1, SYSDATETIME())`.
The data retention cleanup operation comprises two phases.
- Discovery phase - In this phase, the cleanup operation identifies all the tables within the user databases to build a list for cleanup. Discovery runs once a day.
- Cleanup phase - In this phase, cleanup is run against all tables with finite data retention that were identified in the discovery phase. If the cleanup operation can't be performed on a table, that table is skipped in the current run and will be retried in the next iteration. The following principles are used during cleanup:
  - If an obsolete row is locked by another transaction, that row is skipped.
- - Clean up runs with a default 5 seconds lock timeout setting. If the locks cannot be acquired on the tables within the timeout window, the table is skipped in the current run and will be retried in the next iteration.
+   - Cleanup runs with a default lock timeout setting of 5 seconds. If the locks cannot be acquired on a table within the timeout window, that table is skipped in the current run and will be retried in the next iteration.
  - If there is an error during cleanup of a table, that table is skipped and will be picked up in the next iteration.

## Manual cleanup
Additionally, a new ring buffer type named `RING_BUFFER_DATA_RETENTION_CLEANUP`
## Next Steps - [Data Retention Policy](data-retention-overview.md)-- [Enable and Disable Data Retention Policies](data-retention-enable-disable.md)
+- [Enable and Disable Data Retention Policies](data-retention-enable-disable.md)
azure-vmware Azure Vmware Solution Known Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/azure-vmware-solution-known-issues.md
description: This article provides details about the known issues of Azure VMwar
Previously updated : 4/6/2023 Last updated : 4/20/2023 # Known issues: Azure VMware Solution
Refer to the table below to find details about resolution dates or possible work
| :- | : | :- | :- | | [VMSA-2021-002 ESXiArgs](https://www.vmware.com/security/advisories/VMSA-2021-0002.html) OpenSLP vulnerability publicized in February 2023 | 2021 | [Disable OpenSLP service](https://kb.vmware.com/s/article/76372) | February 2021 - Resolved in [ESXi 7.0 U3c](concepts-private-clouds-clusters.md#vmware-software-versions) | | After my private cloud NSX-T Data Center upgrade to version [3.2.2](https://docs.vmware.com/en/VMware-NSX/3.2.2/rn/vmware-nsxt-data-center-322-release-notes/https://docsupdatetracker.net/index.html), the NSX-T Manager **DNS - Forwarder Upstream Server Timeout** alarm is raised | February 2023 | [Enable private cloud internet Access](concepts-design-public-internet-access.md), alarm is raised because NSX-T Manager cannot access the configured CloudFlare DNS server. Otherwise, [change the default DNS zone to point to a valid and reachable DNS server.](configure-dns-azure-vmware-solution.md) | February 2023 |
-| When first logging into the vSphere Client, the **Cluster-n: vSAN health alarms are suppressed** alert is active | 2021 | This should be considered an informational message, since Microsoft manages the service. Select the **Reset to Green** link to clear it. | 2021 |
+| When first logging into the vSphere Client, the **Cluster-n: vSAN health alarms are suppressed** alert is active in the vSphere Client | 2021 | This should be considered an informational message, since Microsoft manages the service. Select the **Reset to Green** link to clear it. | 2021 |
| When adding a cluster to my private cloud, the **Cluster-n: vSAN physical disk alarm 'Operation'** and **Cluster-n: vSAN cluster alarm 'vSAN Cluster Configuration Consistency'** alerts are active in the vSphere Client | 2021 | This should be considered an informational message, since Microsoft manages the service. Select the **Reset to Green** link to clear it. | 2021 | In this article, you learned about the current known issues with the Azure VMware Solution.
azure-vmware Azure Vmware Solution Platform Updates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/azure-vmware-solution-platform-updates.md
description: Learn about the platform updates to Azure VMware Solution.
Previously updated : 3/16/2023 Last updated : 4/20/2023 # What's new in Azure VMware Solution
Microsoft will regularly apply important updates to the Azure VMware Solution fo
## April 2023
-**HCX Run commands**
+**VMware HCX Run Commands**
-Introducing run commands for HCX on Azure VMware solutions. You can use these run commands to restart HCX cloud manager in your Azure VMware solution private cloud. Additionally, you can also scale HCX cloud manager using run commands. To learn how to use run commands for HCX, see [Use HCX Run commands](use-hcx-run-commands.md).
+Introducing Run Commands for VMware HCX on Azure VMware Solution. You can use these Run Commands to restart VMware HCX Cloud Manager in your Azure VMware Solution private cloud. You can also scale VMware HCX Cloud Manager by using Run Commands. To learn how to use Run Commands for VMware HCX, see [Use VMware HCX Run commands](use-hcx-run-commands.md).
## February 2023
The data in Azure Log Analytics offer insights into issues by searching using Ku
**New SKU availability - AV36P and AV52 nodes**
-The AV36P is now available in the West US Region.ΓÇ» This node size is used for memory and storage workloads by offering increased Memory and NVME based SSDs.ΓÇ»
+The AV36P is now available in the West US Region. This node size is used for memory-intensive and storage-intensive workloads, offering increased memory and NVMe-based SSDs.
AV52 is now available in the East US 2 Region. This node size is used for intensive workloads with higher physical core count, additional memory, and larger capacity NVME based SSDs. **Customer-managed keys using Azure Key Vault**
-You can use customer-managed keys to bring and manage your master encryption keys to encrypt van. Azure Key Vault allows you to store your privately managed keys securely to access your Azure VMware Solution data.
+You can use customer-managed keys to bring and manage your master encryption keys to encrypt vSAN. Azure Key Vault allows you to store your privately managed keys securely to access your Azure VMware Solution data.
**Azure NetApp Files - more storage options available**
For pricing and region availability, see the [Azure VMware Solution pricing page
## July 2022
-HCX cloud manager in Azure VMware Solution can now be accessible over a public IP address. You can pair HCX sites and create a service mesh from on-premises to Azure VMware Solution private cloud using Public IP.
-HCX with public IP is especially useful in cases where On-premises sites aren't connected to Azure via Express Route or VPN. HCX service mesh appliances can be configured with public IPs to avoid lower tunnel MTUs due to double encapsulation if a VPN is used for on-premises to cloud connections. For more information, please see [Enable HCX over the internet](./enable-hcx-access-over-internet.md)
+VMware HCX Cloud Manager in Azure VMware Solution can now be accessible over a public IP address. You can pair VMware HCX sites and create a service mesh from on-premises to Azure VMware Solution private cloud using Public IP.
+
+VMware HCX with public IP is especially useful in cases where On-premises sites aren't connected to Azure via ExpressRoute or VPN. VMware HCX service mesh appliances can be configured with public IPs to avoid lower tunnel MTUs due to double encapsulation if a VPN is used for on-premises to cloud connections. For more information, please see [Enable VMware HCX over the internet](./enable-hcx-access-over-internet.md)
All new Azure VMware Solution private clouds are now deployed with VMware vCenter Server version 7.0 Update 3c and ESXi version 7.0 Update 3c.
Any existing private clouds in the above mentioned regions will also be upgraded
## May 2022
-All new Azure VMware Solution private clouds in regions (Germany West Central, Australia East, Central US and UK West), are now deployed with VMware vCenter Server version 7.0 Update 3c and ESXi version 7.0 Update 3c.
+All new Azure VMware Solution private clouds in regions (Germany West Central, Australia East, Central US and UK West), are now deployed with VMware vCenter Server version 7.0 Update 3c and ESXi version 7.0 Update 3c.
+ Any existing private clouds in the previously mentioned regions will be upgraded to those versions. For more information, please see [VMware ESXi 7.0 Update 3c Release Notes](https://docs.vmware.com/en/VMware-vSphere/7.0/rn/vsphere-esxi-70u3c-release-notes.html) and [VMware vCenter Server 7.0 Update 3c Release Notes](https://docs.vmware.com/en/VMware-vSphere/7.0/rn/vsphere-vcenter-server-70u3c-release-notes.html). You'll receive a notification through Azure Service Health that includes the timeline of the upgrade. You can reschedule an upgrade as needed. This notification also provides details on the upgraded component, its effect on workloads, private cloud access, and other Azure services.
No further action is required.
## December 2021
-Azure VMware Solution (AVS) has completed maintenance activities to address critical vulnerabilities in Apache Log4j. The fixes documented in the VMware security advisory [VMSA-2021-0028.6](https://www.vmware.com/security/advisories/VMSA-2021-0028.html) to address CVE-2021-44228 and CVE-2021-45046 have been applied to these AVS managed VMware products: vCenter Server, NSX-T Data Center, SRM and HCX. We strongly encourage customers to apply the fixes to on-premises HCX connector appliances.
+Azure VMware Solution has completed maintenance activities to address critical vulnerabilities in Apache Log4j. The fixes documented in the VMware security advisory [VMSA-2021-0028.6](https://www.vmware.com/security/advisories/VMSA-2021-0028.html) to address CVE-2021-44228 and CVE-2021-45046 have been applied to these Azure VMware Solution managed VMware products: vCenter Server, NSX-T Data Center, SRM and HCX. We strongly encourage customers to apply the fixes to on-premises HCX connector appliances.
- We also recommend customers to review the security advisory and apply the fixes for other affected VMware products or workloads.
+We also recommend that customers review the security advisory and apply the fixes for other affected VMware products or workloads.
- If you need any assistance or have questions, [contact us](https://portal.azure.com/#home).
+If you need any assistance or have questions, [contact us](https://portal.azure.com/#home).
-VMware has announced a security advisory [VMSA-2021-0028](https://www.vmware.com/security/advisories/VMSA-2021-0028.html), addressing a critical vulnerability in Apache Log4j identified by CVE-2021-44228. Azure VMware Solution is actively monitoring this issue. We're addressing this issue by applying VMware recommended workarounds or patches for AVS managed VMware components as they become available.
+VMware has announced a security advisory [VMSA-2021-0028](https://www.vmware.com/security/advisories/VMSA-2021-0028.html), addressing a critical vulnerability in Apache Log4j identified by CVE-2021-44228. Azure VMware Solution is actively monitoring this issue. We're addressing this issue by applying VMware recommended workarounds or patches for Azure VMware Solution managed VMware components as they become available.
- Note that you may experience intermittent connectivity to these components when we apply a fix. We strongly recommend that you read the advisory and patch or apply the recommended workarounds for other VMware products you may have deployed in Azure VMware Solution. If you need any assistance or have questions, [contact us](https://portal.azure.com).
+Note that you may experience intermittent connectivity to these components when we apply a fix. We strongly recommend that you read the advisory and patch or apply the recommended workarounds for other VMware products you may have deployed in Azure VMware Solution. If you need any assistance or have questions, [contact us](https://portal.azure.com).
## November 2021
No further action is required.
Per VMware security advisory [VMSA-2021-0020](https://www.vmware.com/security/advisories/VMSA-2021-0020.html), multiple vulnerabilities in the VMware vCenter Server have been reported to VMware. To address the vulnerabilities (CVE-2021-21991, CVE-2021-21992, CVE-2021-21993, CVE-2021-22005, CVE-2021-22006, CVE-2021-22007, CVE-2021-22008, CVE-2021-22009, CVE-2021-22010, CVE-2021-22011, CVE-2021-22012,CVE-2021-22013, CVE-2021-22014, CVE-2021-22015, CVE-2021-22016, CVE-2021-22017, CVE-2021-22018, CVE-2021-22019, CVE-2021-22020) reported in VMware security advisory [VMSA-2021-0020](https://www.vmware.com/security/advisories/VMSA-2021-0020.html), vCenter Server has been updated to 6.7 Update 3o in all Azure VMware Solution private clouds. All new Azure VMware Solution private clouds are deployed with vCenter Server version 6.7 Update 3o. For more information, see [VMware vCenter Server 6.7 Update 3o Release Notes](https://docs.vmware.com/en/VMware-vSphere/6.7/rn/vsphere-vcenter-server-67u3o-release-notes.html). No further action is required.
-All new Azure VMware Solution private clouds are now deployed with ESXi version ESXi670-202103001 (Build number: 17700523). ESXi hosts in existing private clouds have been patched to this version. For more information on this ESXi version, see [VMware ESXi 6.7, Patch Release ESXi670-202103001](https://docs.vmware.com/en/VMware-vSphere/6.7/rn/esxi670-202103001.html).
+All new Azure VMware Solution private clouds are now deployed with ESXi version ESXi670-202103001 (Build number: 17700523). ESXi hosts in existing private clouds have been patched to this version. For more information on this ESXi version, see [VMware ESXi 6.7, Patch Release ESXi670-202103001](https://docs.vmware.com/en/VMware-vSphere/6.7/rn/esxi670-202103001.html).
## July 2021
-All new Azure VMware Solution private clouds are now deployed with NSX-T Data Center version [!INCLUDE [nsxt-version](includes/nsxt-version.md)]. NSX-T Data Center version in existing private clouds will be upgraded through September 2021 to NSX-T Data Center [!INCLUDE [nsxt-version](includes/nsxt-version.md)] release.
+All new Azure VMware Solution private clouds are now deployed with NSX-T Data Center version 3.1.1. NSX-T Data Center version in existing private clouds will be upgraded through September 2021 to NSX-T Data Center 3.1.1 release.
You'll receive an email with the planned maintenance date and time. You can reschedule an upgrade. The email also provides details on the upgraded component, its effect on workloads, private cloud access, and other Azure services.
azure-vmware Use Hcx Run Commands https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/use-hcx-run-commands.md
Title: Use HCX Run Commands
-description: Use HCX Run Commands in Azure VMware Solution
+ Title: Use VMware HCX Run Commands
+description: Use VMware HCX Run Commands in Azure VMware Solution
Previously updated : 04/11/2023 Last updated : 04/20/2023
-# Use HCX Run Commands
-In this article, you learn how to use HCX run commands. Use run commands to perform operations that would normally require elevated privileges through a collection of PowerShell cmdlets. This document outlines the available HCX run commands and how to use them.
+# Use VMware HCX Run Commands
+In this article, you learn how to use VMware HCX Run Commands. Use run commands to perform operations that would normally require elevated privileges through a collection of PowerShell cmdlets. This document outlines the available VMware HCX Run Commands and how to use them.
-This article describes two HCX commands: **Restart HCX Manager** and **Scale HCX Manager**.
+This article describes two VMware HCX commands: **Restart HCX Manager** and **Scale HCX Manager**.
-## Restart HCX Manager
+## Restart VMware HCX Manager
-This Command checks for active HCX migrations and replications. If none are found, it restarts the HCX cloud manager (HCX VM's guest OS).
+This Command checks for active VMware HCX migrations and replications. If none are found, it restarts the VMware HCX Cloud Manager (VMware HCX VM's guest OS).
-1. Navigate to the run Command panel in an Azure VMware private cloud on the Azure portal.
+1. Navigate to the Run Command panel in an Azure VMware Solution private cloud on the Azure portal.
:::image type="content" source="media/hcx-commands/run-command-private-cloud.png" alt-text="Diagram that lists all available Run command packages and Run commands." border="false" lightbox="media/hcx-commands/run-command-private-cloud.png":::
Optional run command parameters.
**Force Parameter** - If there are any active HCX migrations or replications, this parameter skips the check for them. If the virtual machine is in a powered-off state, this parameter powers the machine on. **Scenario 1**: A customer has a migration that has been stuck in an active state for weeks and needs to restart HCX for a separate issue. Without this parameter, the script fails because it detects the active migration.
- **Scenario 2**: The HCX Manager is powered off and the customer would like to power it back on.
+ **Scenario 2**: The VMware HCX Cloud Manager is powered off and the customer would like to power it back on.
:::image type="content" source="media/hcx-commands/restart-command.png" alt-text="Diagram that shows run command parameters for Restart-HcxManager command." border="false" lightbox="media/hcx-commands/restart-command.png":::
+1. Wait for the command to finish. It may take a few minutes for the VMware HCX appliance to come online.
+1. Wait for command to finish. It may take few minutes for the VMware HCX appliance to come online.
-## Scale HCX manager
-Use the Scale HCX manager run command to increase the resource allocation of your HCX Manager virtual machine to 8 vCPUs and 24-GB RAM from the default setting of 4 vCPUs and 12-GB RAM, ensuring scalability.
+## Scale VMware HCX Cloud Manager
+Use the Scale VMware HCX Cloud Manager Run Command to increase the resource allocation of your VMware HCX Cloud Manager virtual machine to 8 vCPUs and 24-GB RAM from the default setting of 4 vCPUs and 12-GB RAM, ensuring scalability.
-**Scenario**: Mobility Optimize Networking (MON) requires HCX Scalability. For more details on [MON scaling](https://kb.vmware.com/s/article/88401)ΓÇ»
+**Scenario**: Mobility Optimized Networking (MON) requires VMware HCX scalability. For more information, see [MON scaling](https://kb.vmware.com/s/article/88401).
>[!NOTE]
-> HCX cloud manager will be rebooted during this operation, and this may affect any ongoing migration processes.
+> VMware HCX Cloud Manager will be rebooted during this operation, and this may affect any ongoing migration processes.
-1. Navigate to the run Command panel on in an AVS private cloud on the Azure portal.
+1. Navigate to the Run Command panel on in an Azure VMware Solution private cloud on the Azure portal.
1. Select the **Microsoft.AVS.Management** package dropdown menu and select the ``Set-HcxScaledCpuAndMemorySetting`` command. :::image type="content" source="media/hcx-commands/set-hcx-scale.png" alt-text="Diagram that shows run command parameters for Set-HcxScaledCpuAndMemorySetting command." border="false" lightbox="media/hcx-commands/set-hcx-scale.png":::
-1. Agree to restart HCX by toggling ``AgreeToRestartHCX`` to **True**.
+1. Agree to restart VMware HCX by toggling ``AgreeToRestartHCX`` to **True**.
You must acknowledge that the virtual machine will be restarted.
Use the Scale HCX manager run command to increase the resource allocation of you
This process may take between 10-15 minutes. >[!NOTE]
- > HCX cloud manager will be unavailable during the scaling.
+ > VMware HCX cloud manager will be unavailable during the scaling.
## Next step
-To learn more about run commands, see [Run commands](concepts-run-command.md)
+To learn more about Run Commands, see [Run Commands](concepts-run-command.md).
batch Quick Create Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/quick-create-cli.md
Title: Quickstart - Run your first Batch job with the Azure CLI
-description: This quickstart shows how to create a Batch account and run a Batch job with the Azure CLI.
+ Title: 'Quickstart: Use the Azure CLI to create a Batch account and run a job'
+description: Follow this quickstart to use the Azure CLI to create a Batch account, a pool of compute nodes, and a job that runs basic tasks on the pool.
Previously updated : 05/25/2021 Last updated : 04/12/2023
-# Quickstart: Run your first Batch job with the Azure CLI
+# Quickstart: Use the Azure CLI to create a Batch account and run a job
-Get started with Azure Batch by using the Azure CLI to create a Batch account, a pool of compute nodes (virtual machines), and a job that runs tasks on the pool. Each sample task runs a basic command on one of the pool nodes.
+This quickstart shows you how to get started with Azure Batch by using Azure CLI commands and scripts to create and manage Batch resources. You create a Batch account that has a pool of virtual machines, or compute nodes. You then create and run a job with tasks that run on the pool nodes.
-The Azure CLI is used to create and manage Azure resources from the command line or in scripts. After completing this quickstart, you will understand the key concepts of the Batch service and be ready to try Batch with more realistic workloads at larger scale.
+After you complete this quickstart, you understand the [key concepts of the Batch service](batch-service-workflow-features.md) and are ready to use Batch with more realistic, larger scale workloads.
+## Prerequisites
+- [!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)]
-- This quickstart requires version 2.0.20 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.
+- Azure Cloud Shell or Azure CLI.
-## Create a resource group
+ You can run the Azure CLI commands in this quickstart interactively in Azure Cloud Shell. To run the commands in the Cloud Shell, select **Open Cloudshell** at the upper-right corner of a code block. Select **Copy** to copy the code, and paste it into Cloud Shell to run it. You can also [run Cloud Shell from within the Azure portal](https://shell.azure.com). Cloud Shell always uses the latest version of the Azure CLI.
+
+ Alternatively, you can [install Azure CLI locally](/cli/azure/install-azure-cli) to run the commands. The steps in this article require Azure CLI version 2.0.20 or later. Run [az version](/cli/azure/reference-index?#az-version) to see your installed version and dependent libraries, and run [az upgrade](/cli/azure/reference-index?#az-upgrade) to upgrade. If you use a local installation, sign in to Azure by using the [az login](/cli/azure/reference-index#az-login) command.
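
For reference, a local session typically starts with the commands mentioned above; the following is only a sketch of that flow:

```azurecli
# Check the installed Azure CLI version and dependencies.
az version

# Upgrade the Azure CLI if a newer version is available.
az upgrade

# Sign in to Azure before running the quickstart commands.
az login
```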
-Create a resource group with the [az group create](/cli/azure/group#az-group-create) command. An Azure resource group is a logical container into which Azure resources are deployed and managed.
+>[!NOTE]
+>For some regions and subscription types, quota restrictions might cause Batch account or node creation to fail or not complete. In this situation, you can request a quota increase at no charge. For more information, see [Batch service quotas and limits](batch-quota-limit.md).
+
+## Create a resource group
-The following example creates a resource group named *QuickstartBatch-rg* in the *eastus2* location.
+Run the following [az group create](/cli/azure/group#az-group-create) command to create an Azure resource group named `qsBatch` in the `eastus2` Azure region. The resource group is a logical container that holds the Azure resources for this quickstart.
```azurecli-interactive az group create \
- --name QuickstartBatch-rg \
+ --name qsBatch \
--location eastus2 ``` ## Create a storage account
-You can link an Azure Storage account with your Batch account. Although not required for this quickstart, the storage account is useful to deploy applications and store input and output data for most real-world workloads. Create a storage account in your resource group with the [az storage account create](/cli/azure/storage/account#az-storage-account-create) command.
+Use the [az storage account create](/cli/azure/storage/account#az-storage-account-create) command to create an Azure Storage account to link to your Batch account. Although this quickstart doesn't use the storage account, most real-world Batch workloads use a linked storage account to deploy applications and store input and output data.
+
+Run the following command to create a Standard_LRS SKU storage account named `mybatchstorage` in your resource group:
```azurecli-interactive az storage account create \
- --resource-group QuickstartBatch-rg \
- --name mystorageaccount \
+ --resource-group qsBatch \
+ --name mybatchstorage \
--location eastus2 \ --sku Standard_LRS ``` ## Create a Batch account
-Create a Batch account with the [az batch account create](/cli/azure/batch/account#az-batch-account-create) command. You need an account to create compute resources (pools of compute nodes) and Batch jobs.
-
-The following example creates a Batch account named *mybatchaccount* in *QuickstartBatch-rg*, and links the storage account you created.
+Run the following [az batch account create](/cli/azure/batch/account#az-batch-account-create) command to create a Batch account named `mybatchaccount` in your resource group and link it with the `mybatchstorage` storage account.
```azurecli-interactive az batch account create \ --name mybatchaccount \
- --storage-account mystorageaccount \
- --resource-group QuickstartBatch-rg \
+ --storage-account mybatchstorage \
+ --resource-group qsBatch \
--location eastus2 ```
-To create and manage compute pools and jobs, you need to authenticate with Batch. Log in to the account with the [az batch account login](/cli/azure/batch/account#az-batch-account-login) command. After you log in, your `az batch` commands use this account context.
+Sign in to the new Batch account by running the [az batch account login](/cli/azure/batch/account#az-batch-account-login) command. Once you authenticate your account with Batch, subsequent `az batch` commands in this session use this account context.
```azurecli-interactive az batch account login \ --name mybatchaccount \
- --resource-group QuickstartBatch-rg \
+ --resource-group qsBatch \
--shared-key-auth ``` ## Create a pool of compute nodes
-Now that you have a Batch account, create a sample pool of Linux compute nodes using the [az batch pool create](/cli/azure/batch/pool#az-batch-pool-create) command. The following example creates a pool named *mypool* of two *Standard_A1_v2* nodes running Ubuntu 18.04 LTS. The suggested node size offers a good balance of performance versus cost for this quick example.
+Run the [az batch pool create](/cli/azure/batch/pool#az-batch-pool-create) command to create a pool of Linux compute nodes in your Batch account. The following example creates a pool named `myPool` that consists of two Standard_A1_v2 size VMs running Ubuntu 20.04 LTS OS. This node size offers a good balance of performance versus cost for this quickstart example.
```azurecli-interactive az batch pool create \
- --id mypool --vm-size Standard_A1_v2 \
+ --id myPool \
+ --image canonical:0001-com-ubuntu-server-focal:20_04-lts \
+ --node-agent-sku-id "batch.node.ubuntu 20.04" \
--target-dedicated-nodes 2 \
- --image canonical:ubuntuserver:18.04-LTS \
- --node-agent-sku-id "batch.node.ubuntu 18.04"
+ --vm-size Standard_A1_v2
```
-Batch creates the pool immediately, but it takes a few minutes to allocate and start the compute nodes. During this time, the pool is in the `resizing` state. To see the status of the pool, run the [az batch pool show](/cli/azure/batch/pool#az-batch-pool-show) command. This command shows all the properties of the pool, and you can query for specific properties. The following command gets the allocation state of the pool:
+Batch creates the pool immediately, but takes a few minutes to allocate and start the compute nodes. To see the pool status, use the [az batch pool show](/cli/azure/batch/pool#az-batch-pool-show) command. This command shows all the properties of the pool, and you can query for specific properties. The following command queries for the pool allocation state:
```azurecli-interactive
-az batch pool show --pool-id mypool \
+az batch pool show --pool-id myPool \
--query "allocationState" ```
-Continue the following steps to create a job and tasks while the pool state is changing. The pool is ready to run tasks when the allocation state is `steady` and all the nodes are running.
+While Batch allocates and starts the nodes, the pool is in the `resizing` state. You can create a job and tasks while the pool state is still `resizing`. The pool is ready to run tasks when the allocation state is `steady` and all the nodes are running.
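
If you want to confirm that the individual nodes have started before tasks run, a query like the following optional example lists each node's state:

```azurecli-interactive
az batch node list \
    --pool-id myPool \
    --query "[].{id:id, state:state}" \
    --output table
```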
## Create a job
-Now that you have a pool, create a job to run on it. A Batch job is a logical group for one or more tasks. A job includes settings common to the tasks, such as priority and the pool to run tasks on. Create a Batch job by using the [az batch job create](/cli/azure/batch/job#az-batch-job-create) command. The following example creates a job *myjob* on the pool *mypool*. Initially the job has no tasks.
+Use the [az batch job create](/cli/azure/batch/job#az-batch-job-create) command to create a Batch job to run on your pool. A Batch job is a logical group of one or more tasks. The job includes settings common to the tasks, such as the pool to run on. The following example creates a job called `myJob` on `myPool` that initially has no tasks.
```azurecli-interactive az batch job create \
- --id myjob \
- --pool-id mypool
+ --id myJob \
+ --pool-id myPool
```
-## Create tasks
+## Create job tasks
-Now use the [az batch task create](/cli/azure/batch/task#az-batch-task-create) command to create some tasks to run in the job. In this example, you create four identical tasks. Each task runs a `command-line` to display the Batch environment variables on a compute node, and then waits 90 seconds. When you use Batch, this command line is where you specify your app or script. Batch provides several ways to deploy apps and scripts to compute nodes.
+Batch provides several ways to deploy apps and scripts to compute nodes. Use the [az batch task create](/cli/azure/batch/task#az-batch-task-create) command to create tasks to run in the job. Each task has a command line that specifies an app or script.
-The following Bash script creates four parallel tasks (*mytask1* to *mytask4*).
+The following Bash script creates four identical, parallel tasks called `myTask1` through `myTask4`. The task command line displays the Batch environment variables on the compute node, and then waits 90 seconds.
```azurecli-interactive for i in {1..4} do az batch task create \
- --task-id mytask$i \
- --job-id myjob \
+ --task-id myTask$i \
+ --job-id myJob \
--command-line "/bin/bash -c 'printenv | grep AZ_BATCH; sleep 90s'" done ```
-The command output shows settings for each of the tasks. Batch distributes the tasks to the compute nodes.
+The command output shows the settings for each task. Batch distributes the tasks to the compute nodes.
## View task status
-After you create a task, Batch queues it to run on the pool. Once a node is available to run it, the task runs.
+After you create the task, Batch queues the task to run on the pool. Once a node is available, the task runs on the node.
-Use the [az batch task show](/cli/azure/batch/task#az-batch-task-show) command to view the status of the Batch tasks. The following example shows details about *mytask1* running on one of the pool nodes.
+Use the [az batch task show](/cli/azure/batch/task#az-batch-task-show) command to view the status of Batch tasks. The following example shows details about the status of `myTask1`:
```azurecli-interactive az batch task show \
- --job-id myjob \
- --task-id mytask1
+ --job-id myJob \
+ --task-id myTask1
```
-The command output includes many details, but take note of the `exitCode` of the task command line and the `nodeId`. An `exitCode` of 0 indicates that the task command line completed successfully. The `nodeId` indicates the ID of the pool node on which the task ran.
+The command output includes many details. For example, an `exitCode` of `0` indicates that the task command completed successfully. The `nodeId` shows the name of the pool node that ran the task.
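
To watch all four tasks instead of querying one task at a time, you could use a small polling loop such as this optional sketch, which waits until every task in the job reports the `completed` state:

```azurecli-interactive
# Poll until no task in myJob remains in a state other than 'completed'.
while [ "$(az batch task list --job-id myJob \
    --query "length([?state!='completed'])" --output tsv)" != "0" ]
do
    echo "Waiting for tasks to complete..."
    sleep 10
done
echo "All tasks in myJob have completed."
```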
## View task output
-To list the files created by a task on a compute node, use the [az batch task file list](/cli/azure/batch/task) command. The following command lists the files created by *mytask1*:
+Use the [az batch task file list](/cli/azure/batch/task#az-batch-task-file-list) command to list the files a task created on a node. The following command lists the files that `myTask1` created:
```azurecli-interactive az batch task file list \
- --job-id myjob \
- --task-id mytask1 \
+ --job-id myJob \
+ --task-id myTask1 \
--output table ```
-Output is similar to the following:
+Results are similar to the following output:
-```
-Name URL Is Directory Content Length
-- -- -
-stdout.txt https://mybatchaccount.eastus2.batch.azure.com/jobs/myjob/tasks/mytask1/files/stdout.txt False 695
-certs https://mybatchaccount.eastus2.batch.azure.com/jobs/myjob/tasks/mytask1/files/certs True
-wd https://mybatchaccount.eastus2.batch.azure.com/jobs/myjob/tasks/mytask1/files/wd True
-stderr.txt https://mybatchaccount.eastus2.batch.azure.com/jobs/myjob/tasks/mytask1/files/stderr.txt False 0
+```output
+Name URL Is Directory Content Length
+- - -- -
+stdout.txt https://mybatchaccount.eastus2.batch.azure.com/jobs/myJob/tasks/myTask1/files/stdout.txt False 695
+certs https://mybatchaccount.eastus2.batch.azure.com/jobs/myJob/tasks/myTask1/files/certs True
+wd https://mybatchaccount.eastus2.batch.azure.com/jobs/myJob/tasks/myTask1/files/wd True
+stderr.txt https://mybatchaccount.eastus2.batch.azure.com/jobs/myJob/tasks/myTask1/files/stderr.txt False 0
```
-To download one of the output files to a local directory, use the [az batch task file download](/cli/azure/batch/task) command. In this example, task output is in `stdout.txt`.
+The [az batch task file download](/cli/azure/batch/task#az-batch-task-file-download) command downloads output files to a local directory. Run the following example to download the *stdout.txt* file:
```azurecli-interactive az batch task file download \
- --job-id myjob \
- --task-id mytask1 \
+ --job-id myJob \
+ --task-id myTask1 \
--file-path stdout.txt \ --destination ./stdout.txt ```
-You can view the contents of `stdout.txt` in a text editor. The contents show the Azure Batch environment variables that are set on the node. When you create your own Batch jobs, you can reference these environment variables in task command lines, and in the apps and scripts run by the command lines. For example:
+You can view the contents of the standard output file in a text editor. The following example shows a typical *stdout.txt* file. The standard output from this task shows the Azure Batch environment variables that are set on the node. You can refer to these environment variables in your Batch job task command lines, and in the apps and scripts the command lines run.
-```
-AZ_BATCH_TASK_DIR=/mnt/batch/tasks/workitems/myjob/job-1/mytask1
+```text
+AZ_BATCH_TASK_DIR=/mnt/batch/tasks/workitems/myJob/job-1/myTask1
AZ_BATCH_NODE_STARTUP_DIR=/mnt/batch/tasks/startup
-AZ_BATCH_CERTIFICATES_DIR=/mnt/batch/tasks/workitems/myjob/job-1/mytask1/certs
+AZ_BATCH_CERTIFICATES_DIR=/mnt/batch/tasks/workitems/myJob/job-1/myTask1/certs
AZ_BATCH_ACCOUNT_URL=https://mybatchaccount.eastus2.batch.azure.com/
-AZ_BATCH_TASK_WORKING_DIR=/mnt/batch/tasks/workitems/myjob/job-1/mytask1/wd
+AZ_BATCH_TASK_WORKING_DIR=/mnt/batch/tasks/workitems/myJob/job-1/myTask1/wd
AZ_BATCH_NODE_SHARED_DIR=/mnt/batch/tasks/shared AZ_BATCH_TASK_USER=_azbatch AZ_BATCH_NODE_ROOT_DIR=/mnt/batch/tasks
-AZ_BATCH_JOB_ID=myjobl
+AZ_BATCH_JOB_ID=myJob
AZ_BATCH_NODE_IS_DEDICATED=true AZ_BATCH_NODE_ID=tvm-257509324_2-20180703t215033z
-AZ_BATCH_POOL_ID=mypool
-AZ_BATCH_TASK_ID=mytask1
+AZ_BATCH_POOL_ID=myPool
+AZ_BATCH_TASK_ID=myTask1
AZ_BATCH_ACCOUNT_NAME=mybatchaccount AZ_BATCH_TASK_USER_IDENTITY=PoolNonAdmin ``` ## Clean up resources
-If you want to continue with Batch tutorials and samples, use the Batch account and linked storage account created in this quickstart. There is no charge for the Batch account itself.
+If you want to continue with Batch tutorials and samples, you can use the Batch account and linked storage account that you created in this quickstart. There's no charge for the Batch account itself.
-You are charged for pools while the nodes are running, even if no jobs are scheduled. When you no longer need a pool, delete it with the [az batch pool delete](/cli/azure/batch/pool#az-batch-pool-delete) command. When you delete the pool, all task output on the nodes is deleted.
+Pools and nodes incur charges while the nodes are running, even if they aren't running jobs. When you no longer need a pool, use the [az batch pool delete](/cli/azure/batch/pool#az-batch-pool-delete) command to delete it. Deleting a pool deletes all task output on the nodes, and the nodes themselves.
```azurecli-interactive
-az batch pool delete --pool-id mypool
+az batch pool delete --pool-id myPool
```
-When no longer needed, you can use the [az group delete](/cli/azure/group#az-group-delete) command to remove the resource group, Batch account, pools, and all related resources. Delete the resources as follows:
+When you no longer need any of the resources you created for this quickstart, you can use the [az group delete](/cli/azure/group#az-group-delete) command to delete the resource group and all its resources. To delete the resource group and the storage account, Batch account, node pools, and all related resources, run the following command:
```azurecli-interactive
-az group delete --name QuickstartBatch-rg
+az group delete --name qsBatch
``` ## Next steps
-In this quickstart, you created a Batch account, a Batch pool, and a Batch job. The job ran sample tasks, and you viewed output created on one of the nodes. Now that you understand the key concepts of the Batch service, you are ready to try Batch with more realistic workloads at larger scale. To learn more about Azure Batch, continue to the Azure Batch tutorials.
+In this quickstart, you created a Batch account and pool, created and ran a Batch job and tasks, and viewed task output from the nodes. Now that you understand the key concepts of the Batch service, you're ready to use Batch with more realistic, larger scale workloads. To learn more about Azure Batch, continue to the Azure Batch tutorials.
> [!div class="nextstepaction"]
-> [Azure Batch tutorials](./tutorial-parallel-dotnet.md)
+> [Tutorial: Run a parallel workload with Azure Batch](./tutorial-parallel-python.md)
batch Quick Create Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/quick-create-portal.md
Title: Azure Quickstart - Run your first Batch job in the Azure portal
-description: This quickstart shows how to use the Azure portal to create a Batch account, a pool of compute nodes, and a job that runs basic tasks on the pool.
Previously updated : 06/22/2022
+ Title: 'Quickstart: Use the Azure portal to create a Batch account and run a job'
+description: Follow this quickstart to use the Azure portal to create a Batch account, a pool of compute nodes, and a job that runs basic tasks on the pool.
Last updated : 04/13/2022
-# Quickstart: Run your first Batch job in the Azure portal
+# Quickstart: Use the Azure portal to create a Batch account and run a job
-Get started with Azure Batch by using the Azure portal to create a Batch account, a pool of compute nodes (virtual machines), and a job that runs tasks on the pool.
+This quickstart shows you how to get started with Azure Batch by using the Azure portal. You create a Batch account that has a pool of virtual machines (VMs), or compute nodes. You then create and run a job with tasks that run on the pool nodes.
-After completing this quickstart, you'll understand the [key concepts of the Batch service](batch-service-workflow-features.md) and be ready to try Batch with more realistic workloads at larger scale.
+After you complete this quickstart, you understand the [key concepts of the Batch service](batch-service-workflow-features.md) and are ready to use Batch with more realistic, larger scale workloads.
## Prerequisites -- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+- [!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)]
-## Create a Batch account
+>[!NOTE]
+>For some regions and subscription types, quota restrictions might cause Batch account or node creation to fail or not complete. In this situation, you can request a quota increase at no charge. For more information, see [Batch service quotas and limits](batch-quota-limit.md).
-Follow these steps to create a sample Batch account for test purposes. You need a Batch account to create pools and jobs. You can also link an Azure storage account with the Batch account. Although not required for this quickstart, the storage account is useful to deploy applications and store input and output data for most real-world workloads.
+<a name="create-a-batch-account"></a>
+## Create a Batch account and Azure Storage account
-1. In the [Azure portal](https://portal.azure.com), select **Create a resource**.
+You need a Batch account to create pools and jobs. The following steps create an example Batch account. You also create an Azure Storage account to link to your Batch account. Although this quickstart doesn't use the storage account, most real-world Batch workloads use a linked storage account to deploy applications and store input and output data.
-1. Type "batch service" in the search box, then select **Batch Service**.
+1. Sign in to the [Azure portal](https://portal.azure.com), and search for and select **batch accounts**.
- :::image type="content" source="media/quick-create-portal/marketplace-batch.png" alt-text="Screenshot of Batch Service in the Azure Marketplace.":::
+ :::image type="content" source="media/quick-create-portal/marketplace-batch.png" alt-text="Screenshot of selecting Batch accounts in the Azure portal.":::
-1. Select **Create**.
+1. On the **Batch accounts** page, select **Create**.
-1. In the **Resource group** field, select **Create new** and enter a name for your resource group.
+1. On the **New Batch account** page, enter or select the following values:
-1. Enter a value for **Account name**. This name must be unique within the Azure **Location** selected. It can contain only lowercase letters and numbers, and it must be between 3-24 characters.
+ - Under **Resource group**, select **Create new**, enter the name *qsBatch*, and then select **OK**. The resource group is a logical container that holds the Azure resources for this quickstart.
+ - For **Account name**, enter the name *mybatchaccount*. The Batch account name must be unique within the Azure region you select, can contain only lowercase letters and numbers, and must be between 3-24 characters.
+ - For **Location**, select **East US**.
+ - Under **Storage account**, select the link to **Select a storage account**.
-1. Optionally, under **Storage account**, you can specify a storage account. Click **Select a storage account**, then select an existing storage account or create a new one.
+ :::image type="content" source="media/quick-create-portal/new-batch-account.png" alt-text="Screenshot of the New Batch account page in the Azure portal.":::
-1. Leave the other settings as is. Select **Review + create**, then select **Create** to create the Batch account.
+1. On the **Create storage account** page, under **Name**, enter **mybatchstorage**. Leave the other settings at their defaults, and select **OK**.
-When the **Deployment succeeded** message appears, go to the Batch account that you created.
+1. Select **Review + create** at the bottom of the **New Batch account** page, and when validation passes, select **Create**.
+
+1. When the **Deployment succeeded** message appears, select **Go to resource** to go to the Batch account that you created.
## Create a pool of compute nodes
-Now that you have a Batch account, create a sample pool of Windows compute nodes for test purposes. The pool in this quickstart consists of two nodes running a Windows Server 2019 image from the Azure Marketplace.
+Next, create a pool of Windows compute nodes in your Batch account. The following steps create a pool that consists of two Standard_A1_v2 size VMs running Windows Server 2019. This node size offers a good balance of performance versus cost for this quickstart.
+
+1. On your Batch account page, select **Pools** from the left navigation.
+
+1. On the **Pools** page, select **Add**.
-1. In the Batch account, select **Pools** > **Add**.
+1. On the **Add pool** page, for **Name**, enter *myPool*.
-1. Enter a **Pool ID** called *mypool*.
+1. Under **Operating System**, select the following settings:
+ - **Publisher**: Select **microsoftwindowsserver**.
+ - **Sku**: Select **2019-datacenter-core-smalldisk**.
-1. In **Operating System**, use the following settings (you can explore other options).
-
- |Setting |Value |
- |||
- |**Image Type**|Marketplace|
- |**Publisher** |microsoftwindowsserver|
- |**Offer** |windowsserver|
- |**Sku** |2019-datacenter-core-smalldisk|
+1. Scroll down to **Node size**, and for **VM size**, select **Standard_A1_v2**.
-1. Scroll down to enter **Node Size** and **Scale** settings. The suggested node size offers a good balance of performance versus cost for this quick example.
-
- |Setting |Value |
- |||
- |**Node pricing tier** |Standard_A1_v2|
- |**Target dedicated nodes** |2|
+1. Under **Scale**, for **Target dedicated nodes**, enter *2*.
-1. Keep the defaults for remaining settings, and select **OK** to create the pool.
+1. Accept the defaults for the remaining settings, and select **OK** at the bottom of the page.
-Batch creates the pool immediately, but it takes a few minutes to allocate and start the compute nodes. During this time, the pool's **Allocation state** is **Resizing**. You can go ahead and create a job and tasks while the pool is resizing.
+Batch creates the pool immediately, but takes a few minutes to allocate and start the compute nodes. On the **Pools** page, you can select **myPool** to go to the **myPool** page and see the pool status of **Resizing** under **Essentials** > **Allocation state**. You can proceed to create a job and tasks while the pool state is still **Resizing** or **Starting**.
-After a few minutes, the allocation state changes to **Steady**, and the nodes start. To check the state of the nodes, select the pool and then select **Nodes**. When a node's state is **Idle**, it is ready to run tasks.
+After a few minutes, the **Allocation state** changes to **Steady**, and the nodes start. To check the state of the nodes, select **Nodes** in the **myPool** page left navigation. When a node's state is **Idle**, it's ready to run tasks.
## Create a job
-Now that you have a pool, create a job to run on it. A Batch job is a logical group of one or more tasks. A job includes settings common to the tasks, such as priority and the pool to run tasks on. The job won't have tasks until you create them.
+Now create a job to run on the pool. A Batch job is a logical group of one or more tasks. The job includes settings common to the tasks, such as priority and the pool to run tasks on. The job doesn't have tasks until you create them.
-1. In the Batch account view, select **Jobs** > **Add**.
+1. On the **mybatchaccount** page, select **Jobs** from the left navigation.
-1. Enter a **Job ID** called *myjob*.
+1. On the **Jobs** page, select **Add**.
-1. In **Pool**, select *mypool*.
+1. On the **Add job** page, for **Job ID**, enter *myJob*.
-1. Keep the defaults for the remaining settings, and select **OK**.
+1. Select **Select pool**, and on the **Select pool** page, select **myPool**, and then select **Select**.
+
+1. On the **Add job** page, select **OK**. Batch creates the job and lists it on the **Jobs** page.
## Create tasks
-Now, select the job to open the **Tasks** page. This is where you'll create sample tasks to run in the job. Typically, you create multiple tasks that Batch queues and distributes to run on the compute nodes. In this example, you create two identical tasks. Each task runs a command line to display the Batch environment variables on a compute node, and then waits 90 seconds.
+Jobs can contain multiple tasks that Batch queues and distributes to run on the compute nodes. Batch provides several ways to deploy apps and scripts to compute nodes. When you create a task, you specify your app or script in a command line.
+
+The following procedure creates and runs two identical tasks in your job. Each task runs a command line that displays the Batch environment variables on the compute node, and then waits 90 seconds.
-When you use Batch, the command line is where you specify your app or script. Batch provides several ways to deploy apps and scripts to compute nodes.
+1. On the **Jobs** page, select **myJob**.
-To create the first task:
+1. On the **Tasks** page, select **Add**.
-1. Select **Add**.
+1. On the **Add task** page, for **Task ID**, enter *myTask1*.
-1. Enter a **Task ID** called *mytask*.
+1. In **Command line**, enter `cmd /c "set AZ_BATCH & timeout /t 90 > NUL"`.
-1. In **Command line**, enter `cmd /c "set AZ_BATCH & timeout /t 90 > NUL"`. Keep the defaults for the remaining settings, and select **Submit**.
+1. Accept the defaults for the remaining settings, and select **Submit**.
-Repeat the steps above to create a second task. Enter a different **Task ID** such as *mytask2*, but use the same command line.
+1. Repeat the preceding steps to create a second task, but enter *myTask2* for **Task ID**.
-After you create a task, Batch queues it to run on the pool. When a node is available to run it, the task runs. In our example, if the first task is still running on one node, Batch will start the second task on the other node in the pool.
+After you create each task, Batch queues it to run on the pool. Once a node is available, the task runs on the node. In the quickstart example, if the first task is still running on one node, Batch starts the second task on the other node in the pool.
## View task output
-The example tasks you created will complete in a couple of minutes. To view the output of a completed task, select the task, then select the file `stdout.txt` to view the standard output of the task. The contents are similar to the following example:
+The tasks should complete in a couple of minutes. To update task status, select **Refresh** at the top of the **Tasks** page.
+
+To view the output of a completed task, you can select the task from the **Tasks** page. On the **myTask1** page, select the *stdout.txt* file to view the standard output of the task.
-The contents show the Azure Batch environment variables that are set on the node. When you create your own Batch jobs and tasks, you can reference these environment variables in task command lines, and in the apps and scripts run by the command lines.
+The contents of the *stdout.txt* file are similar to the following example:
++
+The standard output for this task shows the Azure Batch environment variables that are set on the node. As long as this node exists, you can refer to these environment variables in Batch job task command lines, and in the apps and scripts the command lines run.
## Clean up resources
-If you want to continue with Batch tutorials and samples, you can keep using the Batch account and linked storage account created in this quickstart. There is no charge for the Batch account itself.
+If you want to continue with Batch tutorials and samples, you can use the Batch account and linked storage account that you created in this quickstart. There's no charge for the Batch account itself.
+
+Pools and nodes incur charges while the nodes are running, even if they aren't running jobs. When you no longer need a pool, delete it.
-You are charged for the pool while the nodes are running, even if no jobs are scheduled. When you no longer need the pool, delete it. In the account view, select **Pools** and the name of the pool. Then select **Delete**. After you delete the pool, all task output on the nodes is deleted.
+To delete a pool:
-When no longer needed, delete the resource group, Batch account, and all related resources. To do so, select the resource group for the Batch account and select **Delete resource group**.
+1. On your Batch account page, select **Pools** from the left navigation.
+1. On the **Pools** page, select the pool to delete, and then select **Delete**.
+1. On the **Delete pool** screen, enter the name of the pool, and then select **Delete**.
+
+Deleting a pool deletes all task output on the nodes, and the nodes themselves.
+
+When you no longer need any of the resources you created for this quickstart, you can delete the resource group and all its resources, including the storage account, Batch account, and node pools. To delete the resource group, select **Delete resource group** at the top of the **qsBatch** resource group page. On the **Delete a resource group** screen, enter the resource group name *qsBatch*, and then select **Delete**.
## Next steps
-In this quickstart, you created a Batch account, a Batch pool, and a Batch job. The job ran sample tasks, and you viewed output created on one of the nodes. Now that you understand the key concepts of the Batch service, you are ready to try Batch with more realistic workloads at larger scale. To learn more about Azure Batch, continue to the Azure Batch tutorials.
+In this quickstart, you created a Batch account and pool, and created and ran a Batch job and tasks. You monitored node and task status, and viewed task output from the nodes.
+
+Now that you understand the key concepts of the Batch service, you're ready to use Batch with more realistic, larger scale workloads. To learn more about Azure Batch, continue to the Azure Batch tutorials.
> [!div class="nextstepaction"] > [Azure Batch tutorials](./tutorial-parallel-dotnet.md)
batch Quick Run Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/quick-run-python.md
Title: Quickstart - Use Python API to run an Azure Batch job
-description: In this quickstart, you run an Azure Batch sample job and tasks using the Batch Python client library. Learn the key concepts of the Batch service.
Previously updated : 09/10/2021
+ Title: 'Quickstart: Use Python to create a pool and run a job'
+description: Follow this quickstart to run an app that uses the Azure Batch client library for Python to create and run Batch pools, nodes, jobs, and tasks.
Last updated : 04/13/2023 ms.devlang: python
-# Quickstart: Use Python API to run an Azure Batch job
+# Quickstart: Use Python to create a Batch pool and run a job
-Get started with Azure Batch by using the Python API to run an Azure Batch job from an app. The app uploads input data files to Azure Storage and creates a pool of Batch compute nodes (virtual machines). It then creates a job that runs tasks to process each input file in the pool using a basic command.
+This quickstart shows you how to get started with Azure Batch by running an app that uses the [Azure Batch libraries for Python](/python/api/overview/azure/batch). The Python app:
-After completing this quickstart, you'll understand key concepts of the Batch service and be ready to try Batch with more realistic workloads at larger scale.
+> [!div class="checklist"]
+> - Uploads several input data files to an Azure Storage blob container to use for Batch task processing.
+> - Creates a pool of two virtual machines (VMs), or compute nodes, running Ubuntu 20.04 LTS OS.
+> - Creates a job and three tasks to run on the nodes. Each task processes one of the input files by using a Bash shell command line.
+> - Displays the output files that the tasks return.
-![Overview of the Azure Batch workflow](./media/quick-run-python/overview-of-the-azure-batch-workflow.png)
+After you complete this quickstart, you understand the [key concepts of the Batch service](batch-service-workflow-features.md) and are ready to use Batch with more realistic, larger scale workloads.
## Prerequisites -- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+- An Azure account with an active subscription. If you don't have one, [create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
-- A Batch account and a linked Azure Storage account. To create these accounts, see the Batch quickstarts using the [Azure portal](quick-create-portal.md) or [Azure CLI](quick-create-cli.md).
+- A Batch account with a linked Azure Storage account. You can create the accounts by using any of the following methods: [Azure CLI](quick-create-cli.md) | [Azure portal](quick-create-portal.md) | [Bicep](quick-create-bicep.md) | [ARM template](quick-create-template.md) | [Terraform](quick-create-terraform.md).
-- [Python](https://python.org/downloads) version 3.6 or later, including the [pip](https://pip.pypa.io/en/stable/installing/) package manager.
+- [Python](https://python.org/downloads) version 3.6 or later, which includes the [pip](https://pip.pypa.io/en/stable/installing) package manager.
-## Sign in to Azure
+## Run the app
-Sign in to the Azure portal at [https://portal.azure.com](https://portal.azure.com).
+To complete this quickstart, you download or clone the Python app, provide your account values, run the app, and verify the output.
+### Download or clone the app
-## Download the sample
+1. Download or clone the [Azure Batch Python Quickstart](https://github.com/Azure-Samples/batch-python-quickstart) app from GitHub. Use the following command to clone the app repo with a Git client:
-[Download or clone the sample app](https://github.com/Azure-Samples/batch-python-quickstart) from GitHub. To clone the sample app repo with a Git client, use the following command:
+ ```bash
+ git clone https://github.com/Azure-Samples/batch-python-quickstart.git
+ ```
-```bash
-git clone https://github.com/Azure-Samples/batch-python-quickstart.git
-```
+1. Switch to the *batch-python-quickstart/src* folder, and install the required packages by using `pip`.
-Go to the directory that contains the Python script `python_quickstart_client.py`.
+ ```bash
+ pip install -r requirements.txt
+ ```
-In your Python development environment, install the required packages using `pip`.
+### Provide your account information
-```bash
-pip install -r requirements.txt
-```
+The Python app needs to use your Batch and Storage account names, account key values, and Batch account endpoint. You can get this information from the Azure portal, Azure APIs, or command-line tools.
+
+To get your account information from the [Azure portal](https://portal.azure.com):
+
+ 1. In the Azure portal search bar, search for and select your Batch account name.
+ 1. On your Batch account page, select **Keys** from the left navigation.
+ 1. On the **Keys** page, copy the following values:
+
+ - **Batch account**
+ - **Account endpoint**
+ - **Primary access key**
+ - **Storage account name**
+ - **Key1**
-Open the file `config.py`. Update the Batch and storage account credential strings with the values you obtained for your accounts. For example:
+In your downloaded Python app, edit the following strings in the *config.py* file to supply the values you copied.
-```Python
-BATCH_ACCOUNT_NAME = 'mybatchaccount'
-BATCH_ACCOUNT_KEY = 'xxxxxxxxxxxxxxxxE+yXrRvJAqT9BlXwwo1CwF+SwAYOxxxxxxxxxxxxxxxx43pXi/gdiATkvbpLRl3x14pcEQ=='
-BATCH_ACCOUNT_URL = 'https://mybatchaccount.mybatchregion.batch.azure.com'
-STORAGE_ACCOUNT_NAME = 'mystorageaccount'
-STORAGE_ACCOUNT_KEY = 'xxxxxxxxxxxxxxxxy4/xxxxxxxxxxxxxxxxfwpbIC5aAWA8wDu+AFXZB827Mt9lybZB1nUcQbQiUrkPtilK5BQ=='
+```python
+BATCH_ACCOUNT_NAME = '<batch account>'
+BATCH_ACCOUNT_KEY = '<primary access key>'
+BATCH_ACCOUNT_URL = '<account endpoint>'
+STORAGE_ACCOUNT_NAME = '<storage account name>'
+STORAGE_ACCOUNT_KEY = '<key1>'
```
-## Run the app
+>[!IMPORTANT]
+>Exposing account keys in the app source isn't recommended for production use. Restrict access to credentials, and refer to them in your code by using variables or a configuration file. It's best to store Batch and Storage account keys in Azure Key Vault.
+
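One hedged way to follow that guidance is to read the keys from Key Vault secrets at startup instead of hard-coding them in *config.py*. The sketch below assumes the `azure-identity` and `azure-keyvault-secrets` packages, an existing vault, and placeholder secret names rather than values created by this quickstart.

```python
from azure.identity import DefaultAzureCredential
from azure.keyvault.secrets import SecretClient

# Placeholder vault URL; the signed-in identity needs permission to read secrets.
VAULT_URL = "https://<your-key-vault-name>.vault.azure.net"

secret_client = SecretClient(vault_url=VAULT_URL, credential=DefaultAzureCredential())

# Hypothetical secret names; use whatever names you created in your vault.
BATCH_ACCOUNT_KEY = secret_client.get_secret("batch-account-key").value
STORAGE_ACCOUNT_KEY = secret_client.get_secret("storage-account-key").value
```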
+### Run the app and view output
-To see the Batch workflow in action, run the script:
+Run the app to see the Batch workflow in action.
```bash python python_quickstart_client.py ```
-After running the script, review the code to learn what each part of the application does.
+Typical run time is approximately three minutes. Initial pool node setup takes the most time.
-When you run the sample application, the console output is similar to the following. During execution, you experience a pause at `Monitoring all tasks for 'Completed' state, timeout in 00:30:00...` while the pool's compute nodes are started. Tasks are queued to run as soon as the first compute node is running. Go to your Batch account in the [Azure portal](https://portal.azure.com) to monitor the pool, compute nodes, job, and tasks in your Batch account.
+The app returns output similar to the following example:
```output
-Sample start: 11/26/2018 4:02:54 PM
+Sample start: 11/26/2018 4:02:54 PM
Uploading file taskdata0.txt to container [input]... Uploading file taskdata1.txt to container [input]...
Adding 3 tasks to job [PythonQuickstartJob]...
Monitoring all tasks for 'Completed' state, timeout in 00:30:00... ```
-After tasks complete, you see output similar to the following for each task:
+There's a pause at `Monitoring all tasks for 'Completed' state, timeout in 00:30:00...` while the pool's compute nodes start. As tasks are created, Batch queues them to run on the pool. As soon as the first compute node is available, the first task runs on the node. You can monitor node, task, and job status from your Batch account page in the Azure portal.
+
+After each task completes, you see output similar to the following example:
```output Printing task output... Task: Task0 Node: tvm-2850684224_3-20171205t000401z Standard output:
-Batch processing began with mainframe computers and punch cards. Today it still plays a central role in business, engineering, science, and other pursuits that require running lots of automated tasks....
-...
+Batch processing began with mainframe computers and punch cards. Today it still plays a central role...
```
-Typical execution time is approximately 3 minutes when you run the application in its default configuration. Initial pool setup takes the most time.
- ## Review the code
-The Python app in this quickstart does the following:
--- Uploads three small text files to a blob container in your Azure storage account. These files are inputs for processing by Batch tasks.-- Creates a pool of two compute nodes running Ubuntu 20.04 LTS.-- Creates a job and three tasks to run on the nodes. Each task processes one of the input files using a Bash shell command line.-- Displays files returned by the tasks.
+Review the code to understand the steps in the [Azure Batch Python Quickstart](https://github.com/Azure-Samples/batch-python-quickstart).
-See the file `python_quickstart_client.py` and the following sections for details.
+### Create service clients and upload resource files
-### Preliminaries
+1. The app creates a [BlobServiceClient](/python/api/azure-storage-blob/azure.storage.blob.blobserviceclient) object to interact with the Storage account.
-To interact with a storage account, the app creates a [BlobServiceClient](/python/api/azure-storage-blob/azure.storage.blob.blobserviceclient) object.
+ ```python
+ blob_service_client = BlobServiceClient(
+ account_url=f"https://{config.STORAGE_ACCOUNT_NAME}.{config.STORAGE_ACCOUNT_DOMAIN}/",
+ credential=config.STORAGE_ACCOUNT_KEY
+ )
+ ```
-```python
-blob_service_client = BlobServiceClient(
- account_url=f"https://{config.STORAGE_ACCOUNT_NAME}.{config.STORAGE_ACCOUNT_DOMAIN}/",
- credential=config.STORAGE_ACCOUNT_KEY
- )
-```
-
-The app uses the `blob_service_client` reference to create a container in the storage account and to upload data files to the container. The files in storage are defined as Batch [ResourceFile](/python/api/azure-batch/azure.batch.models.resourcefile) objects that Batch can later download to compute nodes.
-
-```python
-input_file_paths = [os.path.join(sys.path[0], 'taskdata0.txt'),
- os.path.join(sys.path[0], 'taskdata1.txt'),
- os.path.join(sys.path[0], 'taskdata2.txt')]
+1. The app uses the `blob_service_client` reference to create a container in the Storage account and upload data files to the container. The files in storage are defined as Batch [ResourceFile](/python/api/azure-batch/azure.batch.models.resourcefile) objects that Batch can later download to compute nodes. A sketch of the `upload_file_to_container` helper that builds these objects appears after this list.
-input_files = [
- upload_file_to_container(blob_service_client, input_container_name, file_path)
- for file_path in input_file_paths]
-```
-
-The app creates a [BatchServiceClient](/python/api/azure.batch.batchserviceclient) object to create and manage pools, jobs, and tasks in the Batch service. The Batch client in the sample uses shared key authentication. Batch also supports Azure Active Directory authentication.
+ ```python
+ input_file_paths = [os.path.join(sys.path[0], 'taskdata0.txt'),
+ os.path.join(sys.path[0], 'taskdata1.txt'),
+ os.path.join(sys.path[0], 'taskdata2.txt')]
+
+ input_files = [
+ upload_file_to_container(blob_service_client, input_container_name, file_path)
+ for file_path in input_file_paths]
+ ```
-```python
-credentials = SharedKeyCredentials(config.BATCH_ACCOUNT_NAME,
- config.BATCH_ACCOUNT_KEY)
+1. The app creates a [BatchServiceClient](/python/api/azure.batch.batchserviceclient) object to create and manage pools, jobs, and tasks in the Batch account. The Batch client uses shared key authentication. Batch also supports Azure Active Directory (Azure AD) authentication.
- batch_client = BatchServiceClient(
- credentials,
- batch_url=config.BATCH_ACCOUNT_URL)
-```
+ ```python
+ credentials = SharedKeyCredentials(config.BATCH_ACCOUNT_NAME,
+ config.BATCH_ACCOUNT_KEY)
+
+ batch_client = BatchServiceClient(
+ credentials,
+ batch_url=config.BATCH_ACCOUNT_URL)
+ ```
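The `upload_file_to_container` helper that builds the `ResourceFile` list isn't shown above. The following is a minimal sketch of such a helper, assuming the `azure-storage-blob` v12 package and the values in *config.py*; the sample's actual implementation may differ in details such as the SAS lifetime.

```python
import datetime
import os

from azure.batch import models as batchmodels
from azure.storage.blob import BlobSasPermissions, BlobServiceClient, generate_blob_sas

import config  # the quickstart's config.py


def upload_file_to_container(blob_service_client: BlobServiceClient,
                             container_name: str, file_path: str) -> batchmodels.ResourceFile:
    """Upload a local file to a blob container and return a ResourceFile
    that Batch compute nodes can later download through a read-only SAS URL."""
    blob_name = os.path.basename(file_path)
    blob_client = blob_service_client.get_blob_client(container_name, blob_name)

    with open(file_path, "rb") as data:
        blob_client.upload_blob(data, overwrite=True)

    sas_token = generate_blob_sas(
        account_name=config.STORAGE_ACCOUNT_NAME,
        container_name=container_name,
        blob_name=blob_name,
        account_key=config.STORAGE_ACCOUNT_KEY,
        permission=BlobSasPermissions(read=True),
        expiry=datetime.datetime.utcnow() + datetime.timedelta(hours=2))  # assumed lifetime

    return batchmodels.ResourceFile(http_url=f"{blob_client.url}?{sas_token}",
                                    file_path=blob_name)
```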
### Create a pool of compute nodes
-To create a Batch pool, the app uses the [PoolAddParameter](/python/api/azure-batch/azure.batch.models.pooladdparameter) class to set the number of nodes, VM size, and a pool configuration. Here, a [VirtualMachineConfiguration](/python/api/azure-batch/azure.batch.models.virtualmachineconfiguration) object specifies an [ImageReference](/python/api/azure-batch/azure.batch.models.imagereference) to an Ubuntu Server 20.04 LTS image published in the Azure Marketplace. Batch supports a wide range of Linux and Windows Server images in the Azure Marketplace, as well as custom VM images.
+To create a Batch pool, the app uses the [PoolAddParameter](/python/api/azure-batch/azure.batch.models.pooladdparameter) class to set the number of nodes, VM size, and pool configuration. The following [VirtualMachineConfiguration](/python/api/azure-batch/azure.batch.models.virtualmachineconfiguration) object specifies an [ImageReference](/python/api/azure-batch/azure.batch.models.imagereference) to an Ubuntu Server 20.04 LTS Azure Marketplace image. Batch supports a wide range of Linux and Windows Server Marketplace images, and also supports custom VM images.
-The number of nodes (`POOL_NODE_COUNT`) and VM size (`POOL_VM_SIZE`) are defined constants. The sample by default creates a pool of 2 size *Standard_DS1_v2* nodes. The size suggested offers a good balance of performance versus cost for this quick example.
+`POOL_NODE_COUNT` and `POOL_VM_SIZE` are defined constants. The app creates a pool of two *Standard_DS1_v2* nodes. This VM size offers a good balance of performance versus cost for this quickstart.
-The [pool.add](/python/api/azure-batch/azure.batch.operations.pooloperations) method submits the pool to the Batch service.
+The [pool.add](/python/api/azure-batch/azure.batch.operations.pooloperations#azure-batch-operations-pooloperations-add) method submits the pool to the Batch service.
```python new_pool = batchmodels.PoolAddParameter(
new_pool = batchmodels.PoolAddParameter(
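The snippet above shows only the start of the pool definition. The following sketch fills it out based on the description; the pool ID, Marketplace image values, and node agent SKU are assumptions consistent with an Ubuntu Server 20.04 LTS pool of two *Standard_DS1_v2* dedicated nodes, not necessarily the sample's exact values.

```python
from azure.batch import models as batchmodels

POOL_ID = 'PythonQuickstartPool'  # assumed pool ID
POOL_NODE_COUNT = 2
POOL_VM_SIZE = 'STANDARD_DS1_V2'

new_pool = batchmodels.PoolAddParameter(
    id=POOL_ID,
    vm_size=POOL_VM_SIZE,
    target_dedicated_nodes=POOL_NODE_COUNT,
    virtual_machine_configuration=batchmodels.VirtualMachineConfiguration(
        image_reference=batchmodels.ImageReference(
            publisher='canonical',
            offer='0001-com-ubuntu-server-focal',
            sku='20_04-lts',
            version='latest'),
        node_agent_sku_id='batch.node.ubuntu 20.04'))

# batch_service_client is the BatchServiceClient created earlier.
batch_service_client.pool.add(new_pool)
```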
### Create a Batch job
-A Batch job is a logical grouping of one or more tasks. A job includes settings common to the tasks, such as priority and the pool to run tasks on. The app uses the [JobAddParameter](/python/api/azure-batch/azure.batch.models.jobaddparameter) class to create a job on your pool. The [job.add](/python/api/azure-batch/azure.batch.operations.joboperations) method adds a job to the specified Batch account. Initially the job has no tasks.
+A Batch job is a logical grouping of one or more tasks. The job includes settings common to the tasks, such as priority and the pool to run tasks on.
+
+The app uses the [JobAddParameter](/python/api/azure-batch/azure.batch.models.jobaddparameter) class to create a job on the pool. The [job.add](/python/api/azure-batch/azure.batch.operations.joboperations) method adds the job to the specified Batch account. Initially the job has no tasks.
```python job = batchmodels.JobAddParameter(
batch_service_client.job.add(job)
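For completeness, here's a sketch of the full job definition. The job ID matches the name shown in the sample output, and the pool ID is the one assumed in the pool sketch above.

```python
from azure.batch import models as batchmodels

JOB_ID = 'PythonQuickstartJob'

job = batchmodels.JobAddParameter(
    id=JOB_ID,
    pool_info=batchmodels.PoolInformation(pool_id=POOL_ID))

batch_service_client.job.add(job)
```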
### Create tasks
-The app creates a list of task objects using the [TaskAddParameter](/python/api/azure-batch/azure.batch.models.taskaddparameter) class. Each task processes an input `resource_files` object using a `command_line` parameter. In the sample, the command line runs the Bash shell `cat` command to display the text file. This command is a simple example for demonstration purposes. When you use Batch, the command line is where you specify your app or script. Batch provides a number of ways to deploy apps and scripts to compute nodes.
+Batch provides several ways to deploy apps and scripts to compute nodes. This app creates a list of task objects by using the [TaskAddParameter](/python/api/azure-batch/azure.batch.models.taskaddparameter) class. Each task processes an input file by using a `command_line` parameter to specify an app or script.
-Then, the app adds tasks to the job with the [task.add_collection](/python/api/azure-batch/azure.batch.operations.taskoperations) method, which queues them to run on the compute nodes.
+The following script processes the input `resource_files` objects by running the Bash shell `cat` command to display the text files. The app then uses the [task.add_collection](/python/api/azure-batch/azure.batch.operations.taskoperations#azure-batch-operations-taskoperations-add-collection) method to add each task to the job, which queues the tasks to run on the compute nodes.
```python tasks = []
batch_service_client.task.add_collection(job_id, tasks)
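A fuller sketch of the task loop follows, assuming the `input_files` list built earlier and the job ID used above; the command line mirrors the description by running `cat` against each downloaded input file.

```python
from azure.batch import models as batchmodels

tasks = []
for idx, input_file in enumerate(input_files):
    # Batch downloads each ResourceFile to the node before the command runs.
    command = f"/bin/bash -c \"cat {input_file.file_path}\""
    tasks.append(batchmodels.TaskAddParameter(
        id=f'Task{idx}',
        command_line=command,
        resource_files=[input_file]))

batch_service_client.task.add_collection(JOB_ID, tasks)
```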
### View task output
-The app monitors task state to make sure the tasks complete. Then, the app displays the `stdout.txt` file generated by each completed task. When the task runs successfully, the output of the task command is written to `stdout.txt`:
+The app monitors task state to make sure the tasks complete. When each task runs successfully, the task command output writes to the *stdout.txt* file. The app then displays the *stdout.txt* file for each completed task.
```python tasks = batch_service_client.task.list(job_id)
for task in tasks:
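A simplified sketch of the monitoring and output loop is shown below. It polls without the sample's timeout handling and assumes the job ID used above; the real sample wraps this logic in helper functions.

```python
import time

from azure.batch import models as batchmodels

# Poll until every task in the job reaches the completed state.
while True:
    tasks = list(batch_service_client.task.list(JOB_ID))
    if all(task.state == batchmodels.TaskState.completed for task in tasks):
        break
    time.sleep(5)

# Print each task's stdout.txt from the node it ran on.
for task in tasks:
    node_id = batch_service_client.task.get(JOB_ID, task.id).node_info.node_id
    print(f"Task: {task.id}  Node: {node_id}")

    stream = batch_service_client.file.get_from_task(JOB_ID, task.id, 'stdout.txt')
    print(''.join(chunk.decode('utf-8') for chunk in stream))
```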
## Clean up resources
-The app automatically deletes the storage container it creates, and gives you the option to delete the Batch pool and job. You are charged for the pool while the nodes are running, even if no jobs are scheduled. When you no longer need the pool, delete it. When you delete the pool, all task output on the nodes is deleted.
+The app automatically deletes the storage container it creates, and gives you the option to delete the Batch pool and job. Pools and nodes incur charges while the nodes are running, even if they aren't running jobs. If you no longer need the pool, delete it.
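If you prefer to script the cleanup rather than answer the sample's prompts, the same client can delete the job and pool directly. A minimal sketch, assuming the IDs used in the earlier sketches:

```python
# Deleting the pool also deletes the nodes and any task output stored on them.
batch_service_client.job.delete(JOB_ID)
batch_service_client.pool.delete(POOL_ID)
```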
-When no longer needed, delete the resource group, Batch account, and storage account. To do so in the Azure portal, select the resource group for the Batch account and select **Delete resource group**.
+When you no longer need your Batch resources, you can delete the resource group that contains them. In the Azure portal, select **Delete resource group** at the top of the resource group page. On the **Delete a resource group** screen, enter the resource group name, and then select **Delete**.
## Next steps
-In this quickstart, you ran a small app built using the Batch Python API to create a Batch pool and a Batch job. The job ran sample tasks, and downloaded output created on the nodes. Now that you understand the key concepts of the Batch service, you are ready to try Batch with more realistic workloads at larger scale. To learn more about Azure Batch, and walk through a parallel workload with a real-world application, continue to the Batch Python tutorial.
+In this quickstart, you ran an app that uses the Batch Python API to create a Batch pool, nodes, job, and tasks. The job uploaded resource files to a storage container, ran tasks on the nodes, and displayed output from the nodes.
+
+Now that you understand the key concepts of the Batch service, you're ready to use Batch with more realistic, larger scale workloads. To learn more about Azure Batch and walk through a parallel workload with a real-world application, continue to the Batch Python tutorial.
> [!div class="nextstepaction"] > [Process a parallel workload with Python](tutorial-parallel-python.md)
batch Tutorial Parallel Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/tutorial-parallel-dotnet.md
Title: Tutorial - Run a parallel workload using the .NET API
-description: Tutorial - Transcode media files in parallel with ffmpeg in Azure Batch using the Batch .NET client library
+ Title: "Tutorial: Run a parallel workload using the .NET API"
+description: Learn how to transcode media files in parallel using ffmpeg in Azure Batch with the Batch .NET client library.
ms.devlang: csharp Previously updated : 06/22/2022 Last updated : 04/19/2023 # Tutorial: Run a parallel workload with Azure Batch using the .NET API
-Use Azure Batch to run large-scale parallel and high-performance computing (HPC) batch jobs efficiently in Azure. This tutorial walks through a C# example of running a parallel workload using Batch. You learn a common Batch application workflow and how to interact programmatically with Batch and Storage resources. You learn how to:
+Use Azure Batch to run large-scale parallel and high-performance computing (HPC) batch jobs efficiently in Azure. This tutorial walks through a C# example of running a parallel workload using Batch. You learn a common Batch application workflow and how to interact programmatically with Batch and Storage resources.
> [!div class="checklist"]
-> * Add an application package to your Batch account
-> * Authenticate with Batch and Storage accounts
-> * Upload input files to Storage
-> * Create a pool of compute nodes to run an application
-> * Create a job and tasks to process input files
-> * Monitor task execution
-> * Retrieve output files
+> * Add an application package to your Batch account.
+> * Authenticate with Batch and Storage accounts.
+> * Upload input files to Storage.
+> * Create a pool of compute nodes to run an application.
+> * Create a job and tasks to process input files.
+> * Monitor task execution.
+> * Retrieve output files.
-In this tutorial, you convert MP4 media files in parallel to MP3 format using the [ffmpeg](https://ffmpeg.org/) open-source tool.
+In this tutorial, you convert MP4 media files to MP3 format, in parallel, by using the [ffmpeg](https://ffmpeg.org) open-source tool.
[!INCLUDE [quickstarts-free-trial-note.md](../../includes/quickstarts-free-trial-note.md)] ## Prerequisites
-* [Visual Studio 2017 or later](https://www.visualstudio.com/vs), or [.NET Core 2.1 SDK](https://dotnet.microsoft.com/download/dotnet/2.1) for Linux, macOS, or Windows.
+* [Visual Studio 2017 or later](https://www.visualstudio.com/vs), or [.NET Core SDK](https://dotnet.microsoft.com/download/dotnet) for Linux, macOS, or Windows.
-* A Batch account and a linked Azure Storage account. To create these accounts, see the Batch quickstarts using the [Azure portal](quick-create-portal.md) or [Azure CLI](quick-create-cli.md).
+* A Batch account and a linked Azure Storage account. To create these accounts, see the Batch quickstart guides for the [Azure portal](quick-create-portal.md) or [Azure CLI](quick-create-cli.md).
-* Download the appropriate version of ffmpeg for your use case to your local computer. This tutorial and the related sample app use the [Windows 64-bit version of ffmpeg 4.3.1](https://github.com/GyanD/codexffmpeg/releases/tag/4.3.1-2020-11-08). For this tutorial, you only need the zip file. You do not need to unzip the file or install it locally.
+* Download the appropriate version of ffmpeg for your use case to your local computer. This tutorial and the related sample app use the [Windows 64-bit full-build version of ffmpeg 4.3.1](https://github.com/GyanD/codexffmpeg/releases/tag/4.3.1-2020-11-08). For this tutorial, you only need the zip file. You do not need to unzip the file or install it locally.
## Sign in to Azure
-Sign in to the Azure portal at [https://portal.azure.com](https://portal.azure.com).
+Sign in to the [Azure portal](https://portal.azure.com).
## Add an application package Use the Azure portal to add ffmpeg to your Batch account as an [application package](batch-application-packages.md). Application packages help you manage task applications and their deployment to the compute nodes in your pool.
-1. In the Azure portal, click **More services** > **Batch accounts**, and click the name of your Batch account.
-3. Click **Applications** > **Add**.
-4. For **Application id** enter *ffmpeg*, and a package version of *4.3.1*. Select the ffmpeg zip file you downloaded previously, and then click **OK**. The ffmpeg application package is added to your Batch account.
+1. In the Azure portal, select **More services** > **Batch accounts**, and select the name of your Batch account.
-![Add application package](./media/tutorial-parallel-dotnet/add-application.png)
+1. Select **Applications** > **Add**.
+
+ :::image type="content" source="./media/tutorial-parallel-dotnet/add-application.png" alt-text="Screenshot of the Applications section of the batch account.":::
+
+1. Enter *ffmpeg* in the **Application Id** field, and a package version of *4.3.1* in the **Version** field. Select the ffmpeg zip file that you downloaded, and then select **Submit**. The ffmpeg application package is added to your Batch account.
+
+ :::image type="content" source="./media/tutorial-parallel-dotnet/new-batch-application.png" alt-text="Screenshot of the ID and version fields in the Add application section.":::
[!INCLUDE [batch-common-credentials](../../includes/batch-common-credentials.md)]
-## Download and run the sample
+## Download and run the sample app
-### Download the sample
+### Download the sample app
[Download or clone the sample app](https://github.com/Azure-Samples/batch-dotnet-ffmpeg-tutorial) from GitHub. To clone the sample app repo with a Git client, use the following command:
Use the Azure portal to add ffmpeg to your Batch account as an [application pack
git clone https://github.com/Azure-Samples/batch-dotnet-ffmpeg-tutorial.git ```
-Navigate to the directory that contains the Visual Studio solution file `BatchDotNetFfmpegTutorial.sln`.
+Navigate to the directory that contains the Visual Studio solution file *BatchDotNetFfmpegTutorial.sln*.
-Open the solution file in Visual Studio, and update the credential strings in `Program.cs` with the values you obtained for your accounts. For example:
+Open the solution file in Visual Studio, and update the credential strings in *Program.cs* with the values you obtained for your accounts. For example:
```csharp // Batch account credentials
-private const string BatchAccountName = "mybatchaccount";
+private const string BatchAccountName = "yourbatchaccount";
private const string BatchAccountKey = "xxxxxxxxxxxxxxxxE+yXrRvJAqT9BlXwwo1CwF+SwAYOxxxxxxxxxxxxxxxx43pXi/gdiATkvbpLRl3x14pcEQ==";
-private const string BatchAccountUrl = "https://mybatchaccount.mybatchregion.batch.azure.com";
+private const string BatchAccountUrl = "https://yourbatchaccount.yourbatchregion.batch.azure.com";
// Storage account credentials
-private const string StorageAccountName = "mystorageaccount";
+private const string StorageAccountName = "yourstorageaccount";
private const string StorageAccountKey = "xxxxxxxxxxxxxxxxy4/xxxxxxxxxxxxxxxxfwpbIC5aAWA8wDu+AFXZB827Mt9lybZB1nUcQbQiUrkPtilK5BQ=="; ```
const string appPackageVersion = "4.3.1";
Build and run the application in Visual Studio, or at the command line with the `dotnet build` and `dotnet run` commands. After running the application, review the code to learn what each part of the application does. For example, in Visual Studio:
-* Right-click the solution in Solution Explorer and click **Build Solution**.
+1. Right-click the solution in Solution Explorer and select **Build Solution**.
-* Confirm the restoration of any NuGet packages, if you're prompted. If you need to download missing packages, ensure the [NuGet Package Manager](https://docs.nuget.org/consume/installing-nuget) is installed.
+1. Confirm the restoration of any NuGet packages, if you're prompted. If you need to download missing packages, ensure the [NuGet Package Manager](https://docs.nuget.org/consume/installing-nuget) is installed.
-Then run it. When you run the sample application, the console output is similar to the following. During execution, you experience a pause at `Monitoring all tasks for 'Completed' state, timeout in 00:30:00...` while the pool's compute nodes are started.
+1. Run the solution. When you run the sample application, the console output is similar to the following. During execution, you experience a pause at `Monitoring all tasks for 'Completed' state, timeout in 00:30:00...` while the pool's compute nodes are started.
``` Sample start: 11/19/2018 3:20:21 PM
Sample end: 11/19/2018 3:29:36 PM
Elapsed time: 00:09:14.3418742 ```
-Go to your Batch account in the Azure portal to monitor the pool, compute nodes, job, and tasks. For example, to see a heat map of the compute nodes in your pool, click **Pools** > *WinFFmpegPool*.
+Go to your Batch account in the Azure portal to monitor the pool, compute nodes, job, and tasks. For example, to see a heat map of the compute nodes in your pool, select **Pools** > **WinFFmpegPool**.
When tasks are running, the heat map is similar to the following:
-![Pool heat map](./media/tutorial-parallel-dotnet/pool.png)
-Typical execution time is approximately **10 minutes** when you run the application in its default configuration. Pool creation takes the most time.
+Typical execution time is approximately *10 minutes* when you run the application in its default configuration. Pool creation takes the most time.
[!INCLUDE [batch-common-tutorial-download](../../includes/batch-common-tutorial-download.md)] ## Review the code
-The following sections break down the sample application into the steps that it performs to process a workload in the Batch service. Refer to the file `Program.cs` in the solution while you read the rest of this article, since not every line of code in the sample is discussed.
+The following sections break down the sample application into the steps that it performs to process a workload in the Batch service. Refer to the file *Program.cs* in the solution while you read the rest of this article, since not every line of code in the sample is discussed.
### Authenticate Blob and Batch clients
CreateContainerIfNotExistAsync(blobClient, inputContainerName);
CreateContainerIfNotExistAsync(blobClient, outputContainerName); ```
-Then, files are uploaded to the input container from the local `InputFiles` folder. The files in storage are defined as Batch [ResourceFile](/dotnet/api/microsoft.azure.batch.resourcefile) objects that Batch can later download to compute nodes.
+Then, files are uploaded to the input container from the local *InputFiles* folder. The files in storage are defined as Batch [ResourceFile](/dotnet/api/microsoft.azure.batch.resourcefile) objects that Batch can later download to compute nodes.
-Two methods in `Program.cs` are involved in uploading the files:
+Two methods in *Program.cs* are involved in uploading the files:
-* `UploadFilesToContainerAsync`: Returns a collection of ResourceFile objects and internally calls `UploadResourceFileToContainerAsync` to upload each file that is passed in the `inputFilePaths` parameter.
-* `UploadResourceFileToContainerAsync`: Uploads each file as a blob to the input container. After uploading the file, it obtains a shared access signature (SAS) for the blob and returns a ResourceFile object to represent it.
+* `UploadFilesToContainerAsync`: Returns a collection of `ResourceFile` objects and internally calls `UploadResourceFileToContainerAsync` to upload each file that is passed in the `inputFilePaths` parameter.
+* `UploadResourceFileToContainerAsync`: Uploads each file as a blob to the input container. After uploading the file, it obtains a shared access signature (SAS) for the blob and returns a `ResourceFile` object to represent it.
```csharp string inputPath = Path.Combine(Environment.CurrentDirectory, "InputFiles");
Next, the sample creates a pool of compute nodes in the Batch account with a cal
The number of nodes and VM size are set using defined constants. Batch supports dedicated nodes and [Spot nodes](batch-spot-vms.md), and you can use either or both in your pools. Dedicated nodes are reserved for your pool. Spot nodes are offered at a reduced price from surplus VM capacity in Azure. Spot nodes become unavailable if Azure does not have enough capacity. The sample by default creates a pool containing only 5 Spot nodes in size *Standard_A1_v2*. >[!Note]
->Be sure you check your node quotas. See [Batch service quotas and limits](batch-quota-limit.md#increase-a-quota) for instructions on how to create a quota request."
+>Be sure you check your node quotas. See [Batch service quotas and limits](batch-quota-limit.md#increase-a-quota) for instructions on how to create a quota request.
The ffmpeg application is deployed to the compute nodes by adding an [ApplicationPackageReference](/dotnet/api/microsoft.azure.batch.applicationpackagereference) to the pool configuration.
The sample creates an [OutputFile](/dotnet/api/microsoft.azure.batch.outputfile)
Then, the sample adds tasks to the job with the [AddTaskAsync](/dotnet/api/microsoft.azure.batch.joboperations.addtaskasync) method, which queues them to run on the compute nodes.
-Replace the executable's file path with the name of the version that you downloaded. This sample code uses the example `ffmpeg-4.3.1-2020-09-21-full_build`.
+Replace the executable's file path with the name of the version that you downloaded. This sample code uses the example `ffmpeg-4.3.1-2020-11-08-full_build`.
```csharp // Create a collection to hold the tasks added to the job.
When no longer needed, delete the resource group, Batch account, and storage acc
In this tutorial, you learned how to: > [!div class="checklist"]
-> * Add an application package to your Batch account
-> * Authenticate with Batch and Storage accounts
-> * Upload input files to Storage
-> * Create a pool of compute nodes to run an application
-> * Create a job and tasks to process input files
-> * Monitor task execution
-> * Retrieve output files
-
-For more examples of using the .NET API to schedule and process Batch workloads, see the samples on GitHub.
-
-> [!div class="nextstepaction"]
-> [Batch C# samples](https://github.com/Azure-Samples/azure-batch-samples/tree/master/CSharp)
+> * Add an application package to your Batch account.
+> * Authenticate with Batch and Storage accounts.
+> * Upload input files to Storage.
+> * Create a pool of compute nodes to run an application.
+> * Create a job and tasks to process input files.
+> * Monitor task execution.
+> * Retrieve output files.
+
+For more examples of using the .NET API to schedule and process Batch workloads, see the [Batch C# samples on GitHub](https://github.com/Azure-Samples/azure-batch-samples/tree/master/CSharp).
batch Tutorial Parallel Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/tutorial-parallel-python.md
Title: Tutorial - Run a parallel workload using the Python API
-description: Tutorial - Process media files in parallel with ffmpeg in Azure Batch using the Batch Python client library
+ Title: "Tutorial: Run a parallel workload using the Python API"
+description: Learn how to process media files in parallel using ffmpeg in Azure Batch with the Batch Python client library.
ms.devlang: python Previously updated : 12/13/2021 Last updated : 04/19/2023 # Tutorial: Run a parallel workload with Azure Batch using the Python API
-Use Azure Batch to run large-scale parallel and high-performance computing (HPC) batch jobs efficiently in Azure. This tutorial walks through a Python example of running a parallel workload using Batch. You learn a common Batch application workflow and how to interact programmatically with Batch and Storage resources. You learn how to:
+Use Azure Batch to run large-scale parallel and high-performance computing (HPC) batch jobs efficiently in Azure. This tutorial walks through a Python example of running a parallel workload using Batch. You learn a common Batch application workflow and how to interact programmatically with Batch and Storage resources.
> [!div class="checklist"]
-> * Authenticate with Batch and Storage accounts
-> * Upload input files to Storage
-> * Create a pool of compute nodes to run an application
-> * Create a job and tasks to process input files
-> * Monitor task execution
-> * Retrieve output files
+> * Authenticate with Batch and Storage accounts.
+> * Upload input files to Storage.
+> * Create a pool of compute nodes to run an application.
+> * Create a job and tasks to process input files.
+> * Monitor task execution.
+> * Retrieve output files.
-In this tutorial, you convert MP4 media files in parallel to MP3 format using the [ffmpeg](https://ffmpeg.org/) open-source tool.
+In this tutorial, you convert MP4 media files to MP3 format, in parallel, by using the [ffmpeg](https://ffmpeg.org/) open-source tool.
[!INCLUDE [quickstarts-free-trial-note.md](../../includes/quickstarts-free-trial-note.md)] ## Prerequisites
-* [Python version 3.7+](https://www.python.org/downloads/)
+* [Python version 3.7 or later](https://www.python.org/downloads/)
-* [pip](https://pip.pypa.io/en/stable/installing/) package manager
+* [pip package manager](https://pip.pypa.io/en/stable/installation/)
-* An Azure Batch account and a linked Azure Storage account. To create these accounts, see the Batch quickstarts using the [Azure portal](quick-create-portal.md) or [Azure CLI](quick-create-cli.md).
+* An Azure Batch account and a linked Azure Storage account. To create these accounts, see the Batch quickstart guides for [Azure portal](quick-create-portal.md) or [Azure CLI](quick-create-cli.md).
## Sign in to Azure
-Sign in to the Azure portal at [https://portal.azure.com](https://portal.azure.com).
+Sign in to the [Azure portal](https://portal.azure.com).
[!INCLUDE [batch-common-credentials](../../includes/batch-common-credentials.md)]
-## Download and run the sample
+## Download and run the sample app
-### Download the sample
+### Download the sample app
[Download or clone the sample app](https://github.com/Azure-Samples/batch-python-ffmpeg-tutorial) from GitHub. To clone the sample app repo with a Git client, use the following command:
Sign in to the Azure portal at [https://portal.azure.com](https://portal.azure.c
git clone https://github.com/Azure-Samples/batch-python-ffmpeg-tutorial.git ```
-Navigate to the directory that contains the file `batch_python_tutorial_ffmpeg.py`.
+Navigate to the directory that contains the file *batch_python_tutorial_ffmpeg.py*.
In your Python environment, install the required packages using `pip`.
In your Python environment, install the required packages using `pip`.
pip install -r requirements.txt ```
-Open the file `config.py`. Update the Batch and storage account credential strings with the values unique to your accounts. For example:
+Use a code editor to open the file *config.py*. Update the Batch and storage account credential strings with the values unique to your accounts. For example:
```Python
-_BATCH_ACCOUNT_NAME = 'mybatchaccount'
+_BATCH_ACCOUNT_NAME = 'yourbatchaccount'
_BATCH_ACCOUNT_KEY = 'xxxxxxxxxxxxxxxxE+yXrRvJAqT9BlXwwo1CwF+SwAYOxxxxxxxxxxxxxxxx43pXi/gdiATkvbpLRl3x14pcEQ=='
-_BATCH_ACCOUNT_URL = 'https://mybatchaccount.mybatchregion.batch.azure.com'
+_BATCH_ACCOUNT_URL = 'https://yourbatchaccount.yourbatchregion.batch.azure.com'
_STORAGE_ACCOUNT_NAME = 'mystorageaccount' _STORAGE_ACCOUNT_KEY = 'xxxxxxxxxxxxxxxxy4/xxxxxxxxxxxxxxxxfwpbIC5aAWA8wDu+AFXZB827Mt9lybZB1nUcQbQiUrkPtilK5BQ==' ```
Sample end: 11/28/2018 3:29:36 PM
Elapsed time: 00:09:14.3418742 ```
-Go to your Batch account in the Azure portal to monitor the pool, compute nodes, job, and tasks. For example, to see a heat map of the compute nodes in your pool, click **Pools** > *LinuxFFmpegPool*.
+Go to your Batch account in the Azure portal to monitor the pool, compute nodes, job, and tasks. For example, to see a heat map of the compute nodes in your pool, select **Pools** > **LinuxFFmpegPool**.
When tasks are running, the heat map is similar to the following:
-![Pool heat map](./media/tutorial-parallel-python/pool.png)
-Typical execution time is approximately **5 minutes** when you run the application in its default configuration. Pool creation takes the most time.
+Typical execution time is approximately *5 minutes* when you run the application in its default configuration. Pool creation takes the most time.
[!INCLUDE [batch-common-tutorial-download](../../includes/batch-common-tutorial-download.md)]
batch_client = batch.BatchServiceClient(
### Upload input files
-The app uses the `blob_client` reference create a storage container for the input MP4 files and a container for the task output. Then, it calls the `upload_file_to_container` function to upload MP4 files in the local `InputFiles` directory to the container. The files in storage are defined as Batch [ResourceFile](/python/api/azure-batch/azure.batch.models.resourcefile) objects that Batch can later download to compute nodes.
+The app uses the `blob_client` reference to create a storage container for the input MP4 files and a container for the task output. Then, it calls the `upload_file_to_container` function to upload MP4 files from the local *InputFiles* directory to the container. The files in storage are defined as Batch [ResourceFile](/python/api/azure-batch/azure.batch.models.resourcefile) objects that Batch can later download to compute nodes.
```python blob_client.create_container(input_container_name, fail_on_exist=False)
input_files = [
Next, the sample creates a pool of compute nodes in the Batch account with a call to `create_pool`. This defined function uses the Batch [PoolAddParameter](/python/api/azure-batch/azure.batch.models.pooladdparameter) class to set the number of nodes, VM size, and a pool configuration. Here, a [VirtualMachineConfiguration](/python/api/azure-batch/azure.batch.models.virtualmachineconfiguration) object specifies an [ImageReference](/python/api/azure-batch/azure.batch.models.imagereference) to an Ubuntu Server 18.04 LTS image published in the Azure Marketplace. Batch supports a wide range of VM images in the Azure Marketplace, as well as custom VM images.
-The number of nodes and VM size are set using defined constants. Batch supports dedicated nodes and [Spot nodes](batch-spot-vms.md), and you can use either or both in your pools. Dedicated nodes are reserved for your pool. Spot nodes are offered at a reduced price from surplus VM capacity in Azure. Spot nodes become unavailable if Azure does not have enough capacity. The sample by default creates a pool containing only 5 Spot nodes in size *Standard_A1_v2*.
+The number of nodes and VM size are set using defined constants. Batch supports dedicated nodes and [Spot nodes](batch-spot-vms.md), and you can use either or both in your pools. Dedicated nodes are reserved for your pool. Spot nodes are offered at a reduced price from surplus VM capacity in Azure. Spot nodes become unavailable if Azure doesn't have enough capacity. The sample by default creates a pool containing only five Spot nodes in size *Standard_A1_v2*.
In addition to physical node properties, this pool configuration includes a [StartTask](/python/api/azure-batch/azure.batch.models.starttask) object. The StartTask executes on each node as that node joins the pool, and each time a node is restarted. In this example, the StartTask runs Bash shell commands to install the ffmpeg package and dependencies on the nodes.
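A hedged sketch of this pool configuration is shown below. It isn't the sample's exact code: the Marketplace image values target Ubuntu Server 18.04 LTS as described, the five Spot nodes are requested through `target_low_priority_nodes`, and the start task's package command is an assumption.

```python
import azure.batch.models as batchmodels

new_pool = batchmodels.PoolAddParameter(
    id='LinuxFFmpegPool',  # matches the pool name shown in the portal heat map
    vm_size='STANDARD_A1_v2',
    virtual_machine_configuration=batchmodels.VirtualMachineConfiguration(
        image_reference=batchmodels.ImageReference(
            publisher='Canonical',
            offer='UbuntuServer',
            sku='18.04-LTS',
            version='latest'),
        node_agent_sku_id='batch.node.ubuntu 18.04'),
    target_dedicated_nodes=0,
    target_low_priority_nodes=5,  # Spot/low-priority nodes, per the sample's defaults
    start_task=batchmodels.StartTask(
        command_line='/bin/bash -c "apt-get update && apt-get install -y ffmpeg"',
        wait_for_success=True,
        user_identity=batchmodels.UserIdentity(
            auto_user=batchmodels.AutoUserSpecification(
                scope=batchmodels.AutoUserScope.pool,
                elevation_level=batchmodels.ElevationLevel.admin))))

# batch_client is the BatchServiceClient created earlier in the sample.
batch_client.pool.add(new_pool)
```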
while datetime.datetime.now() < timeout_expiration:
After it runs the tasks, the app automatically deletes the input storage container it created, and gives you the option to delete the Batch pool and job. The BatchClient's [JobOperations](/python/api/azure-batch/azure.batch.operations.joboperations) and [PoolOperations](/python/api/azure-batch/azure.batch.operations.pooloperations) classes both have delete methods, which are called if you confirm deletion. Although you're not charged for jobs and tasks themselves, you are charged for compute nodes. Thus, we recommend that you allocate pools only as needed. When you delete the pool, all task output on the nodes is deleted. However, the input and output files remain in the storage account.
-When no longer needed, delete the resource group, Batch account, and storage account. To do so in the Azure portal, select the resource group for the Batch account and click **Delete resource group**.
+When no longer needed, delete the resource group, Batch account, and storage account. To do so in the Azure portal, select the resource group for the Batch account and choose **Delete resource group**.
## Next steps In this tutorial, you learned how to: > [!div class="checklist"]
-> * Authenticate with Batch and Storage accounts
-> * Upload input files to Storage
-> * Create a pool of compute nodes to run an application
-> * Create a job and tasks to process input files
-> * Monitor task execution
-> * Retrieve output files
-
-For more examples of using the Python API to schedule and process Batch workloads, see the samples on GitHub.
-
-> [!div class="nextstepaction"]
-> [Batch Python samples](https://github.com/Azure/azure-batch-samples/tree/master/Python/Batch)
-
+> * Authenticate with Batch and Storage accounts.
+> * Upload input files to Storage.
+> * Create a pool of compute nodes to run an application.
+> * Create a job and tasks to process input files.
+> * Monitor task execution.
+> * Retrieve output files.
+
+For more examples of using the Python API to schedule and process Batch workloads, see the [Batch Python samples](https://github.com/Azure/azure-batch-samples/tree/master/Python/Batch) on GitHub.
chaos-studio Chaos Studio Tutorial Dynamic Target Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/chaos-studio/chaos-studio-tutorial-dynamic-target-portal.md
+
+ Title: Create a chaos experiment to shut down all targets in a zone
+description: Use the Azure portal to create an experiment that uses dynamic targeting to select hosts in a zone
++++ Last updated : 4/19/2023+++
+# Create a chaos experiment to shut down all targets in a zone
+
+You can use dynamic targeting in a chaos experiment to choose a set of targets to run an experiment against, based on criteria evaluated at experiment runtime. This guide shows how you can dynamically target a Virtual Machine Scale Set to shut down instances based on availability zone. Running this experiment can help you test failover to Virtual Machine Scale Sets instances in another availability zone if there's an outage.
+
+These same steps can be used to set up and run an experiment for any fault that supports dynamic targeting. Currently, only Virtual Machine Scale Sets shutdown supports dynamic targeting.
+
+## Prerequisites
+
+- An Azure subscription. [!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)]
+- An Azure Virtual Machine Scale Sets instance.
+
+## Enable Chaos Studio on your Virtual Machine Scale Sets
+
+Chaos Studio can't inject faults against a resource until that resource has been onboarded to Chaos Studio. To onboard a resource to Chaos Studio, create a [target and capabilities](chaos-studio-targets-capabilities.md) on the resource. Virtual Machine Scale Sets only has one target type (`Microsoft-VirtualMachineScaleSet`) and one capability (`shutdown`), but other resources may have up to two target types (one for service-direct faults and one for agent-based faults) and many capabilities.
+
+1. Open the [Azure portal](https://portal.azure.com).
+1. Search for **Chaos Studio** in the search bar.
+1. Select **Targets** and find your Virtual Machine Scale Sets resource.
+1. With the Virtual Machine Scale Sets resource selected, select **Enable targets** and **Enable service-direct targets**.
+[ ![A screenshot showing the targets screen within Chaos Studio, with the VMSS resource selected.](images/tutorial-dynamic-targets-enable.png) ](images/tutorial-dynamic-targets-enable.png#lightbox)
+1. Select **Review + Enable** and **Enable**.
+
+You've now successfully onboarded your Virtual Machine Scale Set to Chaos Studio.
+
+## Create an experiment
+
+With your Virtual Machine Scale Sets now onboarded, you can create your experiment. A chaos experiment defines the actions you want to take against target resources, organized into steps, which run sequentially, and branches, which run in parallel.
+
+1. Within Chaos Studio, navigate to **Experiments** and select **Create**.
+[ ![A screenshot showing the Experiments screen, with the Create button highlighted.](images/tutorial-dynamic-targets-experiment-browse.png)](images/tutorial-dynamic-targets-experiment-browse.png#lightbox)
+1. Add a name for your experiment that complies with resource naming guidelines, and select **Next: Experiment designer**.
+[ ![A screenshot showing the experiment creation screen, with the Next button highlighted.](images/tutorial-dynamic-targets-create-exp.png)](images/tutorial-dynamic-targets-create-exp.png#lightbox)
+1. Within Step 1 and Branch 1, select **Add action**, then **Add fault**.
+[ ![A screenshot showing the experiment creation screen, with the Add Fault button highlighted.](images/tutorial-dynamic-targets-experiment-fault.png)](images/tutorial-dynamic-targets-experiment-fault.png#lightbox)
+1. Select the **VMSS Shutdown (version 2.0)** fault. Choose your desired duration and whether you want the shutdown to be abrupt, then select **Next: Target resources**.
+[ ![A screenshot showing the fault details view.](images/tutorial-dynamic-targets-fault-details.png)](images/tutorial-dynamic-targets-fault-details.png#lightbox)
+1. Choose the Virtual Machine Scale Sets resource that you want to use in the experiment, then select **Next: Scope**.
+[ ![A screenshot showing the fault details view, with the VMSS resource selected.](images/tutorial-dynamic-targets-fault-resources.png)](images/tutorial-dynamic-targets-fault-resources.png#lightbox)
+1. In the Zones dropdown, select the zone where you want Virtual Machines in the Virtual Machine Scale Sets instance to be shut down, then select **Add**.
+[ ![A screenshot showing the fault details view, with only Zone 1 selected.](images/tutorial-dynamic-targets-fault-zones.png)](images/tutorial-dynamic-targets-fault-zones.png#lightbox)
+1. Select **Review + create** and then **Create** to save the experiment.
+
+## Give experiment permission to your Virtual Machine Scale Sets
+
+When you create a chaos experiment, Chaos Studio creates a system-assigned managed identity that executes faults against your target resources. This identity must be given [appropriate permissions](chaos-studio-fault-providers.md) to the target resource for the experiment to run successfully. You can use these steps for any resource and target type by modifying the role assignment in the role selection step that follows to match the [appropriate role for that resource and target type](chaos-studio-fault-providers.md).
+
+1. Navigate to your Virtual Machine Scale Sets resource and select **Access control (IAM)**, then select **Add role assignment**.
+[ ![A screenshot of the Virtual Machine Scale Sets resource page.](images/tutorial-dynamic-targets-vmss-iam.png)](images/tutorial-dynamic-targets-vmss-iam.png#lightbox)
+1. In the **Role** tab, choose **Virtual Machine Contributor** and then select **Next**.
+[ ![A screenshot of the access control overview for Virtual Machine Scale Sets.](images/tutorial-dynamic-targets-role-selection.png)](images/tutorial-dynamic-targets-role-selection.png#lightbox)
+1. Choose **Select members** and search for your experiment name. Choose your experiment and then **Select**. If there are multiple experiments in the same tenant with the same name, your experiment name is truncated with random characters added.
+[ ![A screenshot of the access control overview.](images/tutorial-dynamic-targets-role-assignment.png)](images/tutorial-dynamic-targets-role-assignment.png#lightbox)
+1. Select **Review + assign** then **Review + assign**.
+[ ![A screenshot of the access control confirmation page.](images/tutorial-dynamic-targets-role-confirmation.png)](images/tutorial-dynamic-targets-role-confirmation.png#lightbox)
++
+## Run your experiment
+
+You're now ready to run your experiment!
+
+1. In **Chaos Studio**, navigate to the **Experiments** view, choose your experiment, and select **Start**.
+[ ![A screenshot of the Experiments view, with the Start button highlighted.](images/tutorial-dynamic-targets-start-experiment.png)](images/tutorial-dynamic-targets-start-experiment.png#lightbox)
+1. Select **OK** to confirm that you want to start the experiment.
+1. When the **Status** changes to **Running**, select **Details** for the latest run under **History** to see details for the running experiment. If any errors occur, you can view them within **Details** by selecting a failed Action and expanding **Failed targets**.
+
+To see the impact, use a tool such as **Azure Monitor** or the **Virtual Machine Scale Sets** section of the portal to check if your Virtual Machine Scale Sets targets are shut down. If they're shut down, check to see that the services running on your Virtual Machine Scale Sets are still running as expected.
+
+In this example, the chaos experiment successfully shut down the instance in Zone 1, as expected.
+[ ![A screenshot of the Virtual Machine Scale Sets resource page showing an instance in the Stopped state.](images/tutorial-dynamic-targets-view-vmss.png)](images/tutorial-dynamic-targets-view-vmss.png#lightbox)
+
+## Next steps
+
+> [!TIP]
+> If your Virtual Machine Scale Set uses an autoscale policy, the policy will provision new VMs after this experiment shuts down existing VMs. To prevent this, add a parallel branch in your experiment that includes the **Disable Autoscale** fault against the Virtual Machine Scale Set's `microsoft.insights/autoscaleSettings` resource. Remember to onboard the autoscaleSettings resource as a Target and assign the role.
+
+Now that you've run a dynamically targeted Virtual Machine Scale Sets shutdown experiment, you're ready to:
+- [Create an experiment that uses agent-based faults](chaos-studio-tutorial-agent-based-portal.md)
+- [Manage your experiment](chaos-studio-run-experiment.md)
cognitive-services Firewalls https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/firewalls.md
Previously updated : 12/06/2021 Last updated : 04/19/2023
-# How to translate behind IP firewalls with Translator
+# Use Translator behind firewalls
-Translator can translate behind firewalls using either domain-name or IP filtering. Domain-name filtering is the preferred method. If you still require IP filtering, we suggest you to get the [IP addresses details using service tag](../../virtual-network/service-tags-overview.md#service-tags-on-premises). Translator is under the **CognitiveServicesManagement** service tag.
+Translator can translate behind firewalls using either [Domain-name](../../firewall/dns-settings.md#configure-dns-proxyazure-portal) or [IP filtering](#configure-firewall). Domain-name filtering is the preferred method.
-We **do not recommend** running Microsoft Translator from behind a specific IP filtered firewall. The setup is likely to break in the future without notice.
+If you still require IP filtering, you can get the [IP address details by using the service tag downloadable JSON files](../../virtual-network/service-tags-overview.md#discover-service-tags-by-using-downloadable-json-files). Translator is under the **CognitiveServicesManagement** service tag.
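After you download the service tags JSON file linked above, a short script can extract the address prefixes for the **CognitiveServicesManagement** tag. The file name below is a placeholder; the weekly download uses a dated name, and the file's schema could change over time.

```python
import json

# Placeholder file name; the downloaded file is dated, for example ServiceTags_Public_20230417.json.
with open('ServiceTags_Public.json') as f:
    service_tags = json.load(f)

prefixes = next(
    tag['properties']['addressPrefixes']
    for tag in service_tags['values']
    if tag['name'] == 'CognitiveServicesManagement')

for prefix in prefixes:
    print(prefix)
```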
+
+## Configure firewall
+
+ Navigate to your Translator resource in the Azure portal.
+
+1. Select **Networking** from the **Resource Management** section.
+1. Under the **Firewalls and virtual networks** tab, choose **Selected Networks and Private Endpoints**.
+
+ :::image type="content" source="media/firewall-setting-azure-portal.png" alt-text="Screenshot of the firewall setting in the Azure portal.":::
+
+ > [!NOTE]
+ >
+ > * Once you enable **Selected Networks and Private Endpoints**, you must use the **Virtual Network** endpoint to call the Translator. You can't use the standard translator endpoint (`api.cognitive.microsofttranslator.com`) and you can't authenticate with an access token.
+ > * For more information, *see* [**Virtual Network Support**](reference/v3-0-reference.md#virtual-network-support).
+
+1. To grant access to an internet IP range, enter the IP address or address range (in [CIDR format](https://tools.ietf.org/html/rfc4632)) under **Firewall** > **Address Range**. Only valid public IP (`non-reserved`) addresses are accepted.
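Once the network restriction is in place, requests must go through the resource's custom endpoint with key-based authentication. A minimal sketch of such a call, assuming a hypothetical resource named `my-translator` and placeholder values for the key and target language:

```curl
# Call Translator through the resource's custom (virtual network) endpoint.
# "my-translator", the key, and the target language are placeholders.
curl -X POST "https://my-translator.cognitiveservices.azure.com/translator/text/v3.0/translate?to=fr" \
     -H "Ocp-Apim-Subscription-Key: <YOUR_SECRET_KEY>" \
     -H "Content-Type: application/json" \
     -d "[{'Text':'Hello'}]"
```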
+
+Running Microsoft Translator from behind a specific IP filtered firewall is **not recommended**. The setup is likely to break in the future without notice.
The IP addresses for Translator geographical endpoints as of September 21, 2021 are:
The IP addresses for Translator geographical endpoints as of September 21, 2021
|United States|api-nam.cognitive.microsofttranslator.com|20.42.6.144, 20.49.96.128, 40.80.190.224, 40.64.128.192|
|Europe|api-eur.cognitive.microsofttranslator.com|20.50.1.16, 20.38.87.129|
|Asia Pacific|api-apc.cognitive.microsofttranslator.com|40.80.170.160, 20.43.132.96, 20.37.196.160, 20.43.66.16|
+
+## Next steps
+
+[**Translator virtual network support**](reference/v3-0-reference.md#virtual-network-support)
+
+[**Configure virtual networks**](../cognitive-services-virtual-networks.md#grant-access-from-an-internet-ip-range)
cognitive-services V3 0 Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/reference/v3-0-reference.md
Previously updated : 12/06/2021 Last updated : 04/20/2023
Version 3 of the Translator provides a modern JSON-based Web API. It improves us
Requests to Translator are, in most cases, handled by the datacenter that is closest to where the request originated. If there's a datacenter failure when using the global endpoint, the request may be routed outside of the geography.
-To force the request to be handled within a specific geography, use the desired geographical endpoint. All requests are processed among the datacenters within the geography.
+To force the request to be handled within a specific geography, use the desired geographical endpoint. All requests are processed among the datacenters within the geography.
|Geography|Base URL (geographical endpoint)|Datacenters|
|:--|:--|:--|
-|Global (non-regional)| api.cognitive.microsofttranslator.com|Closest available datacenter|
+|Global (`non-regional`)| api.cognitive.microsofttranslator.com|Closest available datacenter|
|Asia Pacific| api-apc.cognitive.microsofttranslator.com|Korea South, Japan East, Southeast Asia, and Australia East|
|Europe| api-eur.cognitive.microsofttranslator.com|North Europe, West Europe|
|United States| api-nam.cognitive.microsofttranslator.com|East US, South Central US, West Central US, and West US 2|
-<sup>1</sup> Customers with a resource located in Switzerland North or Switzerland West can ensure that their Text API requests are served within Switzerland. To ensure that requests are handled in Switzerland, create the Translator resource in the 'Resource region' 'Switzerland North' or 'Switzerland West', then use the resource's custom endpoint in your API requests. For example: If you create a Translator resource in Azure portal with 'Resource region' as 'Switzerland North' and your resource name is 'my-swiss-n', then your custom endpoint is "https://my-swiss-n.cognitiveservices.azure.com". And a sample request to translate is:
+<sup>`1`</sup> Customers with a resource located in Switzerland North or Switzerland West can ensure that their Text API requests are served within Switzerland. To ensure that requests are handled in Switzerland, create the Translator resource in the 'Resource region' 'Switzerland North' or 'Switzerland West', then use the resource's custom endpoint in your API requests. For example: If you create a Translator resource in Azure portal with 'Resource region' as 'Switzerland North' and your resource name is 'my-swiss-n', then your custom endpoint is "https://my-swiss-n.cognitiveservices.azure.com". And a sample request to translate is:
```curl
// Pass secret key and region using headers to a custom endpoint
curl -X POST "https://my-swiss-n.cognitiveservices.azure.com/translator/text/v3.0/translate?to=fr" \
-H "Content-Type: application/json" \
-d "[{'Text':'Hello'}]" -v
```
-<sup>2</sup> Custom Translator isn't currently available in Switzerland.
+<sup>`2`</sup> Custom Translator isn't currently available in Switzerland.
## Authentication
There are three headers that you can use to authenticate your subscription. This
|Authorization|*Use with Cognitive Services subscription if you're passing an authentication token.*<br/>The value is the Bearer token: `Bearer <token>`.|
|Ocp-Apim-Subscription-Region|*Use with Cognitive Services multi-service and regional translator resource.*<br/>The value is the region of the multi-service or regional translator resource. This value is optional when using a global translator resource.|
-### Secret key
+### Secret key
+
The first option is to authenticate using the `Ocp-Apim-Subscription-Key` header. Add the `Ocp-Apim-Subscription-Key: <YOUR_SECRET_KEY>` header to your request.

#### Authenticating with a global resource
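For example, a request along these lines (a sketch; the target language and text are illustrative) authenticates against the global endpoint using only the secret key:

```curl
# Authenticate to the global endpoint with the resource's secret key.
curl -X POST "https://api.cognitive.microsofttranslator.com/translate?api-version=3.0&to=es" \
     -H "Ocp-Apim-Subscription-Key: <YOUR_SECRET_KEY>" \
     -H "Content-Type: application/json" \
     -d "[{'Text':'Hello'}]"
```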
An authentication token is valid for 10 minutes. The token should be reused when
|:--|:-|
|Authorization| The value is an access **bearer token** generated by Azure AD.</br><ul><li> The bearer token provides proof of authentication and validates the client's authorization to use the resource.</li><li> An authentication token is valid for 10 minutes and should be reused when making multiple calls to Translator.</br></li>*See* [Sample request: 2. Get a token](../../authentication.md?tabs=powershell#sample-request)</ul>|
|Ocp-Apim-Subscription-Region| The value is the region of the **translator resource**.</br><ul><li> This value is optional if the resource is global.</li></ul>|
-|Ocp-Apim-ResourceId| The value is the Resource ID for your Translator resource instance.</br><ul><li>You'll find the Resource ID in the Azure portal at **Translator Resource → Properties**. </li><li>Resource ID format: </br>/subscriptions/<**subscriptionId**>/resourceGroups/<**resourceGroupName**>/providers/Microsoft.CognitiveServices/accounts/<**resourceName**>/</li></ul>|
+|Ocp-Apim-ResourceId| The value is the Resource ID for your Translator resource instance.</br><ul><li>You find the Resource ID in the Azure portal at **Translator Resource → Properties**. </li><li>Resource ID format: </br>/subscriptions/<**subscriptionId**>/resourceGroups/<**resourceGroupName**>/providers/Microsoft.CognitiveServices/accounts/<**resourceName**>/</li></ul>|
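Put together, a request using Azure AD authentication might look like the following sketch; the token, resource ID, and region values are placeholders to replace with your own:

```curl
# Authenticate with an Azure AD bearer token, the Translator resource ID, and the resource region.
curl -X POST "https://api.cognitive.microsofttranslator.com/translate?api-version=3.0&to=de" \
     -H "Authorization: Bearer <AZURE_AD_TOKEN>" \
     -H "Ocp-Apim-ResourceId: /subscriptions/<subscriptionId>/resourceGroups/<resourceGroupName>/providers/Microsoft.CognitiveServices/accounts/<resourceName>/" \
     -H "Ocp-Apim-Subscription-Region: <YOUR_RESOURCE_REGION>" \
     -H "Content-Type: application/json" \
     -d "[{'Text':'Hello'}]"
```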
##### **Translator property page - Azure portal**
Once you turn on this capability, you must use the custom endpoint to call the T
You can find the custom endpoint after you create a [translator resource](https://portal.azure.com/#create/Microsoft.CognitiveServicesTextTranslation) and allow access from selected networks and private endpoints.
+1. Navigate to your Translator resource in the Azure portal.
+1. Select **Networking** from the **Resource Management** section.
+1. Under the **Firewalls and virtual networks** tab, choose **Selected Networks and Private Endpoints**.
+
+ :::image type="content" source="../media/virtual-network-setting-azure-portal.png" alt-text="Screenshot of the virtual network setting in the Azure portal.":::
+
+1. Select **Save** to apply your changes.
+1. Select **Keys and Endpoint** from the **Resource Management** section.
+1. Select the **Virtual Network** tab.
+1. Listed there are the endpoints for Text Translation and Document Translation.
+
+ :::image type="content" source="../media/virtual-network-endpoint.png" alt-text="Screenshot of the virtual network endpoint.":::
|Headers|Description|
|:--|:-|
|Ocp-Apim-Subscription-Key| The value is the Azure secret key for your subscription to Translator.|
curl -X POST "https://<your-custom-domain>.cognitiveservices.azure.com/translato
A standard error response is a JSON object with name/value pair named `error`. The value is also a JSON object with properties:
- * `code`: A server-defined error code.
- * `message`: A string giving a human-readable representation of the error.
+* `code`: A server-defined error code.
+* `message`: A string giving a human-readable representation of the error.
For example, a customer with a free trial subscription would receive the following error once the free quota is exhausted:
cognitive-services Data Formats https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-text-analytics-for-health/concepts/data-formats.md
- Title: Custom Text Analytics for health data formats-
-description: Learn about the data formats accepted by custom text analytics for health.
------ Previously updated : 04/14/2023----
-# Accepted data formats in custom text analytics for health
-
-Use this article to learn about formatting your data to be imported into custom text analytics for health.
-
-If you are trying to [import your data](../how-to/create-project.md#import-project) into custom Text Analytics for health, it has to follow a specific format. If you don't have data to import, you can [create your project](../how-to/create-project.md) and use the Language Studio to [label your documents](../how-to/label-data.md).
-
-Your Labels file should be in the `json` format below to be used when importing your labels into a project.
-
-```json
-{
- "projectFileVersion": "{API-VERSION}",
- "stringIndexType": "Utf16CodeUnit",
- "metadata": {
- "projectName": "{PROJECT-NAME}",
- "projectKind": "CustomHealthcare",
- "description": "Trying out custom Text Analytics for health",
- "language": "{LANGUAGE-CODE}",
- "multilingual": true,
- "storageInputContainerName": "{CONTAINER-NAME}",
- "settings": {}
- },
- "assets": {
- "projectKind": "CustomHealthcare",
- "entities": [
- {
- "category": "Entity1",
- "compositionSetting": "{COMPOSITION-SETTING}",
- "list": {
- "sublists": [
- {
- "listKey": "One",
- "synonyms": [
- {
- "language": "en",
- "values": [
- "EntityNumberOne",
- "FirstEntity"
- ]
- }
- ]
- }
- ]
- }
- },
- {
- "category": "Entity2"
- },
- {
- "category": "MedicationName",
- "list": {
- "sublists": [
- {
- "listKey": "research drugs",
- "synonyms": [
- {
- "language": "en",
- "values": [
- "rdrug a",
- "rdrug b"
- ]
- }
- ]
-
- }
- ]
-        },
- "prebuilts": "MedicationName"
- }
- ],
- "documents": [
- {
- "location": "{DOCUMENT-NAME}",
- "language": "{LANGUAGE-CODE}",
- "dataset": "{DATASET}",
- "entities": [
- {
- "regionOffset": 0,
- "regionLength": 500,
- "labels": [
- {
- "category": "Entity1",
- "offset": 25,
- "length": 10
- },
- {
- "category": "Entity2",
- "offset": 120,
- "length": 8
- }
- ]
- }
- ]
- },
- {
- "location": "{DOCUMENT-NAME}",
- "language": "{LANGUAGE-CODE}",
- "dataset": "{DATASET}",
- "entities": [
- {
- "regionOffset": 0,
- "regionLength": 100,
- "labels": [
- {
- "category": "Entity2",
- "offset": 20,
- "length": 5
- }
- ]
- }
- ]
- }
- ]
- }
-}
-
-```
-
-|Key |Placeholder |Value | Example |
-|||-|--|
-| `multilingual` | `true`| A boolean value that enables you to have documents in multiple languages in your dataset and when your model is deployed you can query the model in any supported language (not necessarily included in your training documents). See [language support](../language-support.md#) to learn more about multilingual support. | `true`|
-|`projectName`|`{PROJECT-NAME}`|Project name|`myproject`|
-| `storageInputContainerName` |`{CONTAINER-NAME}`|Container name|`mycontainer`|
-| `entities` | | Array containing all the entity types you have in the project. These are the entity types that will be extracted from your documents.| |
-| `category` | | The name of the entity type, which can be user defined for new entity definitions, or predefined for prebuilt entities. For more information, see the entity naming rules below.| |
-|`compositionSetting`|`{COMPOSITION-SETTING}`|Rule that defines how to manage multiple components in your entity. Options are `combineComponents` or `separateComponents`. |`combineComponents`|
-| `list` | | Array containing all the sublists you have in the project for a specific entity. Lists can be added to prebuilt entities or new entities with learned components.| |
-|`sublists`|`[]`|Array containing sublists. Each sublist is a key and its associated values.|`[]`|
-| `listKey`| `One` | A normalized value for the list of synonyms to map back to in prediction. | `One` |
-|`synonyms`|`[]`|Array containing all the synonyms|synonym|
-| `language` | `{LANGUAGE-CODE}` | A string specifying the language code for the synonym in your sublist. If your project is a multilingual project and you want to support your list of synonyms for all the languages in your project, you have to explicitly add your synonyms to each language. See [Language support](../language-support.md) for more information about supported language codes. |`en`|
-| `values`| `"EntityNumberone"`, `"FirstEntity"` | A list of comma separated strings that will be matched exactly for extraction and map to the list key. | `"EntityNumberone"`, `"FirstEntity"` |
-| `prebuilts` | `MedicationName` | The name of the prebuilt component populating the prebuilt entity. [Prebuilt entities](../../text-analytics-for-health/concepts/health-entity-categories.md) are automatically loaded into your project by default but you can extend them with list components in your labels file. | `MedicationName` |
-| `documents` | | Array containing all the documents in your project and list of the entities labeled within each document. | [] |
-| `location` | `{DOCUMENT-NAME}` | The location of the documents in the storage container. Since all the documents are in the root of the container this should be the document name.|`doc1.txt`|
-| `dataset` | `{DATASET}` | The dataset this file is assigned to when the data is split before training. Learn more about data splitting [here](../how-to/train-model.md#data-splitting). Possible values for this field are `Train` and `Test`. |`Train`|
-| `regionOffset` | | The inclusive character position of the start of the text. |`0`|
-| `regionLength` | | The length of the bounding box in terms of UTF16 characters. Training only considers the data in this region. |`500`|
-| `category` | | The type of entity associated with the span of text specified. | `Entity1`|
-| `offset` | | The start position for the entity text. | `25`|
-| `length` | | The length of the entity in terms of UTF16 characters. | `20`|
-| `language` | `{LANGUAGE-CODE}` | A string specifying the language code for the document used in your project. If your project is a multilingual project, choose the language code of the majority of the documents. See [Language support](../language-support.md) for more information about supported language codes. |`en`|
-
-## Entity naming rules
-
-1. [Prebuilt entity names](../../text-analytics-for-health/concepts/health-entity-categories.md) are predefined. They must be populated with a prebuilt component and it must match the entity name.
-2. New user defined entities (entities with learned components or labeled text) can't use prebuilt entity names.
-3. New user defined entities can't be populated with prebuilt components as prebuilt components must match their associated entity names and have no labeled data assigned to them in the documents array.
---
-## Next steps
-* You can import your labeled data into your project directly. Learn how to [import project](../how-to/create-project.md#import-project)
-* See the [how-to article](../how-to/label-data.md) more information about labeling your data.
-* When you're done labeling your data, you can [train your model](../how-to/train-model.md).
cognitive-services Entity Components https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-text-analytics-for-health/concepts/entity-components.md
- Title: Entity components in custom Text Analytics for health-
-description: Learn how custom Text Analytics for health extracts entities from text
------ Previously updated : 04/14/2023----
-# Entity components in custom text analytics for health
-
-In custom Text Analytics for health, entities are relevant pieces of information that are extracted from your unstructured input text. An entity can be extracted by different methods. They can be learned through context, matched from a list, or detected by a prebuilt recognized entity. Every entity in your project is composed of one or more of these methods, which are defined as your entity's components. When an entity is defined by more than one component, their predictions can overlap. You can determine the behavior of an entity prediction when its components overlap by using a fixed set of options in the **Entity options**.
-
-## Component types
-
-An entity component determines a way you can extract the entity. An entity can contain one component, which would determine the only method that would be used to extract the entity, or multiple components to expand the ways in which the entity is defined and extracted.
-
-The [Text Analytics for health entities](../../text-analytics-for-health/concepts/health-entity-categories.md) are automatically loaded into your project as entities with prebuilt components. You can define list components for entities with prebuilt components but you can't add learned components. Similarly, you can create new entities with learned and list components, but you can't populate them with additional prebuilt components.
-
-### Learned component
-
-The learned component uses the entity tags you label your text with to train a machine learned model. The model learns to predict where the entity is, based on the context within the text. Your labels provide examples of where the entity is expected to be present in text, based on the meaning of the words around it and the words that were labeled. This component is only defined if you add labels to your data for the entity. If you do not label any data, it will not have a learned component.
-
-The Text Analytics for health entities, which by default have prebuilt components can't be extended with learned components, meaning they do not require or accept further labeling to function.
--
-### List component
-
-The list component represents a fixed, closed set of related words along with their synonyms. The component performs an exact text match against the list of values you provide as synonyms. Each synonym belongs to a "list key", which can be used as the normalized, standard value for the synonym that will be returned in the output if the list component is matched. List keys are **not** used for matching.
-
-In multilingual projects, you can specify a different set of synonyms for each language. While using the prediction API, you can specify the language in the input request, which will only match the synonyms associated to that language.
---
-### Prebuilt component
-
-The [Text Analytics for health entities](../../text-analytics-for-health/concepts/health-entity-categories.md) are automatically loaded into your project as entities with prebuilt components. You can define list components for entities with prebuilt components but you cannot add learned components. Similarly, you can create new entities with learned and list components, but you cannot populate them with additional prebuilt components. Entities with prebuilt components are pretrained and can extract information relating to their categories without any labels.
---
-## Entity options
-
-When multiple components are defined for an entity, their predictions may overlap. When an overlap occurs, each entity's final prediction is determined by one of the following options.
-
-### Combine components
-
-Combine components as one entity when they overlap by taking the union of all the components.
-
-Use this to combine all components when they overlap. When components are combined, you get all the extra information that's tied to a list or prebuilt component when they are present.
-
-#### Example
-
-Suppose you have an entity called Software that has a list component, which contains "Proseware OS" as an entry. In your input data, you have "I want to buy Proseware OS 9" with "Proseware OS 9" tagged as Software:
--
-By using combine components, the entity will return with the full context as "Proseware OS 9" along with the key from the list component:
--
-Suppose you had the same utterance but only "OS 9" was predicted by the learned component:
--
-With combine components, the entity will still return as "Proseware OS 9" with the key from the list component:
---
-### Don't combine components
-
-Each overlapping component will return as a separate instance of the entity. Apply your own logic after prediction with this option.
-
-#### Example
-
-Suppose you have an entity called Software that has a list component, which contains "Proseware Desktop" as an entry. In your labeled data, you have "I want to buy Proseware Desktop Pro" with "Proseware Desktop Pro" labeled as Software:
--
-When you do not combine components, the entity will return twice:
---
-## How to use components and options
-
-Components give you the flexibility to define your entity in more than one way. When you combine components, you make sure that each component is represented and you reduce the number of entities returned in your predictions.
-
-A common practice is to extend a prebuilt component with a list of values that the prebuilt might not support. For example, if you have a **Medication Name** entity, which has a `Medication.Name` prebuilt component added to it, the entity may not predict all the medication names specific to your domain. You can use a list component to extend the values of the Medication Name entity and thereby extending the prebuilt with your own values of Medication Names.
-
-Other times you may be interested in extracting an entity through context, such as a **medical device**. You would add labels for the learned component of the medical device entity so the model learns _where_ a medical device appears based on its position within the sentence. You may also have a list of medical devices that you already know beforehand that you'd like to always extract. Combining both components in one entity gives you both options for the entity.
-
-When you do not combine components, you allow every component to act as an independent entity extractor. One way of using this option is to separate the entities extracted from a list to the ones extracted through the learned or prebuilt components to handle and treat them differently.
--
-## Next steps
-
-* [Entities with prebuilt components](../../text-analytics-for-health/concepts/health-entity-categories.md)
cognitive-services Evaluation Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-text-analytics-for-health/concepts/evaluation-metrics.md
- Title: Custom text analytics for health evaluation metrics-
-description: Learn about evaluation metrics in custom Text Analytics for health
------ Previously updated : 04/14/2023----
-# Evaluation metrics for custom Text Analytics for health models
-
-Your [dataset is split](../how-to/train-model.md#data-splitting) into two parts: a set for training, and a set for testing. The training set is used to train the model, while the testing set is used as a test for model after training to calculate the model performance and evaluation. The testing set is not introduced to the model through the training process, to make sure that the model is tested on new data.
-
-Model evaluation is triggered automatically after training is completed successfully. The evaluation process starts by using the trained model to predict user defined entities for documents in the test set, and compares them with the provided data labels (which establishes a baseline of truth). The results are returned so you can review the model's performance. User defined entities are **included** in the evaluation factoring in Learned and List components; Text Analytics for health prebuilt entities are **not** factored in the model evaluation. For evaluation, custom Text Analytics for health uses the following metrics:
-
-* **Precision**: Measures how precise/accurate your model is. It is the ratio between the correctly identified positives (true positives) and all identified positives. The precision metric reveals how many of the predicted entities are correctly labeled.
-
- `Precision = #True_Positive / (#True_Positive + #False_Positive)`
-
-* **Recall**: Measures the model's ability to predict actual positive classes. It is the ratio between the predicted true positives and what was actually tagged. The recall metric reveals how many of the labeled entities are correctly predicted.
-
- `Recall = #True_Positive / (#True_Positive + #False_Negatives)`
-
-* **F1 score**: The F1 score is a function of Precision and Recall. It's needed when you seek a balance between Precision and Recall.
-
- `F1 Score = 2 * Precision * Recall / (Precision + Recall)` <br>
-
->[!NOTE]
-> Precision, recall and F1 score are calculated for each entity separately (*entity-level* evaluation) and for the model collectively (*model-level* evaluation).
-
-## Model-level and entity-level evaluation metrics
-
-Precision, recall, and F1 score are calculated for each entity separately (entity-level evaluation) and for the model collectively (model-level evaluation).
-
-The definitions of precision, recall, and F1 score are the same for both entity-level and model-level evaluations. However, the counts for *True Positives*, *False Positives*, and *False Negatives* can differ. For example, consider the following text.
-
-### Example
-
-*The first party of this contract is John Smith, resident of 5678 Main Rd., City of Frederick, state of Nebraska. And the second party is Forrest Ray, resident of 123-345 Integer Rd., City of Corona, state of New Mexico. There is also Fannie Thomas resident of 7890 River Road, city of Colorado Springs, State of Colorado.*
-
-The model extracting entities from this text could have the following predictions:
-
-| Entity | Predicted as | Actual type |
-|--|--|--|
-| John Smith | Person | Person |
-| Frederick | Person | City |
-| Forrest | City | Person |
-| Fannie Thomas | Person | Person |
-| Colorado Springs | City | City |
-
-### Entity-level evaluation for the *person* entity
-
-The model would have the following entity-level evaluation, for the *person* entity:
-
-| Key | Count | Explanation |
-|--|--|--|
-| True Positive | 2 | *John Smith* and *Fannie Thomas* were correctly predicted as *person*. |
-| False Positive | 1 | *Frederick* was incorrectly predicted as *person* while it should have been *city*. |
-| False Negative | 1 | *Forrest* was incorrectly predicted as *city* while it should have been *person*. |
-
-* **Precision**: `#True_Positive / (#True_Positive + #False_Positive)` = `2 / (2 + 1) = 0.67`
-* **Recall**: `#True_Positive / (#True_Positive + #False_Negatives)` = `2 / (2 + 1) = 0.67`
-* **F1 Score**: `2 * Precision * Recall / (Precision + Recall)` = `(2 * 0.67 * 0.67) / (0.67 + 0.67) = 0.67`
-
-### Entity-level evaluation for the *city* entity
-
-The model would have the following entity-level evaluation, for the *city* entity:
-
-| Key | Count | Explanation |
-|--|--|--|
-| True Positive | 1 | *Colorado Springs* was correctly predicted as *city*. |
-| False Positive | 1 | *Forrest* was incorrectly predicted as *city* while it should have been *person*. |
-| False Negative | 1 | *Frederick* was incorrectly predicted as *person* while it should have been *city*. |
-
-* **Precision** = `#True_Positive / (#True_Positive + #False_Positive)` = `1 / (1 + 1) = 0.5`
-* **Recall** = `#True_Positive / (#True_Positive + #False_Negatives)` = `1 / (1 + 1) = 0.5`
-* **F1 Score** = `2 * Precision * Recall / (Precision + Recall)` = `(2 * 0.5 * 0.5) / (0.5 + 0.5) = 0.5`
-
-### Model-level evaluation for the collective model
-
-The model would have the following evaluation for the model in its entirety:
-
-| Key | Count | Explanation |
-|--|--|--|
-| True Positive | 3 | *John Smith* and *Fannie Thomas* were correctly predicted as *person*. *Colorado Springs* was correctly predicted as *city*. This is the sum of true positives for all entities. |
-| False Positive | 2 | *Forrest* was incorrectly predicted as *city* while it should have been *person*. *Frederick* was incorrectly predicted as *person* while it should have been *city*. This is the sum of false positives for all entities. |
-| False Negative | 2 | *Forrest* was incorrectly predicted as *city* while it should have been *person*. *Frederick* was incorrectly predicted as *person* while it should have been *city*. This is the sum of false negatives for all entities. |
-
-* **Precision** = `#True_Positive / (#True_Positive + #False_Positive)` = `3 / (3 + 2) = 0.6`
-* **Recall** = `#True_Positive / (#True_Positive + #False_Negatives)` = `3 / (3 + 2) = 0.6`
-* **F1 Score** = `2 * Precision * Recall / (Precision + Recall)` = `(2 * 0.6 * 0.6) / (0.6 + 0.6) = 0.6`
-
-## Interpreting entity-level evaluation metrics
-
-So what does it actually mean to have high precision or high recall for a certain entity?
-
-| Recall | Precision | Interpretation |
-|--|--|--|
-| High | High | This entity is handled well by the model. |
-| Low | High | The model cannot always extract this entity, but when it does it is with high confidence. |
-| High | Low | The model extracts this entity well, however it is with low confidence as it is sometimes extracted as another type. |
-| Low | Low | This entity type is poorly handled by the model, because it is not usually extracted. When it is, it is not with high confidence. |
-
-## Guidance
-
-After you train your model, you will see some guidance and recommendations on how to improve the model. It's recommended to have a model covering all points in the guidance section.
-
-* Training set has enough data: When an entity type has fewer than 15 labeled instances in the training data, it can lead to lower accuracy due to the model not being adequately trained on these cases. In this case, consider adding more labeled data in the training set. You can check the *data distribution* tab for more guidance.
-
-* All entity types are present in test set: When the testing data lacks labeled instances for an entity type, the model's test performance may become less comprehensive due to untested scenarios. You can check the *test set data distribution* tab for more guidance.
-
-* Entity types are balanced within training and test sets: When sampling bias causes an inaccurate representation of an entity type's frequency, it can lead to lower accuracy due to the model expecting that entity type to occur too often or too little. You can check the *data distribution* tab for more guidance.
-
-* Entity types are evenly distributed between training and test sets: When the mix of entity types doesn't match between training and test sets, it can lead to lower testing accuracy due to the model being trained differently from how it's being tested. You can check the *data distribution* tab for more guidance.
-
-* Unclear distinction between entity types in training set: When the training data is similar for multiple entity types, it can lead to lower accuracy because the entity types may be frequently misclassified as each other. Review the following entity types and consider merging them if they're similar. Otherwise, add more examples to better distinguish them from each other. You can check the *confusion matrix* tab for more guidance.
--
-## Confusion matrix
-
-A Confusion matrix is an N x N matrix used for model performance evaluation, where N is the number of entities.
-The matrix compares the expected labels with the ones predicted by the model.
-This gives a holistic view of how well the model is performing and what kinds of errors it is making.
-
-You can use the Confusion matrix to identify entities that are too close to each other and often get mistaken (ambiguity). In this case consider merging these entity types together. If that isn't possible, consider adding more tagged examples of both entities to help the model differentiate between them.
-
-The highlighted diagonal in the image below shows the correctly predicted entities, where the predicted tag is the same as the actual tag.
--
-You can calculate the entity-level and model-level evaluation metrics from the confusion matrix:
-
-* The values in the diagonal are the *True Positive* values of each entity.
-* The sum of the values in the entity rows (excluding the diagonal) is the *false positive* of the model.
-* The sum of the values in the entity columns (excluding the diagonal) is the *false Negative* of the model.
-
-Similarly,
-
-* The *true positive* of the model is the sum of *true Positives* for all entities.
-* The *false positive* of the model is the sum of *false positives* for all entities.
-* The *false Negative* of the model is the sum of *false negatives* for all entities.
-
-## Next steps
-
-* [Custom text analytics for health overview](../overview.md)
-* [View a model's performance in Language Studio](../how-to/view-model-evaluation.md)
-* [Train a model](../how-to/train-model.md)
cognitive-services Call Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-text-analytics-for-health/how-to/call-api.md
- Title: Send a custom Text Analytics for health request to your custom model
-description: Learn how to send a request for custom text analytics for health.
------- Previously updated : 04/14/2023----
-# Send queries to your custom Text Analytics for health model
-
-After the deployment is added successfully, you can query the deployment to extract entities from your text based on the model you assigned to the deployment.
-You can query the deployment programmatically using the [Prediction API](https://aka.ms/ct-runtime-api).
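As a rough sketch of that call, assuming the task follows the same asynchronous `analyze-text/jobs` pattern as other custom Language features and that the task kind matches the `CustomHealthcare` project kind shown in the data formats article (the api-version and all names are placeholders):

```curl
# Submit an asynchronous prediction job against the deployed custom model (sketch; placeholders throughout).
curl -X POST "{ENDPOINT}/language/analyze-text/jobs?api-version={API-VERSION}" \
     -H "Ocp-Apim-Subscription-Key: {YOUR-KEY}" \
     -H "Content-Type: application/json" \
     -d '{
           "displayName": "Extract custom health entities",
           "analysisInput": {
             "documents": [
               { "id": "1", "language": "en", "text": "Patient was prescribed 50 mg of rdrug a daily." }
             ]
           },
           "tasks": [
             {
               "kind": "CustomHealthcare",
               "taskName": "Custom TA4H task",
               "parameters": {
                 "projectName": "{PROJECT-NAME}",
                 "deploymentName": "{DEPLOYMENT-NAME}"
               }
             }
           ]
         }'
```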
-
-## Test deployed model
-
-You can use Language Studio to submit the custom Text Analytics for health task and visualize the results.
--
-## Send a custom text analytics for health request to your model
-
-# [Language Studio](#tab/language-studio)
--
-# [REST API](#tab/rest-api)
-
-First you will need to get your resource key and endpoint:
--
-### Submit a custom Text Analytics for health task
--
-### Get task results
-----
-## Next steps
-
-* [Custom text analytics for health](../overview.md)
cognitive-services Create Project https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-text-analytics-for-health/how-to/create-project.md
- Title: Using Azure resources in custom Text Analytics for health-
-description: Learn about the steps for using Azure resources with custom text analytics for health.
------ Previously updated : 04/14/2023----
-# How to create custom Text Analytics for health project
-
-Use this article to learn how to set up the requirements for starting with custom text analytics for health and create a project.
-
-## Prerequisites
-
-Before you start using custom text analytics for health, you need:
-
-* An Azure subscription - [Create one for free](https://azure.microsoft.com/free/cognitive-services).
-
-## Create a Language resource
-
-Before you start using custom text analytics for health, you'll need an Azure Language resource. It's recommended to create your Language resource and connect a storage account to it in the Azure portal. Creating a resource in the Azure portal lets you create an Azure storage account at the same time, with all of the required permissions preconfigured. You can also read further in the article to learn how to use a pre-existing resource, and configure it to work with custom text analytics for health.
-
-You will also need an Azure storage account where you will upload the `.txt` documents that will be used to train a model to extract entities.
-
-> [!NOTE]
-> * You need to have an **owner** role assigned on the resource group to create a Language resource.
-> * If you will connect a pre-existing storage account, you should have an owner role assigned to it.
-
-## Create Language resource and connect storage account
-
-You can create a resource in the following ways:
-
-* The Azure portal
-* Language Studio
-* PowerShell
-
-> [!Note]
-> You shouldn't move the storage account to a different resource group or subscription once it's linked with the Language resource.
-----
-> [!NOTE]
-> * The process of connecting a storage account to your Language resource is irreversible; it cannot be disconnected later.
-> * You can only connect your language resource to one storage account.
-
-## Using a pre-existing Language resource
--
-## Create a custom Text Analytics for health project
-
-Once your resource and storage container are configured, create a new custom text analytics for health project. A project is a work area for building your custom AI models based on your data. Your project can only be accessed by you and others who have access to the Azure resource being used. If you have labeled data, you can use it to get started by [importing a project](#import-project).
-
-### [Language Studio](#tab/language-studio)
--
-### [REST APIs](#tab/rest-api)
----
-## Import project
-
-If you have already labeled data, you can use it to get started with the service. Make sure that your labeled data follows the [accepted data formats](../concepts/data-formats.md).
-
-### [Language Studio](#tab/language-studio)
--
-### [REST APIs](#tab/rest-api)
----
-## Get project details
-
-### [Language Studio](#tab/language-studio)
--
-### [REST APIs](#tab/rest-api)
----
-## Delete project
-
-### [Language Studio](#tab/language-studio)
--
-### [REST APIs](#tab/rest-api)
----
-## Next steps
-
-* You should have an idea of the [project schema](design-schema.md) you will use to label your data.
-
-* After you define your schema, you can start [labeling your data](label-data.md), which will be used for model training, evaluation, and finally making predictions.
cognitive-services Deploy Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-text-analytics-for-health/how-to/deploy-model.md
- Title: Deploy a custom Text Analytics for health model-
-description: Learn about deploying a model for custom Text Analytics for health.
------ Previously updated : 04/14/2023----
-# Deploy a custom text analytics for health model
-
-Once you're satisfied with how your model performs, it's ready to be deployed and used to recognize entities in text. Deploying a model makes it available for use through the [prediction API](https://aka.ms/ct-runtime-swagger).
-
-## Prerequisites
-
-* A successfully [created project](create-project.md) with a configured Azure storage account.
-* Text data that has [been uploaded](design-schema.md#data-preparation) to your storage account.
-* [Labeled data](label-data.md) and a successfully [trained model](train-model.md).
-* Reviewed the [model evaluation details](view-model-evaluation.md) to determine how your model is performing.
-
-For more information, see [project development lifecycle](../overview.md#project-development-lifecycle).
-
-## Deploy model
-
-After you've reviewed your model's performance and decided it can be used in your environment, you need to assign it to a deployment. Assigning the model to a deployment makes it available for use through the [prediction API](https://aka.ms/ct-runtime-swagger). It is recommended to create a deployment named *production* to which you assign the best model you have built so far and use it in your system. You can create another deployment called *staging* to which you can assign the model you're currently working on to be able to test it. You can have a maximum of 10 deployments in your project.
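For reference, the REST call that assigns a trained model to a deployment is typically shaped like the following sketch, assuming the deployment path and body used by other custom Language authoring features apply here (all values are placeholders):

```curl
# Assign a trained model to a deployment (sketch; placeholders throughout).
curl -X PUT "{ENDPOINT}/language/authoring/analyze-text/projects/{PROJECT-NAME}/deployments/{DEPLOYMENT-NAME}?api-version={API-VERSION}" \
     -H "Ocp-Apim-Subscription-Key: {YOUR-KEY}" \
     -H "Content-Type: application/json" \
     -d '{ "trainedModelLabel": "{MODEL-NAME}" }'
```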
-
-# [Language Studio](#tab/language-studio)
-
-
-# [REST APIs](#tab/rest-api)
-
-### Submit deployment job
--
-### Get deployment job status
----
-## Swap deployments
-
-After you are done testing a model assigned to one deployment and you want to assign this model to another deployment, you can swap these two deployments. Swapping deployments involves taking the model assigned to the first deployment and assigning it to the second deployment, and taking the model assigned to the second deployment and assigning it to the first. You can use this process to swap your *production* and *staging* deployments when you want to take the model assigned to *staging* and assign it to *production*.
-
-# [Language Studio](#tab/language-studio)
--
-# [REST APIs](#tab/rest-api)
-----
-## Delete deployment
-
-# [Language Studio](#tab/language-studio)
--
-# [REST APIs](#tab/rest-api)
----
-## Assign deployment resources
-
-You can [deploy your project to multiple regions](../../concepts/custom-features/multi-region-deployment.md) by assigning different Language resources that exist in different regions.
-
-# [Language Studio](#tab/language-studio)
--
-# [REST APIs](#tab/rest-api)
----
-## Unassign deployment resources
-
-When unassigning or removing a deployment resource from a project, you will also delete all the deployments that have been deployed to that resource's region.
-
-# [Language Studio](#tab/language-studio)
--
-# [REST APIs](#tab/rest-api)
----
-## Next steps
-
-After you have a deployment, you can use it to [extract entities](call-api.md) from text.
cognitive-services Design Schema https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-text-analytics-for-health/how-to/design-schema.md
- Title: Preparing data and designing a schema for custom Text Analytics for health-
-description: Learn about how to select and prepare data, to be successful in creating custom TA4H projects.
------ Previously updated : 04/14/2023----
-# How to prepare data and define a schema for custom Text Analytics for health
-
-In order to create a custom TA4H model, you will need quality data to train it. This article covers how you should select and prepare your data, along with defining a schema. Defining the schema is the first step in the [project development lifecycle](../overview.md#project-development-lifecycle), and it entails defining the entity types or categories that you need your model to extract from the text at runtime.
-
-## Schema design
-
-Custom Text Analytics for health allows you to extend and customize the Text Analytics for health entity map. The first step of the process is building your schema, which allows you to define the new entity types or categories that you need your model to extract from text in addition to the Text Analytics for health existing entities at runtime.
-
-* Review documents in your dataset to be familiar with their format and structure.
-
-* Identify the entities you want to extract from the data.
-
- For example, if you are extracting entities from support emails, you might need to extract "Customer name", "Product name", "Request date", and "Contact information".
-
-* Avoid entity types ambiguity.
-
- **Ambiguity** happens when entity types you select are similar to each other. The more ambiguous your schema the more labeled data you will need to differentiate between different entity types.
-
-    For example, if you are extracting data from a legal contract, to extract "Name of first party" and "Name of second party" you will need to add more examples to overcome ambiguity since the names of both parties look similar. Avoiding ambiguity saves time and effort, and yields better results.
-
-* Avoid complex entities. Complex entities can be difficult to pick out precisely from text; consider breaking them down into multiple entities.
-
-    For example, extracting "Address" would be challenging if it's not broken down into smaller entities. There are so many variations of how addresses appear that it would take a large number of labeled entities to teach the model to extract an address, as a whole, without breaking it down. However, if you replace "Address" with "Street Name", "PO Box", "City", "State" and "Zip", the model will require fewer labels per entity.
--
-## Add entities
-
-To add entities to your project:
-
-1. Move to **Entities** pivot from the top of the page.
-
-2. [Text Analytics for health entities](../../text-analytics-for-health/concepts/health-entity-categories.md) are automatically loaded into your project. To add additional entity categories, select **Add** from the top menu. You will be prompted to type in a name to finish creating the entity.
-
-3. After creating an entity, you'll be routed to the entity details page where you can define the composition settings for this entity.
-
-4. Entities are defined by [entity components](../concepts/entity-components.md): learned, list or prebuilt. Text Analytics for health entities are by default populated with the prebuilt component and cannot have learned components. Your newly defined entities can be populated with the learned component once you add labels for them in your data but cannot be populated with the prebuilt component.
-
-5. You can add a [list](../concepts/entity-components.md#list-component) component to any of your entities.
-
-
-### Add list component
-
-To add a **list** component, select **Add new list**. You can add multiple lists to each entity.
-
-1. To create a new list, enter the list key in the *Enter value* text box. This is the normalized value that will be returned when any of the synonym values is extracted.
-
-2. For multilingual projects, select the language of the synonyms list from the *language* drop-down menu, then type your synonyms and press Enter after each one. It is recommended to have synonyms lists in multiple languages.
-
- <!--:::image type="content" source="../media/add-list-component.png" alt-text="A screenshot showing a list component in Language Studio." lightbox="../media/add-list-component.png":::-->
-
-### Define entity options
-
-Change to the **Entity options** pivot in the entity details page. When multiple components are defined for an entity, their predictions may overlap. When an overlap occurs, each entity's final prediction is determined based on the [entity option](../concepts/entity-components.md#entity-options) you select in this step. Select the one that you want to apply to this entity and click on the **Save** button at the top.
-
- <!--:::image type="content" source="../media/entity-options.png" alt-text="A screenshot showing an entity option in Language Studio." lightbox="../media/entity-options.png":::-->
--
-After you create your entities, you can come back and edit them. You can **Edit entity components** or **delete** them by selecting this option from the top menu.
--
-## Data selection
-
-The quality of data you train your model with affects model performance greatly.
-
-* Use real-life data that reflects your domain's problem space to effectively train your model. You can use synthetic data to accelerate the initial model training process, but it will likely differ from your real-life data and make your model less effective when used.
-
-* Balance your data distribution as much as possible without deviating far from the distribution in real-life. For example, if you are training your model to extract entities from legal documents that may come in many different formats and languages, you should provide examples that reflect the diversity you would expect to see in real life.
-
-* Use diverse data whenever possible to avoid overfitting your model. Less diversity in training data may lead to your model learning spurious correlations that may not exist in real-life data.
-
-* Avoid duplicate documents in your data. Duplicate data has a negative effect on the training process, model metrics, and model performance.
-
-* Consider where your data comes from. If you are collecting data from one person, department, or part of your scenario, you are likely missing diversity that may be important for your model to learn about.
-
-> [!NOTE]
-> If your documents are in multiple languages, select the **enable multi-lingual** option during [project creation](../quickstart.md) and set the **language** option to the language of the majority of your documents.
-
-## Data preparation
-
-As a prerequisite for creating a project, your training data needs to be uploaded to a blob container in your storage account. You can create and upload training documents from Azure directly, or by using the Azure Storage Explorer tool. Using the Azure Storage Explorer tool allows you to upload more data quickly.
-
-* [Create and upload documents from Azure](../../../../storage/blobs/storage-quickstart-blobs-portal.md#create-a-container)
-* [Create and upload documents using Azure Storage Explorer](../../../../vs-azure-tools-storage-explorer-blobs.md)
-
-You can only use `.txt` documents. If your data is in another format, you can use the [CLUtils parse command](https://github.com/microsoft/CognitiveServicesLanguageUtilities/blob/main/CustomTextAnalytics.CLUtils/Solution/CogSLanguageUtilities.ViewLayer.CliCommands/Commands/ParseCommand/README.md) to change your document format.
-
-You can upload an annotated dataset, or you can upload an unannotated one and [label your data](../how-to/label-data.md) in Language studio.
-
-## Test set
-
-When defining the testing set, make sure to include example documents that are not present in the training set. Defining the testing set is an important step to calculate the [model performance](view-model-evaluation.md#model-details). Also, make sure that the testing set includes documents that represent all entities used in your project.
-
-## Next steps
-
-If you haven't already, create a custom Text Analytics for health project. If it's your first time using custom Text Analytics for health, consider following the [quickstart](../quickstart.md) to create an example project. You can also see the [how-to article](../how-to/create-project.md) for more details on what you need to create a project.
cognitive-services Fail Over https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-text-analytics-for-health/how-to/fail-over.md
- Title: Back up and recover your custom Text Analytics for health models-
-description: Learn how to save and recover your custom Text Analytics for health models.
------ Previously updated : 04/14/2023----
-# Back up and recover your custom Text Analytics for health models
-
-When you create a Language resource, you specify a region for it to be created in. From then on, your resource and all of the operations related to it take place in the specified Azure server region. It's rare, but not impossible, to encounter a network issue that affects an entire region. If your solution needs to always be available, then you should design it to fail over into another region. This requires two Azure Language resources in different regions and synchronizing custom models across them.
-
-If your app or business depends on the use of a custom Text Analytics for health model, we recommend that you create a replica of your project in an additional supported region. If a regional outage occurs, you can then access your model in the other fail-over region where you replicated your project.
-
-Replicating a project means that you export your project metadata and assets, and import them into a new project. This only makes a copy of your project settings and tagged data. You still need to [train](./train-model.md) and [deploy](./deploy-model.md) the models to be available for use with [prediction APIs](https://aka.ms/ct-runtime-swagger).
-
-In this article, you will learn how to use the export and import APIs to replicate your project from one resource to another in a different supported geographical region, along with guidance on keeping your projects in sync and the changes needed to your runtime consumption.
-
-## Prerequisites
-
-* Two Azure Language resources in different Azure regions. [Create your resources](./create-project.md#create-a-language-resource) and connect them to an Azure storage account. It's recommended that you connect each of your Language resources to different storage accounts. Each storage account should be located in the same respective regions that your separate Language resources are in. You can follow the [quickstart](../quickstart.md?pivots=rest-api#create-a-new-azure-language-resource-and-azure-storage-account) to create an additional Language resource and storage account.
--
-## Get your resource keys endpoint
-
-Use the following steps to get the keys and endpoint of your primary and secondary resources. These will be used in the following steps.
--
-> [!TIP]
-> Keep a note of keys and endpoints for both primary and secondary resources as well as the primary and secondary container names. Use these values to replace the following placeholders:
-`{PRIMARY-ENDPOINT}`, `{PRIMARY-RESOURCE-KEY}`, `{PRIMARY-CONTAINER-NAME}`, `{SECONDARY-ENDPOINT}`, `{SECONDARY-RESOURCE-KEY}`, and `{SECONDARY-CONTAINER-NAME}`.
-> Also take note of your project name, your model name and your deployment name. Use these values to replace the following placeholders: `{PROJECT-NAME}`, `{MODEL-NAME}` and `{DEPLOYMENT-NAME}`.
-
-## Export your primary project assets
-
-Start by exporting the project assets from the project in your primary resource.
-
-### Submit export job
-
-Replace the placeholders in the following request with your `{PRIMARY-ENDPOINT}` and `{PRIMARY-RESOURCE-KEY}` that you obtained in the first step.
--
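A minimal sketch of that export request, assuming the `:export` action on the standard Language authoring path applies to custom Text Analytics for health projects (placeholders throughout):

```curl
# Export the primary project's assets (sketch; placeholders throughout).
curl -X POST "{PRIMARY-ENDPOINT}/language/authoring/analyze-text/projects/{PROJECT-NAME}/:export?stringIndexType=Utf16CodeUnit&api-version={API-VERSION}" \
     -H "Ocp-Apim-Subscription-Key: {PRIMARY-RESOURCE-KEY}" \
     -H "Content-Type: application/json"
```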
-### Get export job status
-
-Replace the placeholders in the following request with your `{PRIMARY-ENDPOINT}` and `{PRIMARY-RESOURCE-KEY}` that you obtained in the first step.
---
-Copy the response body as you will use it as the body for the next import job.
-
-## Import to a new project
-
-Now go ahead and import the exported project assets in your new project in the secondary region so you can replicate it.
-
-### Submit import job
-
-Replace the placeholders in the following request with your `{SECONDARY-ENDPOINT}`, `{SECONDARY-RESOURCE-KEY}`, and `{SECONDARY-CONTAINER-NAME}` that you obtained in the first step.
--
-### Get import job status
-
-Replace the placeholders in the following request with your `{SECONDARY-ENDPOINT}` and `{SECONDARY-RESOURCE-KEY}` that you obtained in the first step.
---
-## Train your model
-
-After importing your project, you have only copied the project's metadata and assets. You still need to train your model, which will incur usage on your account.
-
-### Submit training job
-
-Replace the placeholders in the following request with your `{SECONDARY-ENDPOINT}` and `{SECONDARY-RESOURCE-KEY}` that you obtained in the first step.
---
-### Get training status
-
-Replace the placeholders in the following request with your `{SECONDARY-ENDPOINT}` and `{SECONDARY-RESOURCE-KEY}` that you obtained in the first step.
--
-## Deploy your model
-
-This is the step where you make your trained model available for consumption via the [runtime prediction API](https://aka.ms/ct-runtime-swagger).
-
-> [!TIP]
-> Use the same deployment name as your primary project for easier maintenance and minimal changes to your system to handle redirecting your traffic.
-
-### Submit deployment job
-
-Replace the placeholders in the following request with your `{SECONDARY-ENDPOINT}` and `{SECONDARY-RESOURCE-KEY}` that you obtained in the first step.
--
-### Get the deployment status
-
-Replace the placeholders in the following request with your `{SECONDARY-ENDPOINT}` and `{SECONDARY-RESOURCE-KEY}` that you obtained in the first step.
--
-## Changes in calling the runtime
-
-Within your system, at the step where you call the [runtime prediction API](https://aka.ms/ct-runtime-swagger), check the response code returned from the submit task API. If you observe a **consistent** failure in submitting the request, this could indicate an outage in your primary region. A single failure doesn't mean an outage; it may be a transient issue. Retry submitting the job through the secondary resource you have created. For the second request, use your `{SECONDARY-ENDPOINT}` and `{SECONDARY-RESOURCE-KEY}`. If you have followed the steps above, `{PROJECT-NAME}` and `{DEPLOYMENT-NAME}` would be the same, so no changes are required to the request body.
-
-If you revert to using your secondary resource, you will observe a slight increase in latency because of the difference in regions where your model is deployed.
-
-## Check if your projects are out of sync
-
-Maintaining the freshness of both projects is an important part of the process. You need to frequently check if any updates were made to your primary project so that you move them over to your secondary project. This way if your primary region fails and you move into the secondary region you should expect similar model performance since it already contains the latest updates. Setting the frequency of checking if your projects are in sync is an important choice. We recommend that you do this check daily in order to guarantee the freshness of data in your secondary model.
-
-### Get project details
-
-Use the following URL to get your project details; one of the keys returned in the body indicates the last modified date of the project.
-Repeat the following step twice, once for your primary project and once for your secondary project, and compare the timestamps returned for both to check whether they are out of sync.
-
- [!INCLUDE [get project details](../includes/rest-api/get-project-details.md)]
--
-Repeat the same steps for your replicated project using `{SECONDARY-ENDPOINT}` and `{SECONDARY-RESOURCE-KEY}`. Compare the returned `lastModifiedDateTime` from both projects. If your primary project was modified more recently than your secondary one, you need to repeat the steps of [exporting](#export-your-primary-project-assets), [importing](#import-to-a-new-project), [training](#train-your-model) and [deploying](#deploy-your-model).
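-
-For illustration, a minimal Python sketch of this freshness check might look like the following. `PROJECT_DETAILS_PATH` is a placeholder for the get-project-details path shown above, and the comparison assumes `lastModifiedDateTime` is an ISO 8601 timestamp:
-
-```python
-import requests
-
-PROJECT_DETAILS_PATH = "/<get-project-details-path-from-the-sample>"
-
-def last_modified(endpoint, key):
-    response = requests.get(
-        endpoint + PROJECT_DETAILS_PATH,
-        headers={"Ocp-Apim-Subscription-Key": key},
-    )
-    response.raise_for_status()
-    return response.json()["lastModifiedDateTime"]
-
-primary = last_modified("{PRIMARY-ENDPOINT}", "{PRIMARY-RESOURCE-KEY}")
-secondary = last_modified("{SECONDARY-ENDPOINT}", "{SECONDARY-RESOURCE-KEY}")
-
-# ISO 8601 timestamps in the same format compare correctly as strings.
-if primary > secondary:
-    print("Projects are out of sync: repeat export, import, train, and deploy.")
-```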
--
-## Next steps
-
-In this article, you have learned how to use the export and import APIs to replicate your project to a secondary Language resource in another region. Next, explore the API reference docs to see what else you can do with authoring APIs.
-
-* [Authoring REST API reference](https://aka.ms/ct-authoring-swagger)
-
-* [Runtime prediction REST API reference](https://aka.ms/ct-runtime-swagger)
cognitive-services Label Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-text-analytics-for-health/how-to/label-data.md
- Title: How to label your data for custom Text Analytics for health-
-description: Learn how to label your data for use with custom Text Analytics for health.
------ Previously updated : 04/14/2023----
-# Label your data using the Language Studio
-
-Data labeling is a crucial step in the development lifecycle. In this step, you label your documents with the new entities you defined in your schema to populate their learned components. This data will be used in the next step when training your model so that your model can learn from the labeled data to know which entities to extract. If you already have labeled data, you can directly [import](create-project.md#import-project) it into your project, but you need to make sure that your data follows the [accepted data format](../concepts/data-formats.md). See [create project](create-project.md#import-project) to learn more about importing labeled data into your project. If your data isn't labeled already, you can label it in the [Language Studio](https://aka.ms/languageStudio).
-
-## Prerequisites
-
-Before you can label your data, you need:
-
-* A successfully [created project](create-project.md) with a configured Azure blob storage account
-* Text data that [has been uploaded](design-schema.md#data-preparation) to your storage account.
-
-See the [project development lifecycle](../overview.md#project-development-lifecycle) for more information.
-
-## Data labeling guidelines
-
-After preparing your data, designing your schema and creating your project, you will need to label your data. Labeling your data is important so your model knows which words will be associated with the entity types you need to extract. When you label your data in [Language Studio](https://aka.ms/languageStudio) (or import labeled data), these labels are stored in the JSON document in your storage container that you have connected to this project.
-
-As you label your data, keep in mind:
-
-* You can't add labels for Text Analytics for health entities as they're pretrained prebuilt entities. You can only add labels to new entity categories that you defined during schema definition.
-
-If you want to improve the recall for a prebuilt entity, you can extend it by adding a list component while you are [defining your schema](design-schema.md).
-
-* In general, more labeled data leads to better results, provided the data is labeled accurately.
-
-* The precision, consistency and completeness of your labeled data are key factors to determining model performance.
-
- * **Label precisely**: Always label each entity with its correct type. Only include what you want extracted, and avoid unnecessary data in your labels.
- * **Label consistently**: The same entity should have the same label across all the documents.
- * **Label completely**: Label all the instances of the entity in all your documents.
-
- > [!NOTE]
- > There is no fixed number of labels that can guarantee your model will perform the best. Model performance is dependent on possible ambiguity in your schema, and the quality of your labeled data. Nevertheless, we recommend having around 50 labeled instances per entity type.
-
-## Label your data
-
-Use the following steps to label your data:
-
-1. Go to your project page in [Language Studio](https://aka.ms/languageStudio).
-
-2. From the left side menu, select **Data labeling**. You can find a list of all documents in your storage container.
-
- <!--:::image type="content" source="../media/tagging-files-view.png" alt-text="A screenshot showing the Language Studio screen for labeling data." lightbox="../media/tagging-files-view.png":::-->
-
- >[!TIP]
- > You can use the filters in the top menu to view the unlabeled documents so that you can start labeling them.
- > You can also use the filters to view the documents that are labeled with a specific entity type.
-
-3. Change to a single document view from the left side in the top menu or select a specific document to start labeling. You can find a list of all `.txt` documents available in your project to the left. You can use the **Back** and **Next** button from the bottom of the page to navigate through your documents.
-
- > [!NOTE]
- > If you enabled multiple languages for your project, you will find a **Language** dropdown in the top menu, which lets you select the language of each document. Hebrew is not supported with multi-lingual projects.
-
-4. In the right side pane, you can use the **Add entity type** button to add additional entities to your project that you missed during schema definition.
-
- <!--:::image type="content" source="../media/tag-1.png" alt-text="A screenshot showing complete data labeling." lightbox="../media/tag-1.png":::-->
-
-5. You have two options to label your document:
-
- |Option |Description |
- |||
- |Label using a brush | Select the brush icon next to an entity type in the right pane, then highlight the text in the document you want to annotate with this entity type. |
- |Label using a menu | Highlight the word you want to label as an entity, and a menu will appear. Select the entity type you want to assign for this entity. |
-
- The below screenshot shows labeling using a brush.
-
- :::image type="content" source="../media/tag-options.png" alt-text="A screenshot showing the labeling options offered in Custom NER." lightbox="../media/tag-options.png":::
-
-6. In the right side pane under the **Labels** pivot you can find all the entity types in your project and the count of labeled instances for each. The prebuilt entities will be shown for reference, but you will not be able to label these prebuilt entities as they are pretrained.
-
-7. In the bottom section of the right side pane you can add the current document you are viewing to the training set or the testing set. By default all the documents are added to your training set. See [training and testing sets](train-model.md#data-splitting) for information on how they are used for model training and evaluation.
-
- > [!TIP]
- > If you are planning on using **Automatic** data splitting, use the default option of assigning all the documents into your training set.
-
-8. Under the **Distribution** pivot you can view the distribution across training and testing sets. You have two options for viewing:
- * *Total instances* where you can view count of all labeled instances of a specific entity type.
- * *Documents with at least one label* where each document is counted if it contains at least one labeled instance of this entity.
-
-9. When you're labeling, your changes are synced periodically. If they haven't been saved yet, you will find a warning at the top of your page. If you want to save manually, select the **Save labels** button at the bottom of the page.
-
-## Remove labels
-
-To remove a label:
-
-1. Select the entity you want to remove a label from.
-2. Scroll through the menu that appears, and select **Remove label**.
-
-## Delete entities
-
-You cannot delete any of the Text Analytics for health pretrained entities because they have a prebuilt component. You are only permitted to delete newly defined entity categories. To delete an entity, select the delete icon next to the entity you want to remove. Deleting an entity removes all its labeled instances from your dataset.
-
-## Next steps
-
-After you've labeled your data, you can begin [training a model](train-model.md) that will learn based on your data.
cognitive-services Train Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-text-analytics-for-health/how-to/train-model.md
- Title: How to train your custom Text Analytics for health model-
-description: Learn about how to train your model for custom Text Analytics for health.
------ Previously updated : 04/14/2023----
-# Train your custom Text Analytics for health model
-
-Training is the process where the model learns from your [labeled data](label-data.md). After training is completed, you'll be able to view the [model's performance](view-model-evaluation.md) to determine if you need to improve your model.
-
-To train a model, you start a training job and only successfully completed jobs create a model. Training jobs expire after seven days, which means you won't be able to retrieve the job details after this time. If your training job completed successfully and a model was created, the model won't be affected. You can only have one training job running at a time, and you can't start other jobs in the same project.
-
-Training times can range from a few minutes when dealing with a few documents, up to several hours depending on the dataset size and the complexity of your schema.
--
-## Prerequisites
-
-* A successfully [created project](create-project.md) with a configured Azure blob storage account
-* Text data that [has been uploaded](design-schema.md#data-preparation) to your storage account.
-* [Labeled data](label-data.md)
-
-See the [project development lifecycle](../overview.md#project-development-lifecycle) for more information.
-
-## Data splitting
-
-Before you start the training process, labeled documents in your project are divided into a training set and a testing set. Each one of them serves a different function.
-The **training set** is used in training the model; this is the set from which the model learns the labeled entities and which spans of text are to be extracted as entities.
-The **testing set** is a blind set that is not introduced to the model during training but only during evaluation.
-After model training is completed successfully, the model is used to make predictions on the documents in the testing set, and based on these predictions, [evaluation metrics](../concepts/evaluation-metrics.md) are calculated. Model training and evaluation are only for newly defined entities with learned components; therefore, Text Analytics for health entities are excluded from model training and evaluation because they are entities with prebuilt components. It's recommended to make sure that all your labeled entities are adequately represented in both the training and testing sets.
-
-Custom Text Analytics for health supports two methods for data splitting:
-
-* **Automatically splitting the testing set from training data**: The system splits your labeled data between the training and testing sets, according to the percentages you choose. The recommended percentage split is 80% for training and 20% for testing. A sketch of how this choice appears in a training request follows this list.
-
- > [!NOTE]
- > If you choose the **Automatically splitting the testing set from training data** option, only the data assigned to training set will be split according to the percentages provided.
-
-* **Use a manual split of training and testing data**: This method enables users to define which labeled documents should belong to which set. This step is only enabled if you have added documents to your testing set during [data labeling](label-data.md).
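-
-The following sketch shows how the automatic 80/20 split could appear in a training request body. The `evaluationOptions` field names and the request path placeholder are assumptions based on the custom text authoring REST API; verify them against the authoring API reference before use.
-
-```python
-import requests
-
-training_body = {
-    "modelLabel": "model-v1",
-    "trainingConfigVersion": "latest",
-    "evaluationOptions": {
-        "kind": "percentage",           # automatic split
-        "trainingSplitPercentage": 80,
-        "testingSplitPercentage": 20,
-    },
-}
-
-response = requests.post(
-    "{ENDPOINT}/<start-training-path-from-the-request-sample>",
-    headers={"Ocp-Apim-Subscription-Key": "{RESOURCE-KEY}"},
-    json=training_body,
-)
-print(response.status_code, response.headers.get("operation-location"))
-```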
-
-## Train model
-
-# [Language studio](#tab/Language-studio)
--
-# [REST APIs](#tab/REST-APIs)
-
-### Start training job
--
-### Get training job status
-
-Training could take some time, depending on the size of your training data and the complexity of your schema. You can use the following request to keep polling the status of the training job until it's successfully completed.
-
- [!INCLUDE [get training model status](../includes/rest-api/get-training-status.md)]
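-
-As an illustration only, a small Python polling loop for this step could look like the following. It assumes the job status URL comes from the `operation-location` header of the start-training response and that the status body carries a `status` field:
-
-```python
-import time
-import requests
-
-def wait_for_training(job_status_url, key, poll_seconds=30):
-    """Poll the training job until it reaches a terminal state."""
-    while True:
-        response = requests.get(job_status_url, headers={"Ocp-Apim-Subscription-Key": key})
-        response.raise_for_status()
-        job = response.json()
-        if job.get("status") in ("succeeded", "failed", "cancelled"):
-            return job
-        time.sleep(poll_seconds)
-```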
---
-### Cancel training job
-
-# [Language Studio](#tab/language-studio)
--
-# [REST APIs](#tab/rest-api)
----
-## Next steps
-
-After training is completed, you'll be able to view the [model's performance](view-model-evaluation.md) to optionally improve your model if needed. Once you're satisfied with your model, you can deploy it, making it available to use for [extracting entities](call-api.md) from text.
cognitive-services View Model Evaluation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-text-analytics-for-health/how-to/view-model-evaluation.md
- Title: Evaluate a Custom Text Analytics for health model-
-description: Learn how to evaluate and score your Custom Text Analytics for health model
------ Previously updated : 04/14/2023-----
-# View a custom text analytics for health model's evaluation and details
-
-After your model has finished training, you can view the model performance and see the extracted entities for the documents in the test set.
-
-> [!NOTE]
-> Using the **Automatically split the testing set from training data** option may result in a different model evaluation result every time you train a new model, because the test set is selected randomly from the data. To make sure that the evaluation is calculated on the same test set every time you train a model, make sure to use the **Use a manual split of training and testing data** option when starting a training job and define your **Test** documents when [labeling data](label-data.md).
-
-## Prerequisites
-
-Before viewing model evaluation, you need:
-
-* A successfully [created project](create-project.md) with a configured Azure blob storage account.
-* Text data that [has been uploaded](design-schema.md#data-preparation) to your storage account.
-* [Labeled data](label-data.md)
-* A [successfully trained model](train-model.md)
--
-## Model details
-
-There are several metrics you can use to evaluate your model. See the [performance metrics](../concepts/evaluation-metrics.md) article for more information on the model details described in this article.
-
-### [Language studio](#tab/language-studio)
--
-### [REST APIs](#tab/rest-api)
----
-## Load or export model data
-
-### [Language studio](#tab/Language-studio)
---
-### [REST APIs](#tab/REST-APIs)
----
-## Delete model
-
-### [Language studio](#tab/language-studio)
--
-### [REST APIs](#tab/rest-api)
----
-## Next steps
-
-* [Deploy your model](deploy-model.md)
-* Learn about the [metrics used in evaluation](../concepts/evaluation-metrics.md).
cognitive-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-text-analytics-for-health/language-support.md
- Title: Language and region support for custom Text Analytics for health-
-description: Learn about the languages and regions supported by custom Text Analytics for health
------ Previously updated : 04/14/2023----
-# Language support for custom text analytics for health
-
-Use this article to learn about the languages currently supported by custom Text Analytics for health.
-
-## Multilingual option
-
-With custom Text Analytics for health, you can train a model in one language and use it to extract entities from documents in other languages. This feature saves you the trouble of building separate projects for each language by letting you combine your datasets in a single project, making it easy to scale your projects to multiple languages. You can train your project entirely with English documents, and query it in: French, German, Italian, and others. You can enable the multilingual option as part of the project creation process or later through the project settings.
-
-You aren't expected to add the same number of documents for every language. You should build the majority of your project in one language, and only add a few documents in languages you observe aren't performing well. If you create a project that is primarily in English, and start testing it in French, German, and Spanish, you might observe that German doesn't perform as well as the other two languages. In that case, consider adding 5% of your original English documents in German, training a new model, and testing in German again. In the [data labeling](how-to/label-data.md) page in Language Studio, you can select the language of the document you're adding. You should see better results for German queries. The more labeled documents you add, the more likely the results are to improve. When you add data in another language, you shouldn't expect it to negatively affect other languages.
-
-Hebrew is not supported in multilingual projects. If the primary language of the project is Hebrew, you will not be able to add training data in other languages, or query the model with other languages. Similarly, if the primary language of the project is not Hebrew, you will not be able to add training data in Hebrew, or query the model in Hebrew.
-
-## Language support
-
-Custom Text Analytics for health supports `.txt` files in the following languages:
-
-| Language | Language code |
-| | |
-| English | `en` |
-| French | `fr` |
-| German | `de` |
-| Spanish | `es` |
-| Italian | `it` |
-| Portuguese (Portugal) | `pt-pt` |
-| Hebrew | `he` |
--
-## Next steps
-
-* [Custom Text Analytics for health overview](overview.md)
-* [Service limits](reference/service-limits.md)
cognitive-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-text-analytics-for-health/overview.md
- Title: Custom Text Analytics for health - Azure Cognitive Services-
-description: Customize an AI model to label and extract healthcare information from documents using Azure Cognitive Services.
------ Previously updated : 04/14/2023----
-# What is custom Text Analytics for health?
-
-Custom Text Analytics for health is one of the custom features offered by [Azure Cognitive Service for Language](../overview.md). It is a cloud-based API service that applies machine-learning intelligence to enable you to build custom models on top of [Text Analytics for health](../text-analytics-for-health/overview.md) for custom healthcare entity recognition tasks.
-
-Custom Text Analytics for health enables users to build custom AI models to extract healthcare specific entities from unstructured text, such as clinical notes and reports. By creating a custom Text Analytics for health project, developers can iteratively define new vocabulary, label data, train, evaluate, and improve model performance before making it available for consumption. The quality of the labeled data greatly impacts model performance. To simplify building and customizing your model, the service offers a web portal that can be accessed through the [Language studio](https://aka.ms/languageStudio). You can easily get started with the service by following the steps in this [quickstart](quickstart.md).
-
-This documentation contains the following article types:
-
-* [Quickstarts](quickstart.md) are getting-started instructions to guide you through making requests to the service.
-* [Concepts](concepts/evaluation-metrics.md) provide explanations of the service functionality and features.
-* [How-to guides](how-to/label-data.md) contain instructions for using the service in more specific or customized ways.
-
-## Example usage scenarios
-
-Similarly to Text Analytics for health, custom Text Analytics for health can be used in multiple [scenarios](../text-analytics-for-health/overview.md#example-use-cases) across a variety of healthcare industries. However, the main usage of this feature is to provide a layer of customization on top of Text Analytics for health to extend its existing entity map.
--
-## Project development lifecycle
-
-Using custom Text Analytics for health typically involves several different steps.
--
-* **Define your schema**: Know your data and define the new entities you want extracted on top of the existing Text Analytics for health entity map. Avoid ambiguity.
-
-* **Label your data**: Labeling data is a key factor in determining model performance. Label precisely, consistently and completely.
- * **Label precisely**: Always label each entity with its correct type. Only include what you want extracted, and avoid unnecessary data in your labels.
- * **Label consistently**: The same entity should have the same label across all the files.
- * **Label completely**: Label all the instances of the entity in all your files.
-
-* **Train the model**: Your model starts learning from your labeled data.
-
-* **View the model's performance**: After training is completed, view the model's evaluation details, its performance and guidance on how to improve it.
-
-* **Deploy the model**: Deploying a model makes it available for use via an API.
-
-* **Extract entities**: Use your custom models for entity extraction tasks.
-
-## Reference documentation and code samples
-
-As you use custom Text Analytics for health, see the following reference documentation for Azure Cognitive Services for Language:
-
-|APIs| Reference documentation|
-||||
-|REST APIs (Authoring) | [REST API documentation](/rest/api/language/2022-10-01-preview/text-analysis-authoring) |
-|REST APIs (Runtime) | [REST API documentation](/rest/api/language/2022-10-01-preview/text-analysis-runtime/submit-job) |
--
-## Responsible AI
-
-An AI system includes not only the technology, but also the people who will use it, the people who will be affected by it, and the environment in which it is deployed. Read the [transparency note for Text Analytics for health](/legal/cognitive-services/language-service/transparency-note-health?context=/azure/cognitive-services/language-service/context/context) to learn about responsible AI use and deployment in your systems. You can also see the following articles for more information:
---
-## Next steps
-
-* Use the [quickstart article](quickstart.md) to start using custom Text Analytics for health.
-
-* As you go through the project development lifecycle, review the glossary to learn more about the terms used throughout the documentation for this feature.
-
-* Remember to view the [service limits](reference/service-limits.md) for information such as [regional availability](reference/service-limits.md#regional-availability).
cognitive-services Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-text-analytics-for-health/quickstart.md
- Title: Quickstart - Custom Text Analytics for health (Custom TA4H)-
-description: Quickly start building an AI model to categorize and extract information from healthcare unstructured text.
------ Previously updated : 04/14/2023--
-zone_pivot_groups: usage-custom-language-features
--
-# Quickstart: custom Text Analytics for health
-
-Use this article to get started with creating a custom Text Analytics for health project where you can train custom models on top of Text Analytics for health for custom entity recognition. A model is artificial intelligence software that's trained to do a certain task. For this system, the models extract healthcare related named entities and are trained by learning from labeled data.
-
-In this article, we use Language Studio to demonstrate key concepts of custom Text Analytics for health. As an example, we'll build a custom Text Analytics for health model to extract the Facility or treatment location from short discharge notes.
-------
-## Next steps
-
-* [Text analytics for health overview](./overview.md)
-
-After you've created an entity extraction model, you can:
-
-* [Use the runtime API to extract entities](how-to/call-api.md)
-
-When you start to create your own custom Text Analytics for health projects, use the how-to articles to learn more about data labeling, training and consuming your model in greater detail:
-
-* [Data selection and schema design](how-to/design-schema.md)
-* [Tag data](how-to/label-data.md)
-* [Train a model](how-to/train-model.md)
-* [Model evaluation](how-to/view-model-evaluation.md)
-
cognitive-services Glossary https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-text-analytics-for-health/reference/glossary.md
- Title: Definitions used in custom Text Analytics for health-
-description: Learn about definitions used in custom Text Analytics for health
------ Previously updated : 04/14/2023----
-# Terms and definitions used in custom Text Analytics for health
-
-Use this article to learn about some of the definitions and terms you may encounter when using Custom Text Analytics for health.
-
-## Entity
-Entities are words in input data that describe information relating to a specific category or concept. If your entity is complex and you would like your model to identify specific parts, you can break your entity into subentities. For example, you might want your model to predict an address, but also the subentities of street, city, state, and zipcode.
-
-## F1 score
-The F1 score is a function of Precision and Recall. It's needed when you seek a balance between [precision](#precision) and [recall](#recall).
-
-## Prebuilt entity component
-
-Prebuilt entity components represent pretrained entity components that belong to the [Text Analytics for health entity map](../../text-analytics-for-health/concepts/health-entity-categories.md). These entities are automatically loaded into your project as entities with prebuilt components. You can define list components for entities with prebuilt components but you cannot add learned components. Similarly, you can create new entities with learned and list components, but you cannot populate them with additional prebuilt components.
--
-## Learned entity component
-
-The learned entity component uses the entity tags you label your text with to train a machine learned model. The model learns to predict where the entity is, based on the context within the text. Your labels provide examples of where the entity is expected to be present in text, based on the meaning of the words around it and the words that were labeled. This component is only defined if you add labels by labeling your data for the entity. If you do not label any data with the entity, it will not have a learned component. Learned components cannot be added to entities with prebuilt components.
-
-## List entity component
-A list entity component represents a fixed, closed set of related words along with their synonyms. List entities are exact matches, unlike machine learned entities.
-
-The entity will be predicted if a word in the input data matches an entry in the list. For example, if you have a list entity called "clinics" and you have the words "clinic a, clinic b, clinic c" in the list, then the clinics entity will be predicted for all instances of the input data where "clinic a, clinic b, clinic c" are used, regardless of the context. List components can be added to all entities regardless of whether they are prebuilt or newly defined.
-
-## Model
-A model is an object that's trained to do a certain task, in this case custom Text Analytics for health models perform all the features of Text Analytics for health in addition to custom entity extraction for the user's defined entities. Models are trained by providing labeled data to learn from so they can later be used to understand context from the input text.
-
-* **Model evaluation** is the process that happens right after training to determine how well your model performs.
-* **Deployment** is the process of assigning your model to a deployment to make it available for use via the [prediction API](https://aka.ms/ct-runtime-swagger).
-
-## Overfitting
-
-Overfitting happens when the model is fixated on the specific examples and is not able to generalize well.
-
-## Precision
-Measures how precise/accurate your model is. It's the ratio between the correctly identified positives (true positives) and all identified positives. The precision metric reveals how many of the predicted entities are correctly labeled.
-
-## Project
-A project is a work area for building your custom ML models based on your data. Your project can only be accessed by you and others who have access to the Azure resource being used.
-
-## Recall
-Measures the model's ability to predict actual positive entities. It's the ratio between the predicted true positives and what was actually labeled. The recall metric reveals how many of the predicted entities are correct.
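-
-For reference, precision, recall, and the F1 score are computed from true positives (TP), false positives (FP), and false negatives (FN) in the standard way:
-
-```latex
-\text{Precision} = \frac{TP}{TP + FP}, \qquad
-\text{Recall} = \frac{TP}{TP + FN}, \qquad
-F_1 = 2 \cdot \frac{\text{Precision} \cdot \text{Recall}}{\text{Precision} + \text{Recall}}
-```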
--
-## Schema
-Schema is defined as the combination of entities within your project. Schema design is a crucial part of your project's success. When creating a schema, you want to think about which new entities you should add to your project to extend the existing [Text Analytics for health entity map](../../text-analytics-for-health/concepts/health-entity-categories.md), and which new vocabulary you should add to the prebuilt entities using list components to enhance their recall. For example, adding a new entity for patient name, or extending the prebuilt entity "Medication Name" with a new research drug (for example, research drug A).
-
-## Training data
-Training data is the set of information that is needed to train a model.
--
-## Next steps
-
-* [Data and service limits](service-limits.md).
-
cognitive-services Service Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-text-analytics-for-health/reference/service-limits.md
- Title: Custom Text Analytics for health service limits-
-description: Learn about the data and service limits when using Custom Text Analytics for health.
------ Previously updated : 04/14/2023----
-# Custom Text Analytics for health service limits
-
-Use this article to learn about the data and service limits when using custom Text Analytics for health.
-
-## Language resource limits
-
-* Your Language resource has to be created in one of the [supported regions](#regional-availability).
-
-* Your resource must be one of the supported pricing tiers:
-
- |Tier|Description|Limit|
- |--|--|--|
- |S |Paid tier|You can have unlimited Language S tier resources per subscription. |
-
-
-* You can only connect one storage account per resource. This process is irreversible. If you connect a storage account to your resource, you cannot unlink it later. Learn more about [connecting a storage account](../how-to/create-project.md#create-language-resource-and-connect-storage-account)
-
-* You can have up to 500 projects per resource.
-
-* Project names have to be unique within the same resource across all custom features.
-
-## Regional availability
-
-Custom Text Analytics for health is only available in some Azure regions since it is a preview service. Some regions may be available for **both authoring and prediction**, while other regions may be for **prediction only**. Language resources in authoring regions allow you to create, edit, train, and deploy your projects. Language resources in prediction regions allow you to get predictions from a deployment.
-
-| Region | Authoring | Prediction |
-|--|--|-|
-| East US | ✓ | ✓ |
-| UK South | ✓ | ✓ |
-| North Europe | ✓ | ✓ |
-
-## API limits
-
-|Item|Request type| Maximum limit|
-|:-|:-|:-|
-|Authoring API|POST|10 per minute|
-|Authoring API|GET|100 per minute|
-|Prediction API|GET/POST|1,000 per minute|
-|Document size|--|125,000 characters. You can send up to 20 documents as long as they collectively do not exceed 125,000 characters|
-
-> [!TIP]
-> If you need to send larger files than the limit allows, you can break the text into smaller chunks of text before sending them to the API. You can use the [chunk command from CLUtils](https://github.com/microsoft/CognitiveServicesLanguageUtilities/blob/main/CustomTextAnalytics.CLUtils/Solution/CogSLanguageUtilities.ViewLayer.CliCommands/Commands/ChunkCommand/README.md) for this process.
-
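-As a simple illustration (assuming plain character-based splitting is acceptable for your data; the CLUtils chunk command linked above is the supported tool), text can be broken into chunks under the document limit like this:
-
-```python
-def chunk_text(text, max_chars=125_000):
-    """Split text into pieces no longer than max_chars, breaking on whitespace when possible."""
-    chunks = []
-    while len(text) > max_chars:
-        split_at = text.rfind(" ", 0, max_chars)
-        if split_at <= 0:
-            split_at = max_chars
-        chunks.append(text[:split_at])
-        text = text[split_at:].lstrip()
-    if text:
-        chunks.append(text)
-    return chunks
-```
-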
-## Quota limits
-
-|Pricing tier |Item |Limit |
-| | | |
-|S|Training time| Unlimited, free |
-|S|Prediction Calls| 5,000 text records for free per language resource|
-
-## Document limits
-
-* You can only use `.txt`. files. If your data is in another format, you can use the [CLUtils parse command](https://github.com/microsoft/CognitiveServicesLanguageUtilities/blob/main/CustomTextAnalytics.CLUtils/Solution/CogSLanguageUtilities.ViewLayer.CliCommands/Commands/ParseCommand/README.md) to open your document and extract the text.
-
-* All files uploaded in your container must contain data. Empty files are not allowed for training.
-
-* All files should be available at the root of your container.
-
-## Data limits
-
-The following limits are observed for authoring.
-
-|Item|Lower Limit| Upper Limit |
-| | | |
-|Documents count | 10 | 100,000 |
-|Document length in characters | 1 | 128,000 characters; approximately 28,000 words or 56 pages. |
-|Count of entity types | 1 | 200 |
-|Entity length in characters | 1 | 500 |
-|Count of trained models per project| 0 | 10 |
-|Count of deployments per project| 0 | 10 |
-
-## Naming limits
-
-| Item | Limits |
-|--|--|
-| Project name | You can only use letters `(a-z, A-Z)`, numbers `(0-9)`, and symbols `_ . -`, with no spaces. Maximum allowed length is 50 characters. |
-| Model name | You can only use letters `(a-z, A-Z)`, numbers `(0-9)` and symbols `_ . -`. Maximum allowed length is 50 characters. |
-| Deployment name | You can only use letters `(a-z, A-Z)`, numbers `(0-9)` and symbols `_ . -`. Maximum allowed length is 50 characters. |
-| Entity name| You can only use letters `(a-z, A-Z)`, numbers `(0-9)` and all symbols except ":", `$ & % * ( ) + ~ # / ?`. Maximum allowed length is 50 characters. See the supported [data format](../concepts/data-formats.md#entity-naming-rules) for more information on entity names when importing a labels file. |
-| Document name | You can only use letters `(a-z, A-Z)`, and numbers `(0-9)` with no spaces. |
--
-## Next steps
-
-* [Custom text analytics for health overview](../overview.md)
cognitive-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/overview.md
The Language service also provides several new features as well, which can eithe
:::column-end::: :::row-end:::
-### Custom text analytics for health
-
- :::column span="":::
- :::image type="content" source="text-analytics-for-health/media/call-api/health-named-entity-recognition.png" alt-text="A screenshot of a custom text analytics for health example." lightbox="text-analytics-for-health/media/call-api/health-named-entity-recognition.png":::
- :::column-end:::
- :::column span="":::
- [Custom text analytics for health](./custom-text-analytics-for-health/overview.md) is a custom feature that extracts healthcare-specific entities from unstructured text, using a model you create.
- :::column-end:::
## Which Language service feature should I use?
This section will help you decide which Language service feature you should use
| Disambiguate entities and get links to Wikipedia. | Unstructured text | [Entity linking](./entity-linking/overview.md) | | | Classify documents into one or more categories. | Unstructured text | [Custom text classification](./custom-text-classification/overview.md) | ✓| | Extract medical information from clinical/medical documents, without building a model. | Unstructured text | [Text analytics for health](./text-analytics-for-health/overview.md) | |
-| Extract medical information from clinical/medical documents using a model that's trained on your data. | Unstructured text | [Custom text analytics for health](./custom-text-analytics-for-health/overview.md) | |
| Build a conversational application that responds to user inputs. | Unstructured user inputs | [Question answering](./question-answering/overview.md) | ✓ | | Detect the language that a text was written in. | Unstructured text | [Language detection](./language-detection/overview.md) | | | Predict the intention of user inputs and extract information from them. | Unstructured user inputs | [Conversational language understanding](./conversational-language-understanding/overview.md) | ✓ |
cognitive-services Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/whats-new.md
Azure Cognitive Service for Language is updated on an ongoing basis. To stay up-
## April 2023
-* [Custom Text analytics for health](./custom-text-analytics-for-health/overview.md) is available in public preview, which enables you to build custom AI models to extract healthcare specific entities from unstructured text
* You can now use Azure OpenAI to automatically label or generate data during authoring. Learn more with the links below. * Auto-label your documents in [Custom text classification](./custom-text-classification/how-to/use-autolabeling.md) or [Custom named entity recognition](./custom-named-entity-recognition/how-to/use-autolabeling.md). * Generate suggested utterances in [Conversational language understanding](./conversational-language-understanding/how-to/tag-utterances.md#suggest-utterances-with-azure-openai).
cognitive-services Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/openai/concepts/models.md
These models can be used with Completion API requests. `gpt-35-turbo` is the onl
| text-davinci-002 | East US, South Central US, West Europe | N/A | 4,097 | Jun 2021 | | text-davinci-003 | East US, West Europe | N/A | 4,097 | Jun 2021 | | text-davinci-fine-tune-002<sup>1</sup> | N/A | Currently unavailable | | |
-| gpt-35-turbo<sup>3</sup> (ChatGPT) (preview) | East US, South Central US | N/A | 4,096 | Sep 2021 |
+| gpt-35-turbo<sup>3</sup> (ChatGPT) (preview) | East US, South Central US, West Europe | N/A | 4,096 | Sep 2021 |
<sup>1</sup> The model is available by request only. Currently we aren't accepting new requests to use the model. <br><sup>2</sup> East US was previously available, but due to high demand this region is currently unavailable for new customers to use for fine-tuning. Please use the South Central US, and West Europe regions for fine-tuning.
cognitive-services Encrypt Data At Rest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/openai/encrypt-data-at-rest.md
az keyvault key delete \
### Delete fine-tuned models and deployments
-The Fine-tunes API allows customers to create their own fine-tuned version of the OpenAI models based on the training data that you've uploaded to the service via the Files APIs. The trained fine-tuned models are stored in Azure Storage in the same region, encrypted at rest and logically isolated with their Azure subscription and API credentials. Fine-tuned models and deployments can be deleted by the user by calling the [DELETE API operation](./how-to/fine-tuning.md?pivots=programming-language-python#delete-your-model-deployment).
+The Fine-tunes API allows customers to create their own fine-tuned version of the OpenAI models based on the training data that you've uploaded to the service via the Files APIs. The trained fine-tuned models are stored in Azure Storage in the same region, encrypted at rest (either with Microsoft-managed keys or customer-managed keys) and logically isolated with their Azure subscription and API credentials. Fine-tuned models and deployments can be deleted by the user by calling the [DELETE API operation](./how-to/fine-tuning.md?pivots=programming-language-python#delete-your-model-deployment).
## Disable customer-managed keys
When you previously enabled customer managed keys this also enabled a system ass
## Next steps * [Language service Customer-Managed Key Request Form](https://aka.ms/cogsvc-cmk)
-* [Learn more about Azure Key Vault](../../key-vault/general/overview.md)
+* [Learn more about Azure Key Vault](../../key-vault/general/overview.md)
cognitive-services Embeddings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/openai/how-to/embeddings.md
response = openai.Embedding.create(
engine="YOUR_DEPLOYMENT_NAME" ) embeddings = response['data'][0]['embedding']
+print(embeddings)
```
cognitive-services Quotas Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/openai/quotas-limits.md
The following sections provide you with a quick guide to the quotas and limits t
| Limit Name | Limit Value | |--|--|
-| OpenAI resources per region | 2 |
+| OpenAI resources per region within Azure subscription | 2 |
| Requests per minute per model* | Davinci-models (002 and later): 120 <br> ChatGPT model (preview): 300 <br> GPT-4 models (preview): 18 <br> All other models: 300 | | Tokens per minute per model* | Davinci-models (002 and later): 40,000 <br> ChatGPT model: 120,000 <br> All other models: 120,000 | | Max fine-tuned model deployments* | 2 |
communication-services Capabilities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/interop/guest/capabilities.md
When Teams external users leave the meeting, or the meeting ends, they can no lo
*Azure Communication Services provides developers tools to integrate Microsoft Teams Data Loss Prevention that is compatible with Microsoft Teams. For more information, go to [how to implement Data Loss Prevention (DLP)](../../../how-tos/chat-sdk/data-loss-prevention.md)
-**Inline image support is currently in public preview and is available in the Chat SDK for JavaScript only. Preview APIs and SDKs are provided without a service-level agreement. We recommend that you don't use them for production workloads. Some features might not be supported, or they might have constrained capabilities. For more information, review [Supplemental Terms of Use for Microsoft Azure Previews.](https://azure.microsoft.com/support/legal/preview-supplemental-terms/)
+**Inline images are images that are copied and pasted directly into the send box of the Teams client. Images that were uploaded via the "Upload from this device" menu or via drag-and-drop (such as dragging images directly to the send box) in Teams aren't supported at this moment. To copy an image, the Teams user can either use their operating system's context menu to copy the image file and then paste it into the send box of their Teams client, or use keyboard shortcuts instead.
-**If the Teams external user sends a message with images uploaded via "Upload from this device" menu or via drag-and-drop (such as dragging images directly to the send box) in the Teams, then these scenarios would be covered under the file sharing capability, which is currently not supported.
+**Inline image support is currently in public preview and is available in the Chat SDK for JavaScript only. Preview APIs and SDKs are provided without a service-level agreement. We recommend that you don't use them for production workloads. Some features might not be supported, or they might have constrained capabilities. For more information, review [Supplemental Terms of Use for Microsoft Azure Previews.](https://azure.microsoft.com/support/legal/preview-supplemental-terms/)
## Server capabilities
communication-services Spotlight https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/how-tos/calling-sdk/spotlight.md
Last updated 03/01/2023 - # Spotlight states
In this article, you'll learn how to implement Microsoft Teams spotlight capabil
Since the video stream resolution of a participant is increased when spotlighted, it should be noted that the settings done on [Video Constraints](../../concepts/voice-video-calling/video-constraints.md) also apply to spotlight. - ## Prerequisites - An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
Communication Services or Microsoft 365 users can call the spotlight APIs based
| stopAllSpotlight | ✔️ | ✔️ | | | getSpotlightedParticipants | ✔️ | ✔️ | ✔️ | ## Next steps - [Learn how to manage calls](./manage-calls.md)
communication-services Meeting Interop Features Inline Image https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/tutorials/chat-interop/meeting-interop-features-inline-image.md
## Add inline image support The Chat SDK is designed to work with Microsoft Teams seamlessly. Specifically, Chat SDK provides a solution to receive inline images sent by users from Microsoft Teams. Currently this feature is only available in the Chat SDK for JavaScript.
+The Chat SDK for JavaScript provides `previewUrl` and `url` for each inline image. Note that some GIF images fetched from `previewUrl` might not be animated, and a static preview image is returned instead. Developers should use `url` if the intention is to fetch animated images.
+ [!INCLUDE [Public Preview Notice](../../includes/public-preview-include.md)] [!INCLUDE [Teams Inline Image Interop with JavaScript SDK](./includes/meeting-interop-features-inline-image-javascript.md)]
communication-services Proxy Calling Support Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/tutorials/proxy-calling-support-tutorial.md
+
+ Title: Tutorial - Proxy your ACS calling traffic across your own servers
+
+description: Learn how to have your media and signaling traffic be proxied to servers that you can control.
++++ Last updated : 04/20/2023+++++
+# How to force calling traffic to be proxied across your own server
+
+In certain situations, it might be useful to have all your client traffic proxied to a server that you can control. When the SDK is initializing, you can provide the details of the servers that you would like the traffic to route to. Once enabled, all the media traffic (audio/video/screen sharing) travels through the provided TURN servers instead of the Azure Communication Services defaults. This tutorial guides you through having WebJS SDK calling traffic proxied to servers that you control.
+
+>[!IMPORTANT]
+> The proxy feature is available starting in the public preview version [1.13.0-beta.4](https://www.npmjs.com/package/@azure/communication-calling/v/1.13.0-beta.4) of the Calling SDK. Please ensure that you use this or a newer SDK when trying to use this feature. This Quickstart uses the Azure Communication Services Calling SDK version greater than `1.13.0`.
+
+## Proxy calling media traffic
+
+## What is a TURN server?
+Many times, establishing a network connection between two peers isn't straightforward. A direct connection might not work because of many reasons: firewalls with strict rules, peers sitting behind a private network, or computers running in a NAT environment. To solve these network connection issues, you can use a TURN server. The term stands for Traversal Using Relays around NAT, and it's a protocol for relaying network traffic. STUN and TURN servers are the relay servers here. Learn more about how ACS [mitigates](../concepts/network-traversal.md) network challenges by utilizing STUN and TURN.
+
+### Provide your TURN servers details to the SDK
+To provide the details of your TURN servers, you need to pass details of which TURN servers to use as part of `CallClientOptions` while initializing the `CallClient`. For more information on how to set up a call, see the [Azure Communication Services Web SDK](../quickstarts/voice-video-calling/get-started-with-video-calling.md?pivots=platform-web) Quickstart on how to set up voice and video.
+
+```js
+import { CallClient } from '@azure/communication-calling';
+
+const myTurn1 = {
+ urls: [
+ 'turn:turn.azure.com:3478?transport=udp',
+ 'turn:turn1.azure.com:3478?transport=udp',
+ ],
+ username: 'turnserver1username',
+ credential: 'turnserver1credentialorpass'
+};
+
+const myTurn2 = {
+ urls: [
+ 'turn:20.202.255.255:3478',
+ 'turn:20.202.255.255:3478?transport=tcp',
+ ],
+ username: 'turnserver2username',
+ credential: 'turnserver2credentialorpass'
+};
+
+// While you are creating an instance of the CallClient (the entry point of the SDK):
+const callClient = new CallClient({
+ networkConfiguration: {
+ turn: {
+ iceServers: [
+ myTurn1,
+ myTurn2
+ ]
+ }
+ }
+});
++++
+// ...continue normally with your SDK setup and usage.
+```
+
+> [!IMPORTANT]
+> Note that if you have provided your TURN server details while initializing the `CallClient`, all the media traffic will <i>exclusively</i> flow through these TURN servers. Any other ICE candidates that are normally generated when creating a call won't be considered while trying to establish connectivity between peers; that is, only 'relay' candidates will be considered. More about the different types of ICE candidates can be found [here](https://developer.mozilla.org/en-US/docs/Web/API/RTCIceCandidate/type).
+
+> [!NOTE]
+> If the '?transport' query parameter is not present as part of the TURN URL, or is not one of the values 'udp', 'tcp', or 'tls', the default behavior will be UDP.
+
+> [!NOTE]
+> If any of the URLs provided are invalid or don't have one of these schemas - 'turn:', 'turns:', 'stun:', the `CallClient` initialization will fail and will throw errors accordingly. The error messages thrown should help you troubleshoot if you run into issues.
+
+The API reference for the `CallClientOptions` object, and the `networkConfiguration` property within it can be found here - [CallClientOptions](/javascript/api/azure-communication-services/@azure/communication-calling/callclientoptions?view=azure-communication-services-js&preserve-view=true).
+
+### Set up a TURN server in Azure
+You can create a Linux virtual machine in the Azure portal using this [guide](/azure/virtual-machines/linux/quick-create-portal?tabs=ubuntu), and deploy a TURN server using [coturn](https://github.com/coturn/coturn), a free and open source implementation of a TURN and STUN server for VoIP and WebRTC.
+
+Once you have setup a TURN server, you can test it using the WebRTC Trickle ICE page - [Trickle ICE](https://webrtc.github.io/samples/src/content/peerconnection/trickle-ice/).
+
+## Proxy signaling traffic
+
+To provide the URL of a proxy server, you need to pass it in as part of `CallClientOptions` while initializing the `CallClient`. For more details on how to set up a call, see the [Azure Communication Services Web SDK](../quickstarts/voice-video-calling/get-started-with-video-calling.md?pivots=platform-web) Quickstart on how to set up voice and video.
+
+```js
+import { CallClient } from '@azure/communication-calling';
+
+// While you are creating an instance of the CallClient (the entry point of the SDK):
+const callClient = new CallClient({
+ networkConfiguration: {
+ proxy: {
+ url: 'https://myproxyserver.com'
+ }
+ }
+});
+
+// ...continue normally with your SDK setup and usage.
+```
+
+> [!NOTE]
+> If the proxy URL provided is an invalid URL, the `CallClient` initialization will fail and will throw errors accordingly. The error messages thrown will help you troubleshoot if you run into issues.
+
+The API reference for the `CallClientOptions` object, and the `networkConfiguration` property within it can be found here - [CallClientOptions](/javascript/api/azure-communication-services/@azure/communication-calling/callclientoptions?view=azure-communication-services-js&preserve-view=true).
+
+### Setting up a signaling proxy middleware in express js
+
+You can also create a proxy middleware in your express js server setup to have all the URLs redirected through it, using the [http-proxy-middleware](https://www.npmjs.com/package/http-proxy-middleware) npm package.
+The `createProxyMiddleware` function from that package should cover what you need for a simple redirect proxy setup. Here's an example usage of it, with some option settings that the SDK needs so that all of our URLs work as expected:
+
+```js
+const proxyRouter = (req) => {
+  // Your router function if you don't intend to set up a direct target
+
+ // An example:
+ if (!req.originalUrl && !req.url) {
+ return '';
+ }
+
+ const incomingUrl = req.originalUrl || req.url;
+ if (incomingUrl.includes('/proxy')) {
+ return 'https://microsoft.com/forwarder/';
+ }
+
+ return incomingUrl;
+}
+
+const myProxyMiddleware = createProxyMiddleware({
+  target: 'https://microsoft.com', // This will be ignored if a router function is provided, but createProxyMiddleware still requires this to be passed in (see its official docs on the npm page for the most recent changes)
+ router: proxyRouter,
+ changeOrigin: true,
+ secure: false, // If you have proper SSL setup, set this accordingly
+ followRedirects: true,
+ ignorePath: true,
+ ws: true,
+ logLevel: 'debug'
+});
+
+// And finally pass in your proxy middleware to your express app depending on your URL/host setup
+app.use('/proxy', myProxyMiddleware);
+```
+
+> [!Tip]
+> If you are having SSL issues, check out the [cors](https://www.npmjs.com/package/cors) package.
+
+### Setting up a signaling proxy server on Azure
+You can create a Linux virtual machine in the Azure portal and deploy an NGINX server on it using this guide - [Quickstart: Create a Linux virtual machine in the Azure portal](/azure/virtual-machines/linux/quick-create-portal?tabs=ubuntu).
+
+Here's an NGINX config that you could make use of for a quick spin up:
+```
+events {
+ multi_accept on;
+ worker_connections 65535;
+}
+
+http {
+ map $http_upgrade $connection_upgrade {
+ default upgrade;
+ '' close;
+ }
+
+ server {
+ listen <port_you_want_listen_on> ssl;
+ ssl_certificate <path_to_your_ssl_cert>;
+ ssl_certificate_key <path_to_your_ssl_key>;
+ location ~* ^/(.*\.(com)(?::[\d]+)?)/(.*)$ {
+ resolver 8.8.8.8;
+ set $ups_host $1;
+ set $r_uri $3;
+ rewrite ^/.*$ /$r_uri break;
+ proxy_set_header Host $ups_host;
+ proxy_ssl_server_name on;
+ proxy_ssl_protocols TLSv1.2;
+ proxy_ssl_ciphers DEFAULT;
+ proxy_set_header X-Real-IP $remote_addr;
+ proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
+ proxy_pass_header Authorization;
+ proxy_http_version 1.1;
+ proxy_set_header Upgrade $http_upgrade;
+ proxy_set_header Connection $connection_upgrade;
+ proxy_set_header Proxy "";
+ proxy_pass https://$ups_host;
+ proxy_redirect https://$ups_host https://$host/$ups_host;
+ proxy_intercept_errors on;
+        error_page 301 302 307 = @handle_redirect;
+        error_page 400 405 = @handle_error_response;
+ if ($request_method = 'OPTIONS') {
+ add_header Access-Control-Allow-Origin * always;
+ }
+ }
+ location @handle_redirect {
+ set $saved_redirect_location '$upstream_http_location';
+ resolver 8.8.8.8;
+ proxy_pass $saved_redirect_location;
+ add_header X-DBUG-MSG "301" always;
+ }
+ location @handle_error_response {
+ add_header Access-Control-Allow-Origin * always;
+ }
+ }
+}
+```
++
container-instances Confidential Containers Attestation Concepts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-instances/confidential-containers-attestation-concepts.md
+
+ Title: Attestation in Confidential containers on Azure Containers Instances
+description: full attestation of container groups in confidential containers on Azure Container Instances
+++++ Last updated : 04/20/2023++
+# What is attestation?
+
+Attestation is an essential part of confidential computing and appears in the definition by the Confidential Computing Consortium: "Confidential Computing is the protection of data in use by performing computation in a hardware-based, attested Trusted Execution Environment."
+
+According to the [Remote ATtestation procedureS (RATS) Architecture](https://www.ietf.org/rfc/rfc9334.html), in remote attestation, "one peer (the "Attester") produces believable information about itself ("Evidence") to enable a remote peer (the "Relying Party") to decide whether to consider that Attester a trustworthy peer. Remote attestation procedures are facilitated by an additional vital party (the "Verifier")." In simpler terms, attestation is a way of proving that a computer system is trustworthy.
+
+In Confidential Containers on ACI, you can use an attestation token to verify that the container group:
+
+- Is running on confidential computing hardware (in this case, AMD SEV-SNP).
+- Is running on an Azure-compliant utility VM.
+- Is enforcing the expected confidential computing enforcement (CCE) policy that was generated using [tooling](https://github.com/Azure/azure-cli-extensions/blob/main/src/confcom/azext_confcom/README.md).
+
+## Full attestation in confidential containers on Azure Container Instances
+
+Expanding on this concept, full attestation captures all the components of the Trusted Execution Environment that are remotely verifiable. To achieve full attestation in Confidential Containers, we have introduced the notion of a CCE policy, which defines a set of rules that is enforced in the utility VM. The security policy is encoded in the attestation report as an SHA-256 digest stored in the HostData attribute, as provided to the PSP by the host operating system during VM boot-up. This means that the security policy enforced by the utility VM is immutable throughout the lifetime of the utility VM.
+
+The exhaustive list of attributes that are part of the SEV-SNP attestation can be found [here](https://www.amd.com/system/files/TechDocs/SEV-SNP%20PSP%20API%20Specification.pdf).
+
+Here are some important claims to consider in an attestation token returned by [Microsoft Azure Attestation (MAA)](../attestation/overview.md):
+
+| Claim | Sample value | Description |
+||-|-|
+| x-ms-attestation-type | sevsnpvm | String value that describes the attestation type. In this scenario, it indicates SEV-SNP hardware. |
+| x-ms-compliance-status | azure-compliant-uvm | Compliance status of the utility VM that runs the container group. |
+| x-ms-sevsnpvm-hostdata | 670fff86714a650a49b58fadc1e90fedae0eb32dd51e34931c1e7a1839c08f6f | Hash of the CCE policy that was generated during deployment. |
+| x-ms-sevsnpvm-is-debuggable | false | Flag to indicate whether the underlying hardware is running in debug mode |
+
+## Sample attestation token generated by MAA
+
+```json
+{
+ "header": {
+ "alg": "RS256",
+ "jku": "https://sharedeus2.eus2.test.attest.azure.net/certs",
+ "kid": "3bdCYJabzfhISFtb3J8yuEESZwufV7hhh08N3ZflAuE=",
+ "typ": "JWT"
+ },
+ "payload": {
+ "exp": 1680259997,
+ "iat": 1680231197,
+ "iss": "https://sharedeus2.eus2.test.attest.azure.net",
+ "jti": "d288fef5880b1501ea70be1b9366840fd56f74e666a23224d6de113133cbd8d5",
+ "nbf": 1680231197,
+ "nonce": "3413764049005270139",
+ "x-ms-attestation-type": "sevsnpvm",
+ "x-ms-compliance-status": "azure-compliant-uvm",
+ "x-ms-policy-hash": "9NY0VnTQ-IiBriBplVUpFbczcDaEBUwsiFYAzHu_gco",
+ "x-ms-runtime": {
+ "keys": [
+ {
+ "e": "AQAB",
+ "key_ops": [
+ "encrypt"
+ ],
+ "kid": "Nvhfuq2cCIOAB8XR4Xi9Pr0NP_9CeMzWQGtW_HALz_w",
+ "kty": "RSA",
+ "n": "v965SRmyp8zbG5eNFuDCmmiSeaHpujG2bC_keLSuzvDMLO1WyrUJveaa5bzMoO0pA46pXkmbqHisozVzpiNDLCo6d3z4TrGMeFPf2APIMu-RSrzN56qvHVyIr5caWfHWk-FMRDwAefyNYRHkdYYkgmFK44hhUdtlCAKEv5UQpFZjvh4iI9jVBdGYMyBaKQLhjI5WIh-QG6Za5sSuOCFMnmuyuvN5DflpLFz595Ss-EoBIY-Nil6lCtvcGgR-IbjUYHAOs5ajamTzgeO8kx3VCE9HcyKmyUZsiyiF6IDRp2Bpy3NHTjIz7tmkpTHx7tHnRtlfE2FUv0B6i_QYl_ZA5Q"
+ }
+ ]
+ },
+ "x-ms-sevsnpvm-authorkeydigest": "000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000",
+ "x-ms-sevsnpvm-bootloader-svn": 3,
+ "x-ms-sevsnpvm-familyId": "01000000000000000000000000000000",
+ "x-ms-sevsnpvm-guestsvn": 2,
+ "x-ms-sevsnpvm-hostdata": "670fff86714a650a49b58fadc1e90fedae0eb32dd51e34931c1e7a1839c08f6f",
+ "x-ms-sevsnpvm-idkeydigest": "cf7e12541981e6cafd150b5236785f4364850e2c4963825f9ab1d8091040aea0964bb9a8835f966bdc174d9ad53b4582",
+ "x-ms-sevsnpvm-imageId": "02000000000000000000000000000000",
+ "x-ms-sevsnpvm-is-debuggable": false,
+ "x-ms-sevsnpvm-launchmeasurement": "a1e1a4b64e8de5c664ceee069010441f74cf039065b5b847e82b9d1a7629aaf33d5591c6b18cee48a4dde481aa88d0fb",
+ "x-ms-sevsnpvm-microcode-svn": 115,
+ "x-ms-sevsnpvm-migration-allowed": false,
+ "x-ms-sevsnpvm-reportdata": "7ab000a323b3c873f5b81bbe584e7c1a26bcf40dc27e00f8e0d144b1ed2d14f10000000000000000000000000000000000000000000000000000000000000000",
+ "x-ms-sevsnpvm-reportid": "a489c8578fb2f54d895fc8d000a85b2ff4855c015e4fb7216495c4dba4598345",
+ "x-ms-sevsnpvm-smt-allowed": true,
+ "x-ms-sevsnpvm-snpfw-svn": 8,
+ "x-ms-sevsnpvm-tee-svn": 0,
+ "x-ms-sevsnpvm-uvm-endorsement": {
+ "x-ms-sevsnpvm-guestsvn": "100",
+ "x-ms-sevsnpvm-launchmeasurement": "a1e1a4b64e8de5c664ceee069010441f74cf039065b5b847e82b9d1a7629aaf33d5591c6b18cee48a4dde481aa88d0fb"
+ },
+ "x-ms-sevsnpvm-vmpl": 0,
+ "x-ms-ver": "1.0"
+ }
+}
+```
+## Generating an attestation token
+
+We have open-sourced sidecar container implementations that provide a simple REST interface to get a raw SNP (Secure Nested Paging) report produced by the hardware or an MAA token. The sidecar is available in this [repository](https://github.com/microsoft/confidential-sidecar-containers) and can be deployed with your container group.
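As an illustration only, fetching a token from a deployed sidecar might look like the following. The localhost port, the `/attest/maa` path, and the request body fields are assumptions here; check the repository's README for the actual interface exposed by the sidecar you deploy.

```bash
# Hypothetical call to an attestation sidecar running in the same container group.
# Port, path, and body fields are assumptions; verify them against the
# microsoft/confidential-sidecar-containers documentation.
curl -s -X POST "http://localhost:8080/attest/maa" \
  -H "Content-Type: application/json" \
  -d '{"maa_endpoint": "sharedeus2.eus2.attest.azure.net", "runtime_data": "ZXhhbXBsZQ=="}'
```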
+
+## Next steps
+
+- [Learn how to use attestation to release a secret to your container group](../confidential-computing/skr-flow-confidential-containers-azure-container-instance.md)
+- [Deploy a confidential container group with Azure Resource Manager](./container-instances-tutorial-deploy-confidential-containers-cce-arm.md)
container-registry Container Registry Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-authentication.md
TOKEN=$(az acr login --name <acrName> --expose-token --output tsv --query access
Then, run `docker login`, passing `00000000-0000-0000-0000-000000000000` as the username and using the access token as password: ```console
-docker login myregistry.azurecr.io --username 00000000-0000-0000-0000-000000000000 --password $TOKEN
+docker login myregistry.azurecr.io --username 00000000-0000-0000-0000-000000000000 --password-stdin <<< $TOKEN
``` Likewise, you can use the token returned by `az acr login` with the `helm registry login` command to authenticate with the registry:
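A sketch of that helm flow, assuming the same `$TOKEN` variable and registry name used above, could be:

```bash
# Use the ACR access token as the password for the null GUID username
helm registry login myregistry.azurecr.io \
  --username 00000000-0000-0000-0000-000000000000 \
  --password $TOKEN
```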
container-registry Container Registry Delete https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-delete.md
As mentioned in the [Manifest digest](container-registry-concepts.md#manifest-di
```output [ {
- "digest": "sha256:7ca0e0ae50c95155dbb0e380f37d7471e98d2232ed9e31eece9f9fb9078f2728",
- "tags": [
- "latest"
- ],
- "timestamp": "2018-07-11T21:38:35.9170967Z"
- },
- {
- "digest": "sha256:d2bdc0c22d78cde155f53b4092111d7e13fe28ebf87a945f94b19c248000ceec",
- "tags": [],
- "timestamp": "2018-07-11T21:32:21.1400513Z"
+ "architecture": "amd64",
+ "changeableAttributes": {
+ "deleteEnabled": true,
+ "listEnabled": true,
+ "quarantineDetails": "{\"state\":\"Scan Passed\",\"link\":\"https://aka.ms/test\",\"scanner\":\"Azure Security Monitoring-Qualys Scanner\",\"result\":{\"version\":\"2020-05-13T00:23:31.954Z\",\"summary\":[{\"severity\":\"High\",\"count\":2},{\"severity\":\"Medium\",\"count\":0},{\"severity\":\"Low\",\"count\":0}]}}",
+ "quarantineState": "Passed",
+ "readEnabled": true,
+ "writeEnabled": true
+ },
+ "configMediaType": "application/vnd.docker.container.image.v1+json",
+ "createdTime": "2020-05-16T04:25:14.3112885Z",
+ "digest": "sha256:eef2ef471f9f9d01fd2ed81bd2492ddcbc0f281b0a6e4edb700fbf9025448388",
+ "imageSize": 22906605,
+ "lastUpdateTime": "2020-05-16T04:25:14.3112885Z",
+ "mediaType": "application/vnd.docker.distribution.manifest.v2+json",
+ "os": "linux",
+ "timestamp": "2020-05-16T04:25:14.3112885Z"
} ] ```
-As you can see in the output of the last step in the sequence, there is now an orphaned manifest whose `"tags"` property is an empty list. This manifest still exists within the registry, along with any unique layer data that it references. **To delete such orphaned images and their layer data, you must delete by manifest digest**.
+The `tags` array is removed from the metadata when an image is **untagged**. This manifest still exists within the registry, along with any unique layer data that it references. **To delete such orphaned images and their layer data, you must delete by manifest digest**.
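For example, a deletion by digest might look like the following sketch, assuming a registry named `myregistry`, a repository named `acr-helloworld`, and the digest from the output above:

```bash
# Deleting by digest removes the manifest and any layer data not referenced elsewhere
az acr repository delete \
  --name myregistry \
  --image acr-helloworld@sha256:eef2ef471f9f9d01fd2ed81bd2492ddcbc0f281b0a6e4edb700fbf9025448388
```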
## Automatically purge tags and manifests
cosmos-db Analytical Store Change Data Capture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/analytical-store-change-data-capture.md
In addition to providing incremental data feed from analytical store to diverse
- There's no limitation around the fixed data retention period for which changes are available > [!IMPORTANT]
-> Please note that "from the beginning" means that all data and all transactions since the container creation are availble for CDC, including deletes and updates. To ingest and process deletes and updates, you have to use specific settings in your CDC processes in Azure Synapse or Azure Data Factory. These settings are turned off by default. For more information, click [here](get-started-change-data-capture.md)
+> Please note that if the "Start from beginning" option is selected, the initial load includes a full snapshot of container data in the first run, and changed or incremental data is captured in subsequent runs. Similarly, when the "Start from timestamp" option is selected, the initial load processes the data from the given timestamp, and incremental or changed data is captured in subsequent runs. The `Capture intermediate updates`, `Capture Deletes`, and `Capture Transactional store TTLs` settings, found under the [source options](get-started-change-data-capture.md) tab, determine whether intermediate updates and deletes are captured in sinks.
## Features
cosmos-db Concepts Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/concepts-limits.md
Previously updated : 02/27/2023 Last updated : 04/15/2023
Depending on the current RU/s provisioned and resource settings, each resource c
¹ Serverless containers up to 1 TB are currently in preview with Azure Cosmos DB. To try the new feature, register the *"Azure Cosmos DB Serverless 1 TB Container Preview"* [preview feature in your Azure subscription](../azure-resource-manager/management/preview-features.md).
-## Control plane operations
+## Control plane
-You can [create and manage your Azure Cosmos DB account](how-to-manage-database-account.md) using the Azure portal, Azure PowerShell, Azure CLI, and Azure Resource Manager templates. The following table lists the limits per subscription, account, and number of operations.
+Azure Cosmos DB maintains a resource provider that offers a management layer to create, update, and delete resources in your Azure Cosmos DB account. The resource provider interfaces with the overall Azure Resource Management layer, which is the deployment and management service for Azure. You can [create and manage Azure Cosmos DB resources](how-to-manage-database-account.md) using the Azure portal, Azure PowerShell, Azure CLI, Azure Resource Manager and Bicep templates, the REST API, Azure management SDKs, and third-party tools such as Terraform and Pulumi.
+
+This management layer can also be accessed from the Azure Cosmos DB data plane SDKs used in your applications to create and manage resources within an account. Data plane SDKs also make control plane requests during initial connection to the service to do things like enumerating databases and containers, as well as requesting account keys for authentication.
+
+Each Azure Cosmos DB account has a `master partition`, which contains all of the metadata for the account. It also has a small amount of throughput to support control plane operations. Control plane requests that create, read, update, or delete this metadata consume this throughput. When the throughput consumed by control plane operations exceeds this amount, operations are rate-limited, the same as data plane operations within Azure Cosmos DB. However, unlike throughput for data operations, throughput for the master partition cannot be increased.
+
+Some control plane operations do not consume master partition throughput, such as Get or List Keys. However, unlike requests on data within your Azure Cosmos DB account, resource providers within Azure are not designed for high request volumes. **Control plane operations that exceed the documented limits at sustained levels over consecutive 5-minute periods may experience request throttling, as well as failed or incomplete operations on Azure Cosmos DB resources**.
+
+Control plane operations can be monitored by navigating to the Insights tab for an Azure Cosmos DB account. To learn more, see [Monitor Control Plane Requests](use-metrics.md#monitor-control-plane-requests). You can also use Azure Monitor to create a workbook that monitors [Metadata Requests](monitor-reference.md#request-metrics) and set alerts on them.
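For example, a sketch of pulling the metadata request counts with the Azure CLI, where the resource ID is a placeholder for your Azure Cosmos DB account, could be:

```bash
# List the MetadataRequests metric for an account, aggregated per 5-minute bin
az monitor metrics list \
  --resource "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.DocumentDB/databaseAccounts/<account>" \
  --metric "MetadataRequests" \
  --interval PT5M \
  --aggregation Count
```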
+
+### Resource limits
+
+The following table lists resource limits per subscription or account.
| Resource | Limit | | | | | Maximum number of accounts per subscription | 50 by default. ¹ |
-| Maximum number of regional failovers | 10/hour by default. ¹ ² |
+| Maximum number of databases & containers per account | 500 ² |
+| Maximum throughput supported by an account for metadata operations | 240 RU/s |
¹ You can increase these limits up to a maximum of 1,000 by creating an [Azure Support request](create-support-request-quota-increase.md).
+² This limit can't be increased. It's the total count of databases and containers within an account (for example, 1 database and 499 containers, or 250 databases and 250 containers).
+
+### Request limits
+
+The following table lists request limits per 5-minute interval, per account, unless otherwise specified.
+
+| Operation | Limit |
+| | |
+| Maximum List or Get Keys | 500 ¹ |
+| Maximum Create database & container | 500 |
+| Maximum Get or List database & container | 500 ┬╣ |
+| Maximum Update provisioned throughput | 25 |
+| Maximum regional failover | 10 (per hour) ² |
+| Maximum number of all operations (PUT, POST, PATCH, DELETE, GET) not defined above | 500 |
-² Regional failovers only apply to single region writes accounts. Multi-region write accounts don't require or have any limits on changing the write region.
+¹ Use a [singleton client](nosql/best-practice-dotnet.md#checklist) for SDK instances, and cache keys and database and container references between requests for the lifetime of that instance.
+² Regional failovers only apply to single-region write accounts. Multi-region write accounts don't require or allow changing the write region.
Azure Cosmos DB automatically takes backups of your data at regular intervals. For details on backup retention intervals and windows, see [Online backup and on-demand data restore in Azure Cosmos DB](online-backup-and-restore.md).
Here's a list of limits per account.
| Resource | Limit | | | |
-| Maximum number of databases per account | 500 |
-| Maximum number of containers per database with shared throughput |25 |
-| Maximum number of containers per account | 500 |
+| Maximum number of databases and containers per account | 500 |
+| Maximum number of containers per database with shared throughput | 25 |
| Maximum number of regions | No limit (All Azure regions) | ### Serverless | Resource | Limit | | | |
-| Maximum number of databases per account | 100 |
-| Maximum number of containers per account | 100 |
+| Maximum number of databases and containers per account | 100 |
| Maximum number of regions | 1 (Any Azure region) | ## Per-container limits
Azure Cosmos DB supports [CRUD and query operations](/rest/api/cosmos-db/) again
| Maximum response size (for example, paginated query) | 4 MB | | Maximum number of operations in a transactional batch | 100 |
+Azure Cosmos DB supports execution of triggers during writes. The service supports a maximum of one pre-trigger and one post-trigger per write operation.
+ Once an operation like query reaches the execution timeout or response size limit, it returns a page of results and a continuation token to the client to resume execution. There's no practical limit on the duration a single query can run across pages/continuations. Azure Cosmos DB uses HMAC for authorization. You can use either a primary key, or a [resource token](secure-access-to-data.md) for fine-grained access control to resources. These resources can include containers, partition keys, or items. The following table lists limits for authorization tokens in Azure Cosmos DB.
Azure Cosmos DB uses HMAC for authorization. You can use either a primary key, o
¹ You can increase it by [filing an Azure support ticket](create-support-request-quota-increase.md)
-Azure Cosmos DB supports execution of triggers during writes. The service supports a maximum of one pre-trigger and one post-trigger per write operation.
-
-## Metadata request limits
-
-Azure Cosmos DB maintains system metadata for each account. This metadata allows you to enumerate collections, databases, other Azure Cosmos DB resources, and their configurations for free of charge.
-
-| Resource | Limit |
-| | |
-|Maximum collection create rate per minute| 100|
-|Maximum Database create rate per minute| 100|
-|Maximum provisioned throughput update rate per minute| 5|
-|Maximum throughput supported by an account for metadata operations | 240 RU/s |
- ## Limits for autoscale provisioned throughput See the [Autoscale](provision-throughput-autoscale.md#autoscale-limits) article and [FAQ](autoscale-faq.yml#lowering-the-max-ru-s) for more detailed explanation of the throughput and storage limits with autoscale.
The following table lists the limits specific to MongoDB feature support. Other
| Resource | Limit | | | |
+| Maximum size of a document | 16 MB (UTF-8 length of JSON representation) ¹ |
| Maximum MongoDB query memory size (This limitation is only for 3.2 server version) | 40 MB | | Maximum execution time for MongoDB operations (for 3.2 server version)| 15 seconds| | Maximum execution time for MongoDB operations (for 3.6 and 4.0 server version)| 60 seconds| | Maximum level of nesting for embedded objects / arrays on index definitions | 6 |
-| Idle connection timeout for server side connection closure ¹ | 30 minutes |
+| Idle connection timeout for server side connection closure ² | 30 minutes |
+
+¹ Large document sizes up to 16 MB require feature enablement in the Azure portal. Read the [feature documentation](../cosmos-db/mongodb/feature-support-42.md#data-types) to learn more.
-¹ We recommend that client applications set the idle connection timeout in the driver settings to 2-3 minutes because the [default timeout for Azure LoadBalancer is 4 minutes](../load-balancer/load-balancer-tcp-idle-timeout.md). This timeout ensures that an intermediate load balancer idle doesn't close connections between the client machine and Azure Cosmos DB.
+² We recommend that client applications set the idle connection timeout in the driver settings to 2-3 minutes because the [default timeout for Azure LoadBalancer is 4 minutes](../load-balancer/load-balancer-tcp-idle-timeout.md). This timeout ensures that an intermediate load balancer doesn't close idle connections between the client machine and Azure Cosmos DB.
## Try Azure Cosmos DB Free limits
cosmos-db Configure Synapse Link https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/configure-synapse-link.md
except exceptions.CosmosResourceExistsError:
print('A container with already exists') ```
-## Optional - Disable analytical store in a SQL API container
+## Optional - Disable analytical store
-Analytical store can be disabled in SQL API containers using Azure CLI or PowerShell, by setting `analytical TTL` to `0`.
+Analytical store can be disabled in SQL API containers or in MongoDB API collections by using Azure CLI or PowerShell and setting `analytical TTL` to `0`, as sketched after the notes below.
> [!NOTE] > Please note that currently this action can't be undone. If analytical store is disabled in a container, it can never be re-enabled. > [!NOTE]
-> Please note that disabling analytical store is not available for MongoDB API collections.
-
+> Please note that currently it is not possible to disable Synapse Link at the database account level.
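For a NoSQL (SQL) API container, the following Azure CLI sketch shows the idea; the account, database, and container names are placeholders, and you should verify the `--analytical-storage-ttl` parameter against the current `az cosmosdb` reference before relying on it.

```bash
# Disable analytical store on an existing container by setting analytical TTL to 0
az cosmosdb sql container update \
  --account-name <cosmos-account> \
  --resource-group <resource-group> \
  --database-name <database> \
  --name <container> \
  --analytical-storage-ttl 0
```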
## <a id="connect-to-cosmos-database"></a> Connect to a Synapse workspace
cosmos-db How To Manage Database Account https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/how-to-manage-database-account.md
Previously updated : 03/08/2023 Last updated : 04/14/2023
[!INCLUDE[NoSQL, MongoDB, Cassandra, Gremlin, Table](includes/appliesto-nosql-mongodb-cassandra-gremlin-table.md)]
-This article describes how to manage various tasks on an Azure Cosmos DB account by using the Azure portal.
+This article describes how to manage various tasks on an Azure Cosmos DB account by using the Azure portal. Azure Cosmos DB can also be managed with other Azure management clients including [Azure PowerShell](manage-with-powershell.md), [Azure CLI](nosql/manage-with-cli.md), [Azure Resource Manager templates](./manage-with-templates.md), [Bicep](nosql/manage-with-bicep.md), and [Terraform](nosql/samples-terraform.md).
> [!TIP]
-> Azure Cosmos DB can also be managed with other Azure management clients including [Azure PowerShell](manage-with-powershell.md), [Azure CLI](sql/manage-with-cli.md), [Azure Resource Manager templates](./manage-with-templates.md), and [Bicep](sql/manage-with-bicep.md).
+> The management API for Azure Cosmos DB, or *control plane*, is not designed for high request volumes like the rest of the service. To learn more, see [Control Plane Service Limits](concepts-limits.md#control-plane).
## Create an account
After an Azure Cosmos DB account is configured for service-managed failover, the
:::image type="content" source="./media/how-to-manage-database-account/manual-failover.png" alt-text="Screenshot of the manual failover portal menu."::: + ## Next steps For more information and examples on how to manage the Azure Cosmos DB account as well as databases and containers, read the following articles:
cosmos-db How To Setup Rbac https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/how-to-setup-rbac.md
Title: Configure role-based access control with Azure AD
description: Learn how to configure role-based access control with Azure Active Directory for your Azure Cosmos DB account ++ Last updated : 04/14/2023 -- Previously updated : 02/27/2023
The Azure Cosmos DB data plane role-based access control is built on concepts th
## <a id="permission-model"></a> Permission model > [!IMPORTANT]
-> This permission model covers only database operations that involve reading and writing data. It does *not* cover any kind of management operations on management resources, for example:
->
+> This permission model covers only database operations that involve reading and writing data. It **does not** cover any kind of management operations on management resources, including:
> - Create/Replace/Delete Database > - Create/Replace/Delete Container
-> - Replace Container Throughput
+> - Read/Replace Container Throughput
> - Create/Replace/Delete/Read Stored Procedures > - Create/Replace/Delete/Read Triggers > - Create/Replace/Delete/Read User Defined Functions
The Azure Cosmos DB SDKs issue read-only metadata requests during initialization
- The partition key of your containers or their indexing policy. - The list of physical partitions that make a container and their addresses.
-They don't* fetch any of the data that you've stored in your account.
+They **do not** fetch any of the data that you've stored in your account.
To ensure the best transparency of our permission model, these metadata requests are explicitly covered by the `Microsoft.DocumentDB/databaseAccounts/readMetadata` action. This action should be allowed in every situation where your Azure Cosmos DB account is accessed through one of the Azure Cosmos DB SDKs. It can be assigned (through a role assignment) at any level in the Azure Cosmos DB hierarchy (that is, account, database, or container).
The actual metadata requests allowed by the `Microsoft.DocumentDB/databaseAccoun
| Database | &bull; Reading database metadata <br /> &bull; Listing the containers under the database <br /> &bull; For each container under the database, the allowed actions at the container scope | | Container | &bull; Reading container metadata <br /> &bull; Listing physical partitions under the container <br /> &bull; Resolving the address of each physical partition |
+> [!IMPORTANT]
+> Throughput is not included in the metadata for this action.
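As an illustrative sketch (the account name, resource group, file name, and role name are placeholders), a custom role definition that grants the metadata action along with item-level reads could be created with the Azure CLI:

```bash
# role-definition.json: grants metadata reads plus point reads and queries on items
cat > role-definition.json <<'EOF'
{
  "RoleName": "MyReadOnlyRole",
  "Type": "CustomRole",
  "AssignableScopes": ["/"],
  "Permissions": [{
    "DataActions": [
      "Microsoft.DocumentDB/databaseAccounts/readMetadata",
      "Microsoft.DocumentDB/databaseAccounts/sqlDatabases/containers/items/read",
      "Microsoft.DocumentDB/databaseAccounts/sqlDatabases/containers/executeQuery"
    ]
  }]
}
EOF

# Create the role definition at the account scope
az cosmosdb sql role definition create \
  --account-name <cosmos-account> \
  --resource-group <resource-group> \
  --body @role-definition.json
```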
+ ## Built-in role definitions Azure Cosmos DB exposes two built-in role definitions:
cosmos-db Pre Migration Steps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/pre-migration-steps.md
Previously updated : 02/27/2023 Last updated : 04/20/2023
-# Pre-migration steps for data migrations from MongoDB to Azure Cosmos DB for MongoDB
+# Premigration steps for data migrations from MongoDB to Azure Cosmos DB for MongoDB
[!INCLUDE[MongoDB](../includes/appliesto-mongodb.md)]
Your goal in pre-migration is to:
Follow these steps to perform a thorough pre-migration
-1. [Discover your existing MongoDB resources and create a data estate spreadsheet to track them](#pre-migration-discovery)
-2. [Assess the readiness of your existing MongoDB resources for data migration](#pre-migration-assessment)
-3. [Map your existing MongoDB resources to new Azure Cosmos DB resources](#pre-migration-mapping)
-4. [Plan the logistics of migration process end-to-end, before you kick off the full-scale data migration](#execution-logistics)
+1. [Discover your existing MongoDB resources and assess their readiness for data migration](#pre-migration-assessment)
+2. [Map your existing MongoDB resources to new Azure Cosmos DB resources](#pre-migration-mapping)
+3. [Plan the logistics of migration process end-to-end, before you kick off the full-scale data migration](#execution-logistics)
Then, execute your migration in accordance with your pre-migration plan.
All of the above steps are critical for ensuring a successful migration.
When you plan a migration, we recommend that whenever possible you plan at the per-resource level.
-The [Database Migration Assistant](https://github.com/AzureCosmosDB/Cosmos-DB-Migration-Assistant-for-API-for-MongoDB)(DMA) assists you with the [Discovery](#programmatic-discovery-using-the-database-migration-assistant) and [Assessment](#programmatic-assessment-using-the-database-migration-assistant) stages of the planning.
-
-## Pre-migration discovery
-
-The first pre-migration step is resource discovery. In this step, you need to create a **data estate migration spreadsheet**.
+## Pre-migration assessment
-* This sheet contains a comprehensive list of the existing resources (databases or collections) in your MongoDB data estate.
-* The purpose of this spreadsheet is to enhance your productivity and help you to plan migration from end-to-end.
-* You're recommended to extend this document and use it as a tracking document throughout the migration process.
+The first pre-migration step is to discover your existing MongoDB resources and assess the readiness of your resources for migration.
-### Programmatic discovery using the Database Migration Assistant
+Discovery involves creating a comprehensive list of the existing resources (databases or collections) in your MongoDB data estate.
-You may use the [Database Migration Assistant](https://github.com/AzureCosmosDB/Cosmos-DB-Migration-Assistant-for-API-for-MongoDB) (DMA) to assist you with the discovery stage and create the data estate migration sheet programmatically.
+Assessment involves finding out whether you're using the [features and syntax that are supported](./feature-support-42.md). It also includes making sure you're adhering to the [limits and quotas](../concepts-limits.md#per-account-limits). The aim of this stage is to create a list of incompatibilities and warnings, if any. After you have the assessment results, you can try to address the findings during the rest of the migration planning.
-It's easy to [setup and run DMA](https://github.com/AzureCosmosDB/Cosmos-DB-Migration-Assistant-for-API-for-MongoDB#how-to-run-the-dma) through an Azure Data Studio client. It can be run from any machine connected to your source MongoDB environment.
+There are three ways to complete the pre-migration assessment. We recommend that you use the [Azure Cosmos DB Migration for MongoDB extension](#azure-cosmos-db-migration-for-mongodb-extension).
-You can use either one of the following DMA output files as the data estate migration spreadsheet:
+### Azure Cosmos DB Migration for MongoDB extension
-* `workload_database_details.csv` - Gives a database-level view of the source workload. Columns in file are: Database Name, Collection count, Document Count, Average Document Size, Data Size, Index Count and Index Size.
-* `workload_collection_details.csv` - Gives a collection-level view of the source workload. Columns in file are: Database Name, Collection Name, Doc Count, Average Document Size, Data size, Index Count, Index Size and Index definitions.
+The [Azure Cosmos DB Migration for MongoDB extension](/sql/azure-data-studio/extensions/database-migration-for-mongo-extension) in Azure Data Studio helps you assess a MongoDB workload for migrating to Azure Cosmos DB for MongoDB. You can use this extension to run an end-to-end assessment on your workload and find out the actions that you may need to take to seamlessly migrate your workloads to Azure Cosmos DB. During the assessment of a MongoDB endpoint, the extension reports all the discovered resources.
-Here's a sample database-level migration spreadsheet created by DMA:
-| DB Name | Collection Count | Doc Count | Avg Doc Size | Data Size | Index Count | Index Size |
-| | | | | | | |
-| `bookstoretest` | 2 | 192200 | 4144 | 796572532 | 7 | 260636672 |
-| `cosmosbookstore` | 1 | 96604 | 4145 | 400497620 | 1 | 1814528 |
-| `geo` | 2 | 25554 | 252 | 6446542 | 2 | 266240 |
-| `kagglemeta` | 2 | 87934912 | 190 | 16725184704 | 2 | 891363328 |
-| `pe_orig` | 2 | 57703820 | 668 | 38561434711 | 2 | 861605888 |
-| `portugeseelection` | 2 | 30230038 | 687 | 20782985862 | 1 | 450932736 |
-| `sample_mflix` | 5 | 75583 | 691 | 52300763 | 5 | 798720 |
-| `test` | 1 | 22 | 545 | 12003 | 0 | 0 |
-| `testcol` | 26 | 46 | 88 | 4082 | 32 | 589824 |
-| `testhav` | 3 | 2 | 528 | 1057 | 3 | 36864 |
-| **TOTAL:** | **46** | **176258781** | | **72.01 GB** | | **2.3 GB** |
+> [!NOTE]
+> The Azure Cosmos DB Migration for MongoDB extension does not perform an end-to-end assessment. We recommend that you review [the supported features and syntax](./feature-support-42.md) and [Azure Cosmos DB limits and quotas](../concepts-limits.md#per-account-limits) in detail, and perform a proof of concept prior to the actual migration.
-### Manual discovery
+### Manual discovery (legacy)
-Alternately, you may refer to the sample spreadsheet in this guide and create a similar document yourself.
+Alternatively, you could create a **data estate migration spreadsheet**. The purpose of this spreadsheet is to enhance your productivity, help you plan the migration end to end, and serve as a tracking document throughout the migration process.
+* This sheet contains a comprehensive list of the existing resources (databases or collections) in your MongoDB data estate.
* The spreadsheet should be structured as a record of your data estate resources, in list form. * Each row corresponds to a resource (database or collection). * Each column corresponds to a property of the resource; start with at least *name* and *data size (GB)* as columns.
Here are some tools you can use for discovering resources:
* [MongoDB Shell](https://www.mongodb.com/try/download/shell) * [MongoDB Compass](https://www.mongodb.com/try/download/compass)
-## Pre-migration assessment
-
-Second, as a prelude to planning your migration, assess the readiness of resources in your data estate for migration.
-
-Assessment involves finding out whether you're using the [features and syntax that are supported](./feature-support-42.md). It also includes making sure you're adhering to the [limits and quotas](../concepts-limits.md#per-account-limits). The aim of this stage is to create a list of incompatibilities and warnings, if any. After you have the assessment results, you can try to address the findings during rest of the migration planning.
-
-### Programmatic assessment using the Database Migration Assistant
+Go through the spreadsheet and verify each collection against the [supported features and syntax](./feature-support-42.md), and [Azure Cosmos DB limits and quotas](../concepts-limits.md#per-account-limits) in detail.
-[Database Migration Assistant](https://github.com/AzureCosmosDB/Cosmos-DB-Migration-Assistant-for-API-for-MongoDB) (DMA) also assists you with the assessment stage of pre-migration planning.
-
-Refer to the section [Programmatic discovery using the Database Migration Assistant](#programmatic-discovery-using-the-database-migration-assistant) to know how to setup and run DMA.
-
-The DMA notebook runs a few assessment rules against the resource list it gathers from source MongoDB. The assessment result lists the required and recommended changes needed to proceed with the migration.
-
-The results are printed as an output in the DMA notebook and saved to a CSV file - `assessment_result.csv`.
+### Database Migration Assistant utility (legacy)
> [!NOTE]
-> Database Migration Assistant is a preliminary utility meant to assist you with the pre-migration steps. It does not perform an end-to-end assessment.
-> In addition to running the DMA, we also recommend you to go through [the supported features and syntax](./feature-support-42.md), [Azure Cosmos DB limits and quotas](../concepts-limits.md#per-account-limits) in detail, as well as perform a proof-of-concept prior to the actual migration.
+> Database Migration Assistant is a legacy utility meant to assist you with the pre-migration steps. We recommend that you use the [Azure Cosmos DB Migration for MongoDB extension](#azure-cosmos-db-migration-for-mongodb-extension) for all pre-migration steps.
+
+You may use the [Database Migration Assistant](programmatic-database-migration-assistant-legacy.md) (DMA) utility to assist you with pre-migration steps.
## Pre-migration mapping
cosmos-db Programmatic Database Migration Assistant Legacy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/programmatic-database-migration-assistant-legacy.md
+
+ Title: Database Migration Assistant utility
+
+description: This doc provides an overview of the Database Migration Assistant legacy utility.
++++++ Last updated : 04/20/2023++
+# Database Migration Assistant utility (legacy)
++
+> [!IMPORTANT]
+> Database Migration Assistant is a preliminary legacy utility meant to assist you with the pre-migration steps. Microsoft recommends that you use the [Azure Cosmos DB Migration for MongoDB extension](/sql/azure-data-studio/extensions/database-migration-for-mongo-extension) for all pre-migration steps.
+
+### Programmatic discovery using the Database Migration Assistant
+
+You may use the [Database Migration Assistant](https://github.com/AzureCosmosDB/Cosmos-DB-Migration-Assistant-for-API-for-MongoDB) (DMA) to assist you with the discovery stage and create the data estate migration sheet programmatically.
+
+It's easy to [set up and run DMA](https://github.com/AzureCosmosDB/Cosmos-DB-Migration-Assistant-for-API-for-MongoDB#how-to-run-the-dma) through an Azure Data Studio client. It can be run from any machine connected to your source MongoDB environment.
+
+You can use either one of the following DMA output files as the data estate migration spreadsheet:
+
+* `workload_database_details.csv` - Gives a database-level view of the source workload. Columns in file are: Database Name, Collection count, Document Count, Average Document Size, Data Size, Index Count and Index Size.
+* `workload_collection_details.csv` - Gives a collection-level view of the source workload. Columns in file are: Database Name, Collection Name, Doc Count, Average Document Size, Data size, Index Count, Index Size and Index definitions.
+
+Here's a sample database-level migration spreadsheet created by DMA:
+
+| DB Name | Collection Count | Doc Count | Avg Doc Size | Data Size | Index Count | Index Size |
+| | | | | | | |
+| `bookstoretest` | 2 | 192200 | 4144 | 796572532 | 7 | 260636672 |
+| `cosmosbookstore` | 1 | 96604 | 4145 | 400497620 | 1 | 1814528 |
+| `geo` | 2 | 25554 | 252 | 6446542 | 2 | 266240 |
+| `kagglemeta` | 2 | 87934912 | 190 | 16725184704 | 2 | 891363328 |
+| `pe_orig` | 2 | 57703820 | 668 | 38561434711 | 2 | 861605888 |
+| `portugeseelection` | 2 | 30230038 | 687 | 20782985862 | 1 | 450932736 |
+| `sample_mflix` | 5 | 75583 | 691 | 52300763 | 5 | 798720 |
+| `test` | 1 | 22 | 545 | 12003 | 0 | 0 |
+| `testcol` | 26 | 46 | 88 | 4082 | 32 | 589824 |
+| `testhav` | 3 | 2 | 528 | 1057 | 3 | 36864 |
+| **TOTAL:** | **46** | **176258781** | | **72.01 GB** | | **2.3 GB** |
+
+### Programmatic assessment using the Database Migration Assistant
+
+[Database Migration Assistant](https://github.com/AzureCosmosDB/Cosmos-DB-Migration-Assistant-for-API-for-MongoDB) (DMA) also assists you with the assessment stage of pre-migration planning.
+
+Refer to the section [Programmatic discovery using the Database Migration Assistant](#programmatic-discovery-using-the-database-migration-assistant) to learn how to set up and run DMA.
+
+The DMA notebook runs a few assessment rules against the resource list it gathers from source MongoDB. The assessment result lists the required and recommended changes needed to proceed with the migration.
+
+The results are printed as an output in the DMA notebook and saved to a CSV file - `assessment_result.csv`.
+
+> [!NOTE]
+> Database Migration Assistant does not perform an end-to-end assessment. It is a preliminary utility meant to assist you with the pre-migration steps.
+
cosmos-db Monitor Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/monitor-reference.md
Previously updated : 12/07/2020 Last updated : 04/14/2023
All the metrics corresponding to Azure Cosmos DB are stored in the namespace **A
|Metric (Metric Display Name)|Unit (Aggregation Type) |Description|Dimensions| Time granularities| Legacy metric mapping | Usage | ||||| | | | | TotalRequests (Total Requests) | Count (Count) | Number of requests made| DatabaseName, CollectionName, Region, StatusCode| All | TotalRequests, Http 2xx, Http 3xx, Http 400, Http 401, Internal Server error, Service Unavailable, Throttled Requests, Average Requests per Second | Used to monitor requests per status code, container at a minute granularity. To get average requests per second, use Count aggregation at minute and divide by 60. |
-| MetadataRequests (Metadata Requests) |Count (Count) | Count of metadata requests. Azure Cosmos DB maintains system metadata container for each account, that allows you to enumerate collections, databases, etc., and their configurations, free of charge. | DatabaseName, CollectionName, Region, StatusCode| All| |Used to monitor throttles due to metadata requests.|
+| MetadataRequests (Metadata Requests) |Count (Count) | Count of Azure Resource Manager metadata requests. Metadata has request limits. See [Control Plane Limits](concepts-limits.md#control-plane) for more information. | DatabaseName, CollectionName, Region, StatusCode| All | | Used to monitor metadata requests in scenarios where requests are being throttled. See [Monitor Control Plane Requests](use-metrics.md#monitor-control-plane-requests) for more information. |
| MongoRequests (Mongo Requests) | Count (Count) | Number of Mongo Requests Made | DatabaseName, CollectionName, Region, CommandName, ErrorCode| All |Mongo Query Request Rate, Mongo Update Request Rate, Mongo Delete Request Rate, Mongo Insert Request Rate, Mongo Count Request Rate| Used to monitor Mongo request errors, usages per command type. | ### Request Unit metrics
The following table lists the properties of resource logs in Azure Cosmos DB. Th
| **duration** | **duration_d** | The duration of the operation, in milliseconds. | | **requestLength** | **requestLength_s** | The length of the request, in bytes. | | **responseLength** | **responseLength_s** | The length of the response, in bytes.|
-| **resourceTokenPermissionId** | **resourceTokenPermissionId_s** | This property indicates the resource token permission Id that you have specified. To learn more about permissions, see the [Secure access to your data](./secure-access-to-data.md#permissions) article. |
+| **resourceTokenPermissionId** | **resourceTokenPermissionId_s** | This property indicates the resource token permission ID that you have specified. To learn more about permissions, see the [Secure access to your data](./secure-access-to-data.md#permissions) article. |
| **resourceTokenPermissionMode** | **resourceTokenPermissionMode_s** | This property indicates the permission mode that you have set when creating the resource token. The permission mode can have values such as "all" or "read". To learn more about permissions, see the [Secure access to your data](./secure-access-to-data.md#permissions) article. | | **resourceTokenUserRid** | **resourceTokenUserRid_s** | This value is non-empty when [resource tokens](./secure-access-to-data.md#resource-tokens) are used for authentication. The value points to the resource ID of the user. | | **responseLength** | **responseLength_s** | The length of the response, in bytes.|
cosmos-db Sdk Connection Modes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/sdk-connection-modes.md
As detailed in the [introduction](#available-connectivity-modes), Direct mode cl
### Routing
-When an Azure Cosmos DB SDK on Direct mode is performing an operation, it needs to resolve which backend replica to connect to. The first step is knowing which physical partition should the operation go to, and for that, the SDK obtains the container information that includes the [partition key definition](../partitioning-overview.md#choose-partitionkey) from a Gateway node. It also needs the routing information that contains the replicas' TCP addresses. The routing information is available also from Gateway nodes and both are considered [metadata](../concepts-limits.md#metadata-request-limits). Once the SDK obtains the routing information, it can proceed to open the TCP connections to the replicas belonging to the target physical partition and execute the operations.
+When an Azure Cosmos DB SDK in Direct mode performs an operation, it needs to resolve which backend replica to connect to. The first step is knowing which physical partition the operation should go to, and for that, the SDK obtains the container information, which includes the [partition key definition](../partitioning-overview.md#choose-partitionkey), from a Gateway node. It also needs the routing information that contains the replicas' TCP addresses. The routing information is also available from Gateway nodes, and both are considered [Control Plane metadata](../concepts-limits.md#control-plane). Once the SDK obtains the routing information, it can proceed to open the TCP connections to the replicas belonging to the target physical partition and execute the operations.
Each replica set contains one primary replica and three secondaries. Write operations are always routed to primary replica nodes while read operations can be served from primary or secondary nodes.
There are two factors that dictate the number of TCP connections the SDK will op
Each established connection can serve a configurable number of concurrent operations. If the volume of concurrent operations exceeds this threshold, new connections will be open to serve them, and it's possible that for a physical partition, the number of open connections exceeds the steady state number. This behavior is expected for workloads that might have spikes in their operational volume. For the .NET SDK this configuration is set by [CosmosClientOptions.MaxRequestsPerTcpConnection](/dotnet/api/microsoft.azure.cosmos.cosmosclientoptions.maxrequestspertcpconnection), and for the Java SDK you can customize using [DirectConnectionConfig.setMaxRequestsPerConnection](/java/api/com.azure.cosmos.directconnectionconfig.setmaxrequestsperconnection).
-By default, connections are permanently maintained to benefit the performance of future operations (opening a connection has computational overhead). There might be some scenarios where you might want to close connections that are unused for some time understanding that this might affect future operations slightly. For the .NET SDK this configuration is set by [CosmosClientOptions.IdleTcpConnectionTimeout](/dotnet/api/microsoft.azure.cosmos.cosmosclientoptions.idletcpconnectiontimeout), and for the Java SDK you can customize using [DirectConnectionConfig.setIdleConnectionTimeout](/java/api/com.azure.cosmos.directconnectionconfig.setidleconnectiontimeout). It isn't recommended to set these configurations to low values as it might cause connections to be frequently closed and effect overall performance.
+By default, connections are permanently maintained to benefit the performance of future operations (opening a connection has computational overhead). There might be some scenarios where you might want to close connections that are unused for some time understanding that this might affect future operations slightly. For the .NET SDK this configuration is set by [CosmosClientOptions.IdleTcpConnectionTimeout](/dotnet/api/microsoft.azure.cosmos.cosmosclientoptions.idletcpconnectiontimeout), and for the Java SDK you can customize using [DirectConnectionConfig.setIdleConnectionTimeout](/java/api/com.azure.cosmos.directconnectionconfig.setidleconnectiontimeout). It isn't recommended to set these configurations to low values as it might cause connections to be frequently closed and affect overall performance.
### Language specific implementation details
cosmos-db Troubleshoot Request Rate Too Large https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/troubleshoot-request-rate-too-large.md
Follow the guidance in [Step 1](#step-1-check-the-metrics-to-determine-the-perce
Another common question that arises is, **Why is normalized RU consumption 100%, but autoscale didn't scale to the max RU/s?** This typically occurs for workloads that have temporary or intermittent spikes of usage. When you use autoscale, Azure Cosmos DB only scales the RU/s to the maximum throughput when the normalized RU consumption is 100% for a sustained, continuous period of time in a 5-second interval. This is done to ensure the scaling logic is cost friendly to the user, as it ensures that single, momentary spikes don't lead to unnecessary scaling and higher cost. When there are momentary spikes, the system typically scales up to a value higher than the RU/s it previously scaled to, but lower than the max RU/s. Learn more about how to [interpret the normalized RU consumption metric with autoscale](../monitor-normalized-request-units.md#normalized-ru-consumption-and-autoscale).+ ## Rate limiting on metadata requests Metadata rate limiting can occur when you're performing a high volume of metadata operations on databases and/or containers. Metadata operations include:
Metadata rate limiting can occur when you're performing a high volume of metadat
- List databases or containers in an Azure Cosmos DB account - Query for offers to see the current provisioned throughput
-There's a system-reserved RU limit for these operations, so increasing the provisioned RU/s of the database or container will have no impact and isn't recommended. See [limits on metadata operations](../concepts-limits.md#metadata-request-limits).
+There's a system-reserved RU limit for these operations, so increasing the provisioned RU/s of the database or container will have no impact and isn't recommended. See [Control Plane Service Limits](../concepts-limits.md#control-plane).
#### How to investigate Navigate to **Insights** > **System** > **Metadata Requests By Status Code**. Filter to a specific database and container if desired.
cosmos-db Use Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/use-metrics.md
Previously updated : 03/13/2023 Last updated : 04/14/2023
IReadOnlyDictionary<string, QueryMetrics> metrics = result.QueryMetrics;
*QueryMetrics* provides details on how long each component of the query took to execute. The most common root cause for long running queries is scans, meaning the query was unable to apply the indexes. This problem can be resolved with a better filter condition.
+## Monitor control plane requests
+
+Azure Cosmos DB applies limits on the number of metadata requests that can be made over consecutive 5-minute intervals. Control plane requests that go over these limits may experience throttling. Metadata requests may, in some cases, consume throughput against a `master partition` within an account that contains all of an account's metadata. Control plane requests that go over the throughput amount will experience rate limiting (429s).
+
+To get started, head to the [Azure portal](https://portal.azure.com) and navigate to the **Insights** pane. From this pane, open the **System** tab. The System tab shows two charts: one that shows all metadata requests for an account, and a second that shows metadata request throughput consumption from the account's `master partition`, which stores the account's metadata.
+++
+The Metadata Request by Status Code graph above aggregates requests at increasingly larger granularity as you increase the Time Range. The largest Time Range you can use for a 5-minute time bin is 4 hours. To monitor metadata requests over a greater time range with specific granularity, use Azure Metrics. Create a new chart and select the Metadata requests metric. In the upper right corner, select 5 minutes for Time granularity, as seen below. Metrics also allow you to [create alerts](create-alerts.md) on them, which makes them more useful than Insights.
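For example, a sketch of an alert on sustained metadata request volume with the Azure CLI (the threshold, names, and resource ID are placeholders to adjust for your account) could be:

```bash
# Alert when metadata requests exceed 400 in a 5-minute window
az monitor metrics alert create \
  --name "cosmos-metadata-requests" \
  --resource-group <resource-group> \
  --scopes "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.DocumentDB/databaseAccounts/<account>" \
  --condition "total MetadataRequests > 400" \
  --window-size 5m \
  --evaluation-frequency 5m \
  --description "Sustained control plane request volume"
```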
+++ ## Next steps You might want to learn more about improving database performance by reading the following articles:
cost-management-billing Reservation Discount Application https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/reservations/reservation-discount-application.md
Read the following articles that apply to you to learn how discounts apply to a
- [Azure SQL Edge](discount-sql-edge.md) - [Database for MariaDB](understand-reservation-charges-mariadb.md) - [Database for MySQL](understand-reservation-charges-mysql.md)-- [Database for PostgreSQL](understand-reservation-charges-postgresql.md)
+- [Database for PostgreSQL](../../postgresql/single-server/concept-reserved-pricing.md)
- [Databricks](reservation-discount-databricks.md) - [Data Explorer](understand-azure-data-explorer-reservation-charges.md) - [Dedicated Hosts](billing-understand-dedicated-hosts-reservation-charges.md)
cost-management-billing Understand Reservation Charges Postgresql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/reservations/understand-reservation-charges-postgresql.md
- Title: Understand reservation discount - Azure Database for PostgreSQL Single server
-description: Learn how a reservation discount is applied to Azure Database for PostgreSQL Single servers.
----- Previously updated : 12/06/2022--
-# How a reservation discount is applied to Azure Database for PostgreSQL Single server
-
-After you buy an Azure Database for PostgreSQL Single server reserved capacity, the reservation discount is automatically applied to PostgreSQL Single servers databases that match the attributes and quantity of the reservation. A reservation covers only the compute costs of your Azure Database for PostgreSQL Single server. You're charged for storage and networking at the normal rates.
-
-## How reservation discount is applied
-
-A reservation discount is ***use-it-or-lose-it***. So, if you don't have matching resources for any hour, then you lose a reservation quantity for that hour. You can't carry forward unused reserved hours.
-
-When you shut down a resource, the reservation discount automatically applies to another matching resource in the specified scope. If no matching resources are found in the specified scope, then the reserved hours are lost.
-
-Stopped resources are billed and continue to use reservation hours. Deallocate or delete resources or scale-in other resources to use your available reservation hours with other workloads.
-
-## Discount applied to Azure Database for PostgreSQL Single server
-
-The Azure Database for PostgreSQL Single server reserved capacity discount is applied to running your PostgreSQL Single server on an hourly basis. The reservation that you buy is matched to the compute usage emitted by the running Azure Database for PostgreSQL Single server. For PostgreSQL Single servers that don't run the full hour, the reservation is automatically applied to other Azure Database for PostgreSQL Single server matching the reservation attributes. The discount can apply to Azure Database for PostgreSQL Single servers that are running concurrently. If you don't have a PostgreSQL Single server that run for the full hour that matches the reservation attributes, you don't get the full benefit of the reservation discount for that hour.
-
-The following examples show how the Azure Database for PostgreSQL Single server reserved capacity discount applies depending on the number of cores you bought, and when they're running.
-
-* **Example 1**: You buy an Azure Database for PostgreSQL Single server reserved capacity for an 8 vCore. If you are running a 16 vCore Azure Database for PostgreSQL Single server that matches the rest of the attributes of the reservation, you're charged the pay-as-you-go price for 8 vCore of your PostgreSQL Single server compute usage and you get the reservation discount for one hour of 8 vCore PostgreSQL Single server compute usage.</br>
-
-For the rest of these examples, assume that the Azure Database for PostgreSQL Single server reserved capacity you buy is for a 16 vCore Azure Database for PostgreSQL Single server and the rest of the reservation attributes match the running PostgreSQL Single servers.
-
-* **Example 2**: You run two Azure Database for PostgreSQL Single servers with 8 vCore each for an hour. The 16 vCore reservation discount is applied to compute usage for both the 8 vCore Azure Database for PostgreSQL Single server.
-
-* **Example 3**: You run one 16 vCore Azure Database for PostgreSQL Single server from 1 pm to 1:30 pm. You run another 16 vCore Azure Database for PostgreSQL Single server from 1:30 to 2 pm. Both are covered by the reservation discount.
-
-* **Example 4**: You run one 16 vCore Azure Database for PostgreSQL Single server from 1 pm to 1:45 pm. You run another 16 vCore Azure Database for PostgreSQL Single server from 1:30 to 2 pm. You're charged the pay-as-you-go price for the 15-minute overlap. The reservation discount applies to the compute usage for the rest of the time.
-
-To understand and view the application of your Azure Reservations in billing usage reports, see [Understand Azure reservation usage](./understand-reserved-instance-usage-ea.md).
-
-## Next steps
-
-If you have questions or need help, [create a support request](https://go.microsoft.com/fwlink/?linkid=2083458).
cost-management-billing Create Sql License Assignments https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/scope-level/create-sql-license-assignments.md
Title: Create SQL Server license assignments for Azure Hybrid Benefit
description: This article explains how to create SQL Server license assignments for Azure Hybrid Benefit. Previously updated : 04/06/2023 Last updated : 04/20/2023
# Create SQL Server license assignments for Azure Hybrid Benefit
-The new centralized Azure Hybrid Benefit (preview) experience in the Azure portal supports SQL Server license assignments at the account level or at a particular subscription level. When the assignment is created at the account level, Azure Hybrid Benefit discounts are automatically applied to SQL resources in all subscriptions in the account up to the license count specified in the assignment.
-
-> [!IMPORTANT]
-> Centrally-managed Azure Hybrid Benefit is currently in PREVIEW.
-> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+The centralized Azure Hybrid Benefit experience in the Azure portal supports SQL Server license assignments at the account level or at a particular subscription level. When the assignment is created at the account level, Azure Hybrid Benefit discounts are automatically applied to SQL resources in all the account's subscriptions, up to the license quantity specified in the assignment.
For each license assignment, a scope is selected and then licenses are assigned to the scope. Each scope can have multiple license entries.
+Here's a video demonstrating how [centralized Azure Hybrid Benefit works](https://www.youtube.com/watch?v=LvtUXO4wcjs).
+ ## Prerequisites The following prerequisites must be met to create SQL Server license assignments.
The following prerequisites must be met to create SQL Server license assignments
- Your organization has a supported agreement type and supported offer. - You're a member of a role that has permissions to assign SQL licenses. - Your organization has SQL Server core licenses with Software Assurance or core subscription licenses available to assign to Azure.-- Your organization is enrolled to automatic registration of the Azure SQL VMs with the IaaS extension. To learn more, see [Automatic registration with SQL IaaS Agent extension](/azure/azure-sql/virtual-machines/windows/sql-agent-extension-automatic-registration-all-vms).
+- Your organization is already enrolled in automatic registration of the Azure SQL VMs with the IaaS extension. To learn more, see [SQL IaaS extension registration options for Cost Management administrators](sql-iaas-extension-registration.md).
> [!IMPORTANT] > Failure to meet this prerequisite will cause Azure to produce incomplete data about your current Azure Hybrid Benefit usage. This situation could lead to incorrect license assignments and might result in unnecessary pay-as-you-go charges for SQL Server licenses.
In the following procedure, you navigate from **Cost Management + Billing** to *
:::image type="content" source="./media/create-sql-license-assignments/select-billing-profile.png" alt-text="Screenshot showing billing profile selection." lightbox="./media/create-sql-license-assignments/select-billing-profile.png" ::: 1. In the left menu, select **Reservations + Hybrid Benefit**. :::image type="content" source="./media/create-sql-license-assignments/select-reservations.png" alt-text="Screenshot showing Reservations + Hybrid Benefit selection." :::
-1. Select **Add** and then in the list, select **Azure Hybrid Benefit (Preview)**.
+1. Select **Add** and then in the list, select **Azure Hybrid Benefit**.
:::image type="content" source="./media/create-sql-license-assignments/select-azure-hybrid-benefit.png" alt-text="Screenshot showing Azure Hybrid Benefit selection." lightbox="./media/create-sql-license-assignments/select-azure-hybrid-benefit.png" :::
-1. On the next screen, select **Begin to assign licenses**.
- :::image type="content" source="./media/create-sql-license-assignments/get-started-centralized.png" alt-text="Screenshot showing Add SQL hybrid benefit selection" lightbox="./media/create-sql-license-assignments/get-started-centralized.png" :::
+1. On the next screen, select **Assign licenses**.
+ :::image type="content" source="./media/create-sql-license-assignments/get-started-centralized.png" alt-text="Screenshot showing Get started with Centrally Managed Azure Hybrid Benefit selection." lightbox="./media/create-sql-license-assignments/get-started-centralized.png" :::
If you don't see the page, and instead see the message `You are not the Billing Admin on the selected billing scope` then you don't have the required permission to assign a license. If so, you need to get the required permission. For more information, see [Prerequisites](#prerequisites).
-1. Choose a scope and then enter the license count to use for each SQL Server edition. If you don't have any licenses to assign for a specific edition, enter zero.
- > [!NOTE]
- > You are accountable to determine that the entries that you make in the scope-level managed license experience are accurate and will satisfy your licensing obligations. The license usage information is shown to assist you as you make your license assignments. However, the information shown could be incomplete or inaccurate due to various factors.
- >
- > If the number of licenses that you enter is less than what you are currently using, you'll see a warning message stating _You've entered fewer licenses than you're currently using for Azure Hybrid Benefit in this scope. Your bill for this scope will increase._
+1. Choose the scope and coverage option for the number of qualifying licenses that you want to assign.
+1. Select the date that you want to review the license assignment. For example, you might set it to the agreement renewal or anniversary date. Or you might set it to the subscription renewal date for the source of the licenses.
:::image type="content" source="./media/create-sql-license-assignments/select-assignment-scope-edition.png" alt-text="Screenshot showing scope selection and number of licenses." lightbox="./media/create-sql-license-assignments/select-assignment-scope-edition.png" :::
-1. Optionally, select the **Usage details** tab to view your current Azure Hybrid Benefit usage enabled at the resource scope.
- :::image type="content" source="./media/create-sql-license-assignments/select-assignment-scope-edition-usage.png" alt-text="Screenshot showing Usage tab details." lightbox="./media/create-sql-license-assignments/select-assignment-scope-edition-usage.png" :::
+1. Optionally, select **See usage details** to view your current Azure Hybrid Benefit usage enabled at the resource scope.
+ :::image type="content" source="./media/create-sql-license-assignments/select-assignment-scope-edition-usage.png" alt-text="Screenshot showing the Usage details tab." lightbox="./media/create-sql-license-assignments/select-assignment-scope-edition-usage.png" :::
1. Select **Add**.
-1. Optionally, change the default license assignment name. The review date is automatically set to a year ahead and can't be changed. Its purpose is to remind you to periodically review your license assignments.
+1. Optionally, change the default license assignment name. The review date is automatically set to a year ahead, but you can change it. Its purpose is to remind you to periodically review your license assignments.
:::image type="content" source="./media/create-sql-license-assignments/license-assignment-commit.png" alt-text="Screenshot showing default license assignment name." lightbox="./media/create-sql-license-assignments/license-assignment-commit.png" ::: 1. After you review your choices, select **Next: Review + apply**.
-1. Select the **By selecting &quot;Apply&quot;** attestation option to confirm that you have authority to apply Azure Hybrid Benefit, enough SQL Server licenses, and that you'll maintain the licenses as long as they're assigned.
+1. Select the **By selecting &quot;Apply&quot;** attestation option to confirm that you have authority to apply Azure Hybrid Benefit, enough SQL Server licenses, and that you maintain the licenses as long as they're assigned.
:::image type="content" source="./media/create-sql-license-assignments/confirm-apply-attestation.png" alt-text="Screenshot showing the attestation option." lightbox="./media/create-sql-license-assignments/confirm-apply-attestation.png" :::
-1. Select **Apply** and then select **Yes.**
+1. At the bottom of the page, select **Apply** and then select **Yes.**
1. The list of assigned licenses is shown. :::image type="content" source="./media/create-sql-license-assignments/assigned-licenses.png" alt-text="Screenshot showing the list of assigned licenses." lightbox="./media/create-sql-license-assignments/assigned-licenses.png" :::
Under **Last Day Utilization** or **7-day Utilization**, select a percentage, wh
:::image type="content" source="./media/create-sql-license-assignments/assignment-utilization-view.png" alt-text="Screenshot showing assignment usage details." lightbox="./media/create-sql-license-assignments/assignment-utilization-view.png" :::
-If a license assignment's usage is 100%, then it's likely some resources within the scope are incurring pay-as-you-go charges for SQL Server. We recommend that you use the license assignment experience again to review how much usage is being covered or not by assigned licenses. Afterward, go through the same process as before, including consulting your procurement or software asset management department, confirming that more licenses are available, and assigning the licenses.
+If a license assignment's usage is 100%, then it's likely some resources within the scope are incurring pay-as-you-go charges for SQL Server. We recommend that you use the license assignment experience again to review how much usage is being covered or not by assigned licenses. Afterward, go through the same process as before. That includes consulting your procurement or software asset management department, confirming that more licenses are available, and assigning the licenses.
## Changes after license assignment After you create SQL license assignments, your experience with Azure Hybrid Benefit changes in the Azure portal. -- Any existing Azure Hybrid Benefit elections configured for individual SQL resources no longer apply. They're replaced by the SQL license assignment created at the subscription or account level.
+- Any existing Azure Hybrid Benefit elections configured for individual SQL resources no longer apply. The SQL license assignment created at the subscription or account level replaces them.
- The hybrid benefit option isn't shown as in your SQL resource configuration. - Applications or scripts that configure the hybrid benefit programmatically continue to work, but the setting doesn't have any effect.-- SQL software discounts are applied to the SQL resources in the scope. The scope is based on the number of licenses in the license assignments that are created for the subscription for the account where the resource was created.-- A specific resource configured for hybrid benefit might not get the discount if other resources consume all of the licenses. However, the maximum discount is applied to the scope, based on number of license counts. For more information about how the discounts are applied, see [What is centrally managed Azure Hybrid Benefit?](overview-azure-hybrid-benefit-scope.md).
+- SQL software discounts are applied to the SQL resources in the scope. The scope is based on the quantity in the license assignments created for the subscription in the account where the resource was created.
+- A specific resource configured for hybrid benefit might not get the discount if other resources consume all of the licenses. However, the maximum discount is applied to the scope, based on number of license counts. For more information about how the discounts are applied, see [What is centrally managed Azure Hybrid Benefit for SQL Server?](overview-azure-hybrid-benefit-scope.md)
## Cancel a license assignment
Review your license situation before you cancel your license assignments. When y
## Next steps - Review the [Centrally managed Azure Hybrid Benefit FAQ](faq-azure-hybrid-benefit-scope.yml).-- Learn about how discounts are applied at [What is centrally managed Azure Hybrid Benefit?](overview-azure-hybrid-benefit-scope.md).
+- Learn about how discounts are applied at [What is centrally managed Azure Hybrid Benefit for SQL Server?](overview-azure-hybrid-benefit-scope.md)
cost-management-billing Manage Licenses Centrally https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/scope-level/manage-licenses-centrally.md
description: This article provides a detailed explanation about how Azure applie
keywords: Previously updated : 12/06/2022 Last updated : 04/20/2023
# How Azure applies centrally assigned SQL licenses to hourly usage
-This article provides details about how centrally managing Azure Hybrid Benefit for SQL Server at a scope-level works. The process starts with an administrator assigning licenses to subscription or a billing account scope.
+This article provides details about how centrally managing Azure Hybrid Benefit for SQL Server works. The process starts with an administrator assigning licenses to a subscription or billing account scope.
-Each resource reports its usage once an hour using the appropriate full price or pay-as-you-go meters. Internally in Azure, the Usage Application engine evaluates the available NCLs and applies them for that hour. For a given hour of vCore resource consumption, the pay-as-you-go meters are switched to the corresponding Azure Hybrid Benefit meter with a zero ($0) price if there's enough unutilized NCLs in the selected scope.
+Each resource reports its usage once an hour using the appropriate full price or pay-as-you-go meters. Internally in Azure, the Usage Application engine evaluates the available normalized cores (NCs) and applies them for that hour. For a given hour of vCore resource consumption, the pay-as-you-go meters are switched to the corresponding Azure Hybrid Benefit meter with a zero ($0) price if there are enough unutilized NCs in the selected scope.
## License discount
-The following diagram shows the discounting process when there's enough unutilized NCLs to discount the entire vCore consumption by all the SQL resources for the hour.
+The following diagram shows the discounting process when there's enough unutilized NCs to discount the entire vCore consumption by all the SQL resources for the hour.
-Prices shown in the following image are for example purposes only.
+Prices shown in the following image are only examples.
:::image type="content" source="./media/manage-licenses-centrally/fully-discounted-consumption.svg" alt-text="Diagram showing fully discounted vCore consumption." border="false" lightbox="./media/manage-licenses-centrally/fully-discounted-consumption.svg":::
-When the vCore consumption by the SQL resources in the scope exceeds the number of unutilized NCLs, the excess vCore consumption is billed using the appropriate pay-as-you-go meter. The following diagram shows the discounting process when the vCore consumption exceeds the number of unutilized NCLs.
+When the vCore consumption by the SQL resources in the scope exceeds the number of unutilized NCs, the excess vCore consumption is billed using the appropriate pay-as-you-go meter. The following diagram shows the discounting process when the vCore consumption exceeds the number of unutilized NCs.
-Prices shown in the following image are for example purposes only.
+Prices shown in the following image are only examples.
:::image type="content" source="./media/manage-licenses-centrally/partially-discounted-consumption.svg" alt-text="Diagram showing partially discounted consumption." border="false" lightbox="./media/manage-licenses-centrally/partially-discounted-consumption.svg":::
The Azure SQL resources covered by the assigned Core licenses can vary from hour
The following diagram shows how the assigned SQL Server licenses apply over time to get the maximum Azure Hybrid Benefit discount. ## Next steps
cost-management-billing Overview Azure Hybrid Benefit Scope https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/scope-level/overview-azure-hybrid-benefit-scope.md
Title: What is centrally managed Azure Hybrid Benefit?
+ Title: What is centrally managed Azure Hybrid Benefit for SQL Server?
description: Azure Hybrid Benefit is a licensing benefit that lets you bring your on-premises core-based Windows Server and SQL Server licenses with active Software Assurance (or subscription) to Azure. keywords: Previously updated : 03/21/2023 Last updated : 04/20/2023
-# What is centrally managed Azure Hybrid Benefit?
+# What is centrally managed Azure Hybrid Benefit for SQL Server?
Azure Hybrid Benefit is a licensing benefit that helps you to significantly reduce the costs of running your workloads in the cloud. It works by letting you use your on-premises Software Assurance or subscription-enabled Windows Server and SQL Server licenses on Azure. For more information, see [Azure Hybrid Benefit](https://azure.microsoft.com/pricing/hybrid-benefit/).
To use centrally managed licenses, you must have a specific role assigned to you
- Billing profile contributor If you don't have one of the roles, your organization must assign one to you. For more information about how to become a member of the roles, see [Manage billing roles](../manage/understand-mca-roles.md#manage-billing-roles-in-the-azure-portal).
-At a high level, here's how it works:
+At a high level, here's how centrally managed Azure Hybrid Benefit works:
1. First, confirm that all your SQL Server VMs are visible to you and Azure by enabling automatic registration of the self-installed SQL server images with the IaaS extension. For more information, see [Register multiple SQL VMs in Azure with the SQL IaaS Agent extension](/azure/azure-sql/virtual-machines/windows/sql-agent-extension-manually-register-vms-bulk).
-1. Under **Cost Management + Billing** in the Azure portal, you (the billing administrator) choose the scope and the number of qualifying licenses that you want to assign to cover the resources in the scope.
- :::image type="content" source="./media/overview-azure-hybrid-benefit-scope/set-scope-assign-licenses.png" alt-text="Screenshot showing setting a scope and assigning licenses." lightbox="./media/overview-azure-hybrid-benefit-scope/set-scope-assign-licenses.png" :::
+1. Under **Cost Management + Billing** in the Azure portal, you (the billing administrator) choose the scope and coverage option for the number of qualifying licenses that you want to assign.
+1. Select the date that you want to review the license assignment. For example, you might set it to the agreement renewal or anniversary date, or the subscription renewal date, for the source of the licenses.
-In the previous example, detected usage for 108 normalized core licenses is needed to cover all eligible Azure SQL resources. Detected usage for individual resources was 56 normalized core licenses. For the example, we showed 60 standard core licenses plus 12 Enterprise core licenses (12 * 4 = 48). So 60 + 48 = 108. Normalized core license values are covered in more detail in the following section, [How licenses apply to Azure resources](#how-licenses-apply-to-azure-resources).
-- Each hour as resources in the scope run, Azure automatically assigns the licenses to them and discounts the costs correctly. Different resources can be covered each hour.
+Let's break down the previous example.
+
+- Detected usage shows that 8 SQL Server standard core licenses and 8 enterprise licenses (equaling 40 normalized cores) need to be assigned to keep the existing level of Azure Hybrid Benefit coverage.
+- To expand coverage to all eligible Azure SQL resources, you need to assign 10 standard and 10 enterprise core licenses (equaling 50 normalized cores).
+ - Normalized cores needed = 1 x (SQL Server standard core license count) + 4 x (enterprise core license count).
+ - From the example again: 1 x (10 standard) + 4 x (10 enterprise) = 50 normalized cores.
+
+Normalized core values are covered in more detail in the following section, [How licenses apply to Azure resources](#how-licenses-apply-to-azure-resources).
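To check the conversion used in the example, here's a minimal PowerShell sketch of the normalized-core arithmetic (the license counts are the ones from the example above, not a recommendation):

```powershell
# Normalized cores (NCs): 1 per SQL Server Standard core license, 4 per Enterprise core license.
function Get-NormalizedCores {
    param (
        [int]$StandardCoreLicenses,
        [int]$EnterpriseCoreLicenses
    )
    (1 * $StandardCoreLicenses) + (4 * $EnterpriseCoreLicenses)
}

Get-NormalizedCores -StandardCoreLicenses 8  -EnterpriseCoreLicenses 8    # 40 NCs - keeps existing coverage
Get-NormalizedCores -StandardCoreLicenses 10 -EnterpriseCoreLicenses 10   # 50 NCs - covers all eligible resources
```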
+
+Here's a brief summary of how centralized Azure Hybrid Benefit management works:
+
+- Each hour as resources in the scope run, Azure automatically applies the licenses to them and discounts the costs correctly. Different resources can be covered each hour.
- Any usage above the number of assigned licenses is billed at normal, pay-as-you-go prices. - When you choose to manage the benefit by assigning licenses at a scope level, you can't manage individual resources in the scope any longer. The original resource-level way to enable Azure Hybrid Benefit is still available for SQL Server and is currently the only option for Windows Server. It involves a DevOps role selecting the benefit for each individual resource (like a SQL Database or Windows Server VM) when you create or manage it. Doing so results in the hourly cost of that resource being discounted. For more information, see [Azure Hybrid Benefit for Windows Server](/azure/azure-sql/azure-hybrid-benefit).
-Enabling centralized management of Azure Hybrid Benefit for SQL Server at a subscription or account scope level is currently in preview. It's available to enterprise customers and to customers that buy directly from Azure.com with a Microsoft Customer Agreement. We hope to extend the capability to Windows Server and more customers.
+You can enable centralized management of Azure Hybrid Benefit for SQL Server at a subscription or account scope level. It's available to enterprise customers and to customers that buy directly from Azure.com with a Microsoft Customer Agreement. It's not currently available for Windows Server customers or to customers who work with a Cloud Solution Provider (CSP) partner that manages Azure for them.
## Qualifying SQL Server licenses
Resource-level Azure Hybrid Benefit management can cover all of those points, to
You get the following benefits: - **A simpler, more scalable approach with better control** - The billing administrator directly assigns available licenses to one or more Azure scopes. The original approach, at a large scale, involves coordinating Azure Hybrid Benefit usage across many resources and DevOps owners.-- **An easy-to-use way to optimize costs** - An Administrator can monitor Azure Hybrid Benefit utilization and directly adjust licenses assigned to Azure. For example, an administrator might see an opportunity to save more money by assigning more licenses to Azure. Then they speak with their procurement department to confirm license availability. Finally, they can easily assign the licenses to Azure and start saving.-- **A better method to manage costs during usage spikes** - You can easily scale up the same resource or add more resources during temporary spikes. You don't need to assign more SQL Server licenses (for example, closing periods or increased holiday shopping). For short-lived workload spikes, pay-as-you-go charges for the extra capacity might cost less than acquiring more licenses to use Azure Hybrid Benefit for the capacity. Managing the benefit at a scope, rather than at a resource-level, helps you to decide based on aggregate usage.
+- **An easy-to-use way to optimize costs** - An Administrator can monitor Azure Hybrid Benefit utilization and directly adjust licenses assigned to Azure. Track SQL Server license utilization and optimize costs by proactively identifying additional licenses that could be assigned. It helps you maximize savings and receive notifications when license agreements need to be refreshed. For example, an administrator might see an opportunity to save more money by assigning more licenses to Azure. Then they speak with their procurement department to confirm license availability. Finally, they can easily assign the licenses to Azure and start saving.
+- **A better method to manage costs during usage spikes** - You can easily scale up the same resource or add more resources during temporary spikes. You don't need to assign more SQL Server licenses (for example, closing periods or increased holiday shopping). For short-lived workload spikes, pay-as-you-go charges for the extra capacity might cost less than acquiring more licenses to use Azure Hybrid Benefit for the capacity. Managing the benefit at a scope, rather than at the resource level, helps you decide based on aggregate usage.
- **Clear separation of duties to sustain compliance** - In the resource-level Azure Hybrid Benefit model, resource owners might select Azure Hybrid Benefit when there are no licenses available. Or, they might *not* select the benefit when there *are* licenses available. Scope-level management of Azure Hybrid Benefit solves this situation. The billing admins that manage the benefit centrally are positioned to confirm with procurement and software asset management departments how many licenses are available to assign to Azure. The following diagram illustrates the point. :::image type="content" source="./media/overview-azure-hybrid-benefit-scope/duty-separation.svg" alt-text="Diagram showing the separation of duties." border="false" lightbox="./media/overview-azure-hybrid-benefit-scope/duty-separation.svg":::
Both SQL Server Enterprise (core) and SQL Server Standard (core) licenses with S
One rule to understand: One SQL Server Enterprise Edition license has the same coverage as _four_ SQL Server Standard Edition licenses, across all qualified Azure SQL resource types.
-To explain how it works further, the term _normalized core license_ or NCL is used. In alignment with the rule, one SQL Server Standard core license produces one NCL. One SQL Server Enterprise core license produces four NCLs. For example, if you assign four SQL Server Enterprise core licenses and seven SQL Server Standard core licenses, your total coverage and Azure Hybrid Benefit discounting power is equal to 23 NCLs (4\*4+7\*1).
-
-The following table summarizes how many NCLs you need to fully discount the SQL Server license cost for different resource types. Scope-level management of Azure Hybrid Benefit strictly applies the rules in the product terms, summarized as follows.
+The following table summarizes how many normalized cores (NCs) you need to fully discount the SQL Server license cost for different resource types. Scope-level management of Azure Hybrid Benefit strictly applies the rules in the product terms, summarized as follows.
-| **Azure Data Service** | **Service tier** | **Required number of NCLs** |
+| **Azure Data Service** | **Service tier** | **Required number of NCs** |
| --- | --- | --- |
| SQL Managed Instance or Instance pool | Business Critical | 4 per vCore |
| SQL Managed Instance or Instance pool | General Purpose | 1 per vCore |
The following table summarizes how many NCLs you need to fully discount the SQL
¹ *Azure Hybrid Benefit isn't available in the serverless compute tier of Azure SQL Database.*
-² *Subject to a minimum of four vCores per Virtual Machine, which translates to four NCL if Standard edition is used, and 16 NCL if Enterprise edition is used.*
+² *Subject to a minimum of four vCores per Virtual Machine, which translates to four NCs if Standard edition is used, and 16 NCs if Enterprise edition is used.*
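As a quick worked example of the rates in the table (illustrative only, using hypothetical resource names and only the service tiers shown above), here's how the required NCs scale with vCores:

```powershell
# Illustrative only: required NCs = vCores x NCs-per-vCore for the resource's service tier.
$ratePerVCore = @{
    'SQL MI - Business Critical' = 4
    'SQL MI - General Purpose'   = 1
}

$resources = @(
    [pscustomobject]@{ Name = 'mi-bc-08'; Tier = 'SQL MI - Business Critical'; VCores = 8 }
    [pscustomobject]@{ Name = 'mi-gp-16'; Tier = 'SQL MI - General Purpose';   VCores = 16 }
)

$resources | ForEach-Object {
    [pscustomobject]@{
        Resource    = $_.Name
        Tier        = $_.Tier
        VCores      = $_.VCores
        RequiredNCs = $_.VCores * $ratePerVCore[$_.Tier]   # 32 for mi-bc-08, 16 for mi-gp-16
    }
} | Format-Table
```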
## Ongoing scope-level management
-We recommend that you establish a proactive rhythm when centrally managing Azure Hybrid Benefit, similar to the following tasks and order.
+We recommend that you establish a proactive rhythm when centrally managing Azure Hybrid Benefit, like the following tasks and order.
- Engage within your organization to understand how many Azure SQL resources and vCores will be used during the next month, quarter, or year.-- Work with your procurement and software asset management departments to determine if enough SQL core licenses with Software Assurance are available. The benefit allows licenses supporting migrating workloads to be used both on-premises and in Azure for up to 180 days. So, those licenses can be counted as available.
+- Work with your procurement and software asset management departments to determine if enough SQL core licenses with Software Assurance (or subscription core licenses) are available. The benefit allows licenses supporting migrating workloads to be used both on-premises and in Azure for up to 180 days. So, those licenses can be counted as available.
- Assign available licenses to cover your current usage _and_ your expected usage growth during the upcoming period. - Monitor assigned license utilization. - If it approaches 100%, then consult others in your organization to understand expected usage. Confirm license availability then assign more licenses to the scope.
- - If usage is 100%, you might be using resources beyond the number of licenses assigned. Return to the [Create license assignment experience](create-sql-license-assignments.md) and review the usage that Azure shows. Then assign more available licenses to the scope for more coverage.
+ - If usage is 100%, you might be using resources beyond the number of licenses assigned. Return to the [Add Azure Hybrid Benefit experience](create-sql-license-assignments.md) and review the usage. Then assign more available licenses to the scope for more coverage.
- Repeat the proactive process periodically. ## Next steps
cost-management-billing Sql Iaas Extension Registration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/scope-level/sql-iaas-extension-registration.md
+
+ Title: SQL IaaS extension registration options for Cost Management administrators
+description: This article explains the SQL IaaS extension registration options available to Cost Management administrators.
+keywords:
++ Last updated : 04/20/2023++++++
+# SQL IaaS extension registration options for Cost Management administrators
+
+This article helps Cost Management administrators understand and address the SQL IaaS registration requirement before they use centrally managed Azure Hybrid Benefit for SQL Server. The article explains the steps that you, or someone in your organization, follow to register SQL Server with the SQL IaaS Agent extension. Here's the order of the steps. We cover each step in more detail later in the article.
+
+1. Determine whether you already have the required Azure permissions. You need them to check whether registration is already done.
+1. If you don't have the required permissions, find someone in your organization who has them to help you.
+1. Complete the check to verify whether registration is already done for your subscriptions. If registration is done, you can go ahead and use centrally managed Azure Hybrid Benefit.
+1. If registration isn't complete, then you or the person assisting you needs to choose one of the options to complete the registration.
+
+## Before you begin
+
+Normally, you can use the Azure portal to view Azure VMs that are running SQL Server on the [SQL virtual machines page](https://portal.azure.com/#blade/HubsExtension/BrowseResource/resourceType/Microsoft.SqlVirtualMachine%2FSqlVirtualMachines). However, there are some situations where Azure can't detect that SQL Server is running in a virtual machine. The most common situation is when SQL Server VMs are created using custom images that run SQL Server 2014 or earlier. Or if the [SQL CEIP service](/sql/sql-server/usage-and-diagnostic-data-configuration-for-sql-server) is disabled or blocked.
+
+When the Azure portal doesn't detect SQL Server running on your VMs, it's a problem because you can't fully manage Azure SQL. In this situation, you can't verify that you have enough licenses to cover your SQL Server usage. Microsoft provides a way to resolve this problem with _SQL IaaS Agent extension registration_. At a high level, SQL IaaS Agent extension registration works in the following manner:
+
+1. You give Microsoft authorization to detect SQL VMs that aren't detected by default.
+2. The registration process runs at a subscription level or overall customer level. When registration completes, all current and future SQL VMs in the registration scope become visible.
+
+You must complete SQL IaaS Agent extension registration before you can use [centrally managed Azure Hybrid Benefit for SQL Server](create-sql-license-assignments.md). Otherwise, you can't use Azure to manage all your SQL Servers running in Azure.
+
+>[!NOTE]
+> Avoid using centrally managed Azure Hybrid Benefit for SQL Server before you complete SQL IaaS Agent extension registration. If you use centrally managed Azure Hybrid Benefit before you complete SQL IaaS Agent extension registration, new SQL VMs may not be covered by the number of licenses you have assigned. This situation could lead to incorrect license assignments and might result in unnecessary pay-as-you-go charges for SQL Server licenses. Complete SQL IaaS Agent extension registration before you use centrally managed Azure Hybrid Benefit features.
+
+## Scenarios and options
+
+The following sections help Cost Management users understand their options and the detailed steps for how to complete SQL IaaS Agent extension registration.
+
+## Determine your permissions
+
+To view or register your virtual machines, you must use client credentials that are assigned any of the following Azure roles:
+
+- **Virtual Machine contributor**
+- **Contributor**
+- **Owner**
+
+The permissions are required to perform the following procedure.
+
+## Inadequate permission
+
+If you don't have the required permission, get assistance from someone who has one of the required roles.
+
+## Complete the registration check
+
+1. Navigate to the [SQL virtual machines](https://portal.azure.com/#blade/HubsExtension/BrowseResource/resourceType/Microsoft.SqlVirtualMachine%2FSqlVirtualMachines) page in the Azure portal.
+2. Select **Automatic SQL Server VM registration** to open the **Automatic registration** page.
+3. If automatic registration is already enabled, a message appears at the bottom of the page indicating `Automatic registration has already been enabled for subscription <SubscriptionName>`.
+4. Repeat this process for any other subscriptions that you want to manage with centralized Azure Hybrid Benefit.
+
+Alternatively, you can run a PowerShell script to determine if there are any unregistered SQL Servers in your environment. You can download the script from the [azure-hybrid-benefit](https://github.com/microsoft/sql-server-samples/tree/master/samples/manage/azure-hybrid-benefit) page on GitHub.
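If you prefer to script the check yourself, here's a minimal sketch that assumes the Az PowerShell module is installed and that automatic registration is surfaced as the `BulkRegistration` feature of the `Microsoft.SqlVirtualMachine` resource provider; verify the feature name against the SQL IaaS Agent extension automatic registration documentation before relying on it:

```powershell
# Minimal sketch: report whether SQL IaaS Agent extension automatic registration is enabled
# for each subscription you can access. Assumes the Az module and the 'BulkRegistration'
# feature name used by automatic registration.
Connect-AzAccount

foreach ($sub in Get-AzSubscription) {
    Set-AzContext -Subscription $sub.Id | Out-Null
    $feature = Get-AzProviderFeature -ProviderNamespace 'Microsoft.SqlVirtualMachine' -FeatureName 'BulkRegistration'
    "{0}: automatic registration state = {1}" -f $sub.Name, $feature.RegistrationState
}
```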
+
+## Options to complete registration
+
+If you determine that you have unregistered SQL Server VMs, use one of the two following methods to complete the registration:
+
+- [Register with the help of your Microsoft account team](#register-with-the-help-of-your-microsoft-account-team)
+- [Turn on SQL IaaS Agent extension automatic registration](#turn-on-sql-iaas-agent-extension-automatic-registration)
+
+### Register with the help of your Microsoft account team
+
+The most comprehensive way to register is at the overall customer level. For both of the following situations, contact your Microsoft account team.
+
+- Your Microsoft account team can help you add a small amendment that accomplishes the authorization in an overarching way if:
+ - You have an Enterprise Agreement that's renewing soon
+ - You're a Microsoft Customer Agreement Enterprise customer
+- If you have an Enterprise Agreement that isn't up for renewal, there's another option. A leader in your organization can use an email template to provide Microsoft with authorization.
+ >[!NOTE]
+ > This option is time-limited, so if you want to use it, you should investigate it soon.
+
+### Turn on SQL IaaS Agent extension automatic registration
+
+You can use the self-serve registration capability, described at [Automatic registration with SQL IaaS Agent extension](/azure/azure-sql/virtual-machines/windows/sql-agent-extension-automatic-registration-all-vms).
+
+Because of the way that roles and permissions work in Azure, including segregation of duties, you may not be able to access or complete the extension registration process yourself. If you're in that situation, you need to find the subscription contributors for the scope you want to register. Then, get their help to complete the process.
+
+You can enable automatic registration in the Azure portal for a single subscription, or for multiple subscriptions using the PowerShell script mentioned previously. We recommend that you complete the registration process for all of your subscriptions so you can view all of your Azure SQL infrastructure.
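For the multi-subscription case, a minimal sketch along the same lines (same assumptions as the earlier check: Az module, the `BulkRegistration` feature name, and placeholder subscription IDs) could look like this:

```powershell
# Minimal sketch: enable automatic registration for several subscriptions.
# Requires contributor-level rights on each subscription; the IDs below are placeholders.
$subscriptionIds = @(
    '00000000-0000-0000-0000-000000000000'
    '11111111-1111-1111-1111-111111111111'
)

foreach ($id in $subscriptionIds) {
    Set-AzContext -Subscription $id | Out-Null
    Register-AzResourceProvider -ProviderNamespace 'Microsoft.SqlVirtualMachine' | Out-Null
    Register-AzProviderFeature  -ProviderNamespace 'Microsoft.SqlVirtualMachine' -FeatureName 'BulkRegistration'
}
```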
+
+The following [Managing Azure VMs with the SQL IaaS Agent Extension](https://www.youtube.com/watch?v=HqU0HH1vODg) video shows how the process works.
+
+>[!VIDEO https://www.youtube.com/embed/HqU0HH1vODg]
+
+## Registration duration and verification
+
+After you complete either of the preceding automatic registration options, it can take up to 48 hours to detect all your SQL Servers. When complete, all your SQL Server virtual machines should be visible on the [SQL virtual machines](https://portal.azure.com/#blade/HubsExtension/BrowseResource/resourceType/Microsoft.SqlVirtualMachine%2FSqlVirtualMachines) page in the Azure portal.
+
+## When registration completes
+
+After you complete the SQL IaaS extension registration, we recommend that you use centrally managed Azure Hybrid Benefit. If you're unsure whether registration is finished, you can use the steps in [Complete the registration check](#complete-the-registration-check).
+
+## Next steps
+
+When you're ready, [Create SQL Server license assignments for Azure Hybrid Benefit](create-sql-license-assignments.md). Centrally managed Azure Hybrid Benefit is designed to make it easy to monitor your Azure SQL usage and optimize costs.
cost-management-billing Sql Server Hadr Licenses https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/scope-level/sql-server-hadr-licenses.md
description: This article explains how the SQL Server HADR Software Assurance be
keywords: Previously updated : 12/06/2022 Last updated : 04/20/2022
cost-management-billing Transition Existing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/scope-level/transition-existing.md
description: This article describes the changes and several transition scenarios
keywords: Previously updated : 12/06/2022 Last updated : 04/20/2023
When you assign licenses to a subscription using the new experience, changes are
When you enroll in the scope-level management of Azure Hybrid Benefit experience, you'll see your current Azure Hybrid Benefit usage that's enabled for individual resources. For more information on the overall experience, see [Create SQL Server license assignments for Azure Hybrid Benefit](create-sql-license-assignments.md). If you're a subscription contributor and you don't have the billing administrator role required, you can analyze the usage of different types of SQL Server licenses in Azure by using a PowerShell script. The script generates a snapshot of the usage across multiple subscriptions or the entire account. For details and examples of using the script, see the [sql-license-usage PowerShell script](https://github.com/anosov1960/sql-server-samples/tree/master/samples/manage/azure-hybrid-benefit) example script. Once you've run the script, identify and engage your billing administrator about the opportunity to shift Azure Hybrid Benefit management to the subscription or billing account scope level. > [!NOTE]
-> The script includes support for normalized core licenses (NCL).
+> The script includes support for normalized cores (NC).
## HADR benefit for SQL Server VMs
cost-management-billing Tutorial Azure Hybrid Benefits Sql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/scope-level/tutorial-azure-hybrid-benefits-sql.md
Title: Tutorial - Optimize centrally managed Azure Hybrid Benefit for SQL Server
description: This tutorial guides you through proactively assigning SQL Server licenses in Azure to manage and optimize Azure Hybrid Benefit. Previously updated : 12/06/2022 Last updated : 04/20/2022
Before you begin, ensure that you:
Have read and understand the [What is centrally managed Azure Hybrid Benefit?](overview-azure-hybrid-benefit-scope.md) article. The article explains the types of SQL Server licenses that qualify for Azure Hybrid Benefit. It also explains how to enable the benefit for the subscription or billing account scopes you select. > [!NOTE]
-> Managing Azure Hybrid Benefit centrally at a scope-level is currently in public preview and limited to enterprise customers and customers buying directly from Azure.com with a Microsoft Customer Agreement.
+> Managing Azure Hybrid Benefit centrally at a scope-level is limited to enterprise customers and customers buying directly from Azure.com with a Microsoft Customer Agreement.
Verify that your self-installed virtual machines running SQL Server in Azure are registered before you start to use the new experience. Doing so ensures that Azure resources that are running SQL Server are visible to you and Azure. For more information about registering SQL VMs in Azure, see [Register SQL Server VM with SQL IaaS Agent Extension](/azure/azure-sql/virtual-machines/windows/sql-agent-extension-manually-register-single-vm) and [Register multiple SQL VMs in Azure with the SQL IaaS Agent extension](/azure/azure-sql/virtual-machines/windows/sql-agent-extension-manually-register-vms-bulk).
After you've read the preceding instructions in the article, you understand that
Then, do the following steps. 1. Use the preceding instructions to make sure self-installed SQL VMs are registered. They include talking to subscription owners to complete the registration for the subscriptions where you don't have sufficient permissions.
-1. You review Azure resource usage data from recent months and you talk to others in Contoso. You determine that 2000 SQL Server Enterprise Edition and 750 SQL Server Standard Edition core licenses, or 8750 normalized core licenses, are needed to cover expected Azure SQL usage for the next year. Expected usage also includes migrating workloads (1500 SQL Server Enterprise Edition + 750 SQL Server Standard Edition = 6750 normalized) and net new Azure SQL workloads (another 500 SQL Server Enterprise Edition or 2000 normalized core licenses).
+1. You review Azure resource usage data from recent months and you talk to others in Contoso. You determine that 2000 SQL Server Enterprise Edition and 750 SQL Server Standard Edition core licenses, or 8750 normalized cores, are needed to cover expected Azure SQL usage for the next year. Expected usage also includes migrating workloads (1500 SQL Server Enterprise Edition + 750 SQL Server Standard Edition = 6750 normalized) and net new Azure SQL workloads (another 500 SQL Server Enterprise Edition or 2000 normalized cores).
1. Next, confirm with your procurement team that the needed licenses are already available or will soon be purchased. The confirmation ensures that the licenses are available to assign to Azure. - Licenses you have in use on premises can be considered available to assign to Azure if the associated workloads are being migrated to Azure. As mentioned previously, Azure Hybrid Benefit allows dual use for up to 180 days.
- - You determine that there are 1800 SQL Server Enterprise Edition licenses and 2000 SQL Server Standard Edition licenses available to assign to Azure. The available licenses equal 9200 normalized core licenses. That's a little more than the 8750 needed (2000 x 4 + 750 = 8750).
-1. Then, you assign the 1800 SQL Server Enterprise Edition and 2000 SQL Server Standard Edition to Azure. That action results in 9200 normalized core licenses that the system can apply to Azure SQL resources as they run each hour. Assigning more licenses than are required now provides a buffer if usage grows faster than you expect.
+ - You determine that there are 1800 SQL Server Enterprise Edition licenses and 2000 SQL Server Standard Edition licenses available to assign to Azure. The available licenses equal 9200 normalized cores. That's a little more than the 8750 needed (2000 x 4 + 750 = 8750).
+1. Then, you assign the 1800 SQL Server Enterprise Edition and 2000 SQL Server Standard Edition to Azure. That action results in 9200 normalized cores that the system can apply to Azure SQL resources as they run each hour. Assigning more licenses than are required now provides a buffer if usage grows faster than you expect.
Afterward, you monitor assigned license usage periodically, ideally monthly. After 10 months, usage approaches 95%, indicating faster Azure SQL usage growth than you expected. You talk to your procurement team to get more licenses so that you can assign them.
data-factory Connector Troubleshoot Synapse Sql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-troubleshoot-synapse-sql.md
This article provides suggestions to troubleshoot common problems with the Azure
## Error code: SqlDeniedPublicAccess -- **Message**: `Cannot connect to SQL Database: '%server;', Database: '%database;', Reason: Connection was denied since Deny Public Network Access is set to Yes. To connect to this server, 1. If you persist public network access disabled, please use Managed Vritual Network IR and create private endpoint. https://docs.microsoft.com/en-us/azure/data-factory/managed-virtual-network-private-endpoint; 2. Otherwise you can enable public network access, set "Public network access" option to "Selected networks" on Auzre SQL Networking setting.`
+- **Message**: `Cannot connect to SQL Database: '%server;', Database: '%database;', Reason: Connection was denied since Deny Public Network Access is set to Yes. To connect to this server, 1. If you persist public network access disabled, please use Managed Vritual Network IR and create private endpoint. https://docs.microsoft.com/en-us/azure/data-factory/managed-virtual-network-private-endpoint; 2. Otherwise you can enable public network access, set "Public network access" option to "Selected networks" on Azure SQL Networking setting.`
- **Causes**: Azure SQL Database is set to deny public network access. It requires you to use a managed virtual network and create a private endpoint for access.
data-factory Data Flow Flatten https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/data-flow-flatten.md
Previously updated : 08/03/2022 Last updated : 04/21/2023 # Flatten transformation in mapping data flow
Use the flatten transformation to take array values inside hierarchical structur
## Configuration
-The flatten transformation contains the following configuration settings
+The flatten transformation contains the following configuration settings.
### Unroll by
-Select an array to unroll. The output data will have one row per item in each array. If the unroll by array in the input row is null or empty, there will be one output row with unrolled values as null.
+Select an array to unroll. The output data will have one row per item in each array. If the unroll by array in the input row is null or empty, there will be one output row with unrolled values as null. You have the option to unroll more than one array per Flatten transformation. Select the plus (+) button to include multiple arrays in a single Flatten transformation. You can use ADF data flow meta functions here, including ```name``` and ```type```, and use pattern matching to unroll arrays that match those criteria. When including multiple arrays in a single Flatten transformation, your results will be a Cartesian product of all of the possible array values.
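The Cartesian behavior is easiest to see with a tiny example. The following PowerShell sketch isn't data flow syntax; it only illustrates, with made-up column names, how unrolling two array columns from one input row yields one output row per combination:

```powershell
# Illustrative only: unrolling two array columns produces one row per combination of values.
$inputRow = [pscustomobject]@{
    OrderId = 1001
    Items   = @('keyboard', 'mouse')    # hypothetical array column
    Tags    = @('priority', 'gift')     # hypothetical array column
}

foreach ($item in $inputRow.Items) {
    foreach ($tag in $inputRow.Tags) {
        [pscustomobject]@{ OrderId = $inputRow.OrderId; Item = $item; Tag = $tag }
    }
}
# 2 items x 2 tags = 4 output rows
```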
+ ### Unroll root
Optional setting that tells the service to handle all subcolumns of a complex ob
### Hierarchy level
-Choose the level of the hierarchy that you would like expand.
+Choose the level of the hierarchy that you would like to expand.
### Name matches (regex)
data-factory Sap Change Data Capture Introduction Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/sap-change-data-capture-introduction-architecture.md
The Azure side includes the Data Factory mapping data flow that can transform an
:::image type="content" source="media/sap-change-data-capture-solution/sap-cdc-architecture-diagram.png" border="false" alt-text="Diagram of the architecture of the SAP CDC solution.":::
-To get started, create a Data Factory SAP CDC linked service, an SAP CDC source dataset, and a pipeline with a mapping data flow activity in which you use the SAP CDC source dataset. To extract the data from SAP, a self-hosted integration runtime is required that you install on an on-premises computer or on a virtual machine (VM). An on-premises computer has a line of sight to your SAP source systems and to your SLT server. The Data Factory data flow activity runs on a serverless Azure Databricks or Apache Spark cluster, or on an Azure integration runtime.
+To get started, create a Data Factory SAP CDC linked service, an SAP CDC source dataset, and a pipeline with a mapping data flow activity in which you use the SAP CDC source dataset. To extract the data from SAP, a self-hosted integration runtime is required that you install on an on-premises computer or on a virtual machine (VM). An on-premises computer has a line of sight to your SAP source systems and to your SLT server. The Data Factory data flow activity runs on a serverless Azure Databricks or Apache Spark cluster, or on an Azure integration runtime. Staging storage must be configured in the data flow activity so that your self-hosted integration runtime works seamlessly with the data flow integration runtime.
The SAP CDC connector uses the SAP ODP framework to extract various data source types, including:
data-factory Sap Change Data Capture Shir Preparation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/sap-change-data-capture-shir-preparation.md
In Azure Data Factory Studio, [create and configure a self-hosted integration ru
The more CPU cores you have on the computer running the self-hosted integration runtime, the higher your data extraction throughput is. For example, an internal test achieved a higher than 12-MB/s throughput when running parallel extractions on a self-hosted integration runtime computer that has 16 CPU cores.
+> [!NOTE]
+> If you want to use a shared self-hosted integration runtime from another data factory, make sure that your data factory is in the same region as the other data factory. In addition, your data flow integration runtime needs to be configured as "Auto Resolve" or to the same region as your data factory.
+ ## Download and install the SAP .NET connector Download the latest [64-bit SAP .NET Connector (SAP NCo 3.0)](https://support.sap.com/en/product/connectors/msnet.html) and install it on the computer running the self-hosted integration runtime. During installation, in the **Optional setup steps** dialog, select **Install assemblies to GAC**, and then select **Next**.
data-factory Tutorial Managed Virtual Network Sql Managed Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/tutorial-managed-virtual-network-sql-managed-instance.md
access SQL Managed Instance from Managed VNET using Private Endpoint.
:::image type="content" source="./media/tutorial-managed-virtual-network/sql-mi-access-model.png" alt-text="Screenshot that shows the access model of SQL MI." lightbox="./media/tutorial-managed-virtual-network/sql-mi-access-model-expanded.png":::
+> [!NOTE]
+> When using this solution to connect to Azure SQL Database Managed Instance, the **"Redirect"** connection policy isn't supported; you need to switch to **"Proxy"** mode.
+++ ## Prerequisites * **Azure subscription**. If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/) before you begin.
data-manager-for-agri How To Set Up Sensors Partner https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-manager-for-agri/how-to-set-up-sensors-partner.md
Hence to enable authentication & authorization, partners will need to do the fol
Partners can access the APIs in customer tenant using the multi-tenant Azure Active Directory App, registered in Azure Active Directory. App registration is done on the Azure portal so the Microsoft identity platform can provide authentication and authorization services for your application which in turn accesses Data Manager for Agriculture.
-Follow the steps provided in <a href="https://docs.microsoft.com/azure/active-directory/develop/quickstart-register-app#register-an-application" target="_blank">App Registration</a> **until the Step 8** to generate the following information:
+Follow the steps provided in [App Registration](/azure/active-directory/develop/quickstart-register-app#register-an-application) **until the Step 8** to generate the following information:
1. **Application (client) ID** 2. **Directory (tenant) ID**
Copy and store all three values as you would need them for generating access tok
The Application (client) ID created is like the User ID of the application, and now you need to create its corresponding Application password (client secret) for the application to identify itself.
-Follow the steps provided in <a href="https://docs.microsoft.com/azure/active-directory/develop/quickstart-register-app#add-a-client-secret" target="_blank">Add a client secret</a> to generate **Client Secret** and copy the client secret generated.
+Follow the steps provided in [Add a client secret](/azure/active-directory/develop/quickstart-register-app#add-a-client-secret) to generate **Client Secret** and copy the client secret generated.
### Registration
Based on the sensors that customers use and their respective sensor partner's
Customers who choose to onboard to a specific partner will know the app ID of that specific partner. Now using the app ID customer will need to do the following things in sequence.
-1. **Consent** – Since the partner's app resides in a different tenant and the customer wants the partner to access certain APIs in their Data Manager for Agriculture instance, the customers are required to call a specific endpoint (https://login.microsoft.com/common/adminconsent/clientId=[client_id]) and replace the [client_id] with the partners' app ID. This enables the customers' Azure Active Directory to recognize this APP ID whenever they use it for role assignment.
+1. **Consent** – Since the partner's app resides in a different tenant and the customer wants the partner to access certain APIs in their Data Manager for Agriculture instance, the customers are required to call a specific endpoint `https://login.microsoft.com/common/adminconsent/clientId=[client_id]` and replace the [client_id] with the partners' app ID. This enables the customers' Azure Active Directory to recognize this APP ID whenever they use it for role assignment.
2. **Identity Access Management (IAM)** – As part of identity access management, customers will create a new role assignment for the above app ID that was granted consent. Data Manager for Agriculture will create a new role called Sensor Partner (in addition to the existing Admin, Contributor, Reader roles). Customers will choose the sensor partner role and add the partner app ID and provide access.
databox-online Azure Stack Edge Gpu Clustering Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-clustering-overview.md
Previously updated : 02/22/2022 Last updated : 04/18/2023
Before you configure clustering on your device, you must cable the devices as pe
1. Order two independent Azure Stack Edge devices. For more information, see [Order an Azure Stack Edge device](azure-stack-edge-gpu-deploy-prep.md#create-a-new-resource). 1. Cable each node independently as you would for a single node device. Based on the workloads that you intend to deploy, cross connect the network interfaces on these devices via cables, and with or without switches. For detailed instructions, see [Cable your two-node cluster device](azure-stack-edge-gpu-deploy-install.md#cable-the-device). 1. Start cluster creation on the first node. Choose the network topology that conforms to the cabling across the two nodes. The chosen topology would dictate the storage and clustering traffic between the nodes. See detailed steps in [Configure network and web proxy on your device](azure-stack-edge-gpu-deploy-configure-network-compute-web-proxy.md).
-1. Prepare the second node. Configure the network on the second node the same way you configured it on the first node. Get the authentication token on this node.
+1. Prepare the second node. Configure the network on the second node the same way you configured it on the first node. Ensure that the port settings match for the same port name on each appliance. Get the authentication token on this node.
1. Use the authentication token from the prepared node and join this node to the first node to form a cluster. 1. Set up a cloud witness using an Azure Storage account or a local witness on an SMB fileshare. 1. Assign a virtual IP to provide an endpoint for Azure Consistent Services or when using NFS.
Before you configure clustering on your device, you must cable the devices as pe
1. Order two independent Azure Stack Edge devices. For more information, see [Order an Azure Stack Edge device](azure-stack-edge-pro-2-deploy-prep.md#create-a-new-resource). 1. Cable each node independently as you would for a single node device. Based on the workloads that you intend to deploy, cross connect the network interfaces on these devices via cables, and with or without switches. For detailed instructions, see [Cable your two-node cluster device](azure-stack-edge-pro-2-deploy-install.md#cable-the-device). 1. Start cluster creation on the first node. Choose the network topology that conforms to the cabling across the two nodes. The chosen topology would dictate the storage and clustering traffic between the nodes. See detailed steps in [Configure network and web proxy on your device](azure-stack-edge-pro-2-deploy-configure-network-compute-web-proxy.md).
-1. Prepare the second node. Configure the network on the second node the same way you configured it on the first node. Get the authentication token on this node.
+1. Prepare the second node. Configure the network on the second node the same way you configured it on the first node. Ensure that the port settings for the same port name match on each appliance. Get the authentication token on this node.
1. Use the authentication token from the prepared node and join this node to the first node to form a cluster. 1. Set up a cloud witness using an Azure Storage account or a local witness on an SMB fileshare. 1. Assign a virtual IP to provide an endpoint for Azure Consistent Services or when using NFS.
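For the cloud witness step in the procedures above, the witness only needs a reachable general-purpose Azure Storage account. The Azure CLI sketch below is illustrative, not the documented deployment procedure; all names are placeholders, and the witness itself is then configured on the device as described in the linked deployment articles.

```azurecli
# Illustrative sketch: create a storage account that can back the cloud witness
# for the two-node cluster. Account name, resource group, and region are placeholders.
az storage account create \
  --name "<witnessstorageaccount>" \
  --resource-group "<resource-group>" \
  --location "<region>" \
  --sku Standard_LRS \
  --kind StorageV2
```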
defender-for-cloud Alerts Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/alerts-reference.md
Title: Reference table for all security alerts in Microsoft Defender for Cloud description: This article lists the security alerts visible in Microsoft Defender for Cloud Previously updated : 04/18/2023 Last updated : 04/20/2023 # Security alerts - a reference guide
Microsoft Defender for Containers provides security alerts on the cluster level
| **PowerZure exploitation toolkit used to execute a Runbook in your subscription**<br>(ARM_PowerZure.StartRunbook) | PowerZure exploitation toolkit was used to execute a Runbook. This was detected by analyzing Azure Resource Manager operations in your subscription. | - | High | | **PowerZure exploitation toolkit used to extract Runbooks content**<br>(ARM_PowerZure.AzureRunbookContent) | PowerZure exploitation toolkit was used to extract Runbook content. This was detected by analyzing Azure Resource Manager operations in your subscription. | Collection | High | | **PREVIEW - Azurite toolkit run detected**<br>(ARM_Azurite) | A known cloud-environment reconnaissance toolkit run has been detected in your environment. The tool [Azurite](https://github.com/mwrlabs/Azurite) can be used by an attacker (or penetration tester) to map your subscriptions' resources and identify insecure configurations. | Collection | High |
+| **PREVIEW - Suspicious creation of compute resources detected**<br>(ARM_SuspiciousComputeCreation) | Microsoft Defender for Resource Manager identified a suspicious creation of compute resources in your subscription utilizing Virtual Machines/Azure Scale Set. The identified operations are designed to allow administrators to efficiently manage their environments by deploying new resources when needed. While this activity may be legitimate, a threat actor might utilize such operations to conduct crypto mining.<br> The activity is deemed suspicious as the compute resources scale is higher than previously observed in the subscription. <br> This can indicate that the principal is compromised and is being used with malicious intent. | Impact | Medium |
| **PREVIEW - Suspicious key vault recovery detected**<br>(Arm_Suspicious_Vault_Recovering) | Microsoft Defender for Resource Manager detected a suspicious recovery operation for a soft-deleted key vault resource.<br> The user recovering the resource is different from the user that deleted it. This is highly suspicious because the user rarely invokes such an operation. In addition, the user logged on without multi-factor authentication (MFA).<br> This might indicate that the user is compromised and is attempting to discover secrets and keys to gain access to sensitive resources, or to perform lateral movement across your network. | Lateral movement | Medium/high | | **PREVIEW - Suspicious management session using an inactive account detected**<br>(ARM_UnusedAccountPersistence) | Subscription activity logs analysis has detected suspicious behavior. A principal not in use for a long period of time is now performing actions that can secure persistence for an attacker. | Persistence | Medium | | **PREVIEW - Suspicious invocation of a high-risk 'Credential Access' operation by a service principal detected**<br>(ARM_AnomalousServiceOperation.CredentialAccess) | Microsoft Defender for Resource Manager identified a suspicious invocation of a high-risk operation in your subscription which might indicate an attempt to access credentials. The identified operations are designed to allow administrators to efficiently manage their environments. While this activity may be legitimate, a threat actor might utilize such operations to access restricted credentials and compromise resources in your environment. This can indicate that the service principal is compromised and is being used with malicious intent. | Credential access | Medium |
defender-for-cloud Attack Path Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/attack-path-reference.md
Last updated 04/13/2023
# Reference list of attack paths and cloud security graph components
-This article lists the attack paths, connections, and insights used in Defender for Cloud Security Posture Management (CSPM).
+This article lists the attack paths, connections, and insights used in Defender Cloud Security Posture Management (CSPM).
- You need to [enable Defender CSPM](enable-enhanced-security.md#enable-defender-plans-to-get-the-enhanced-security-features) to view attack paths. - What you see in your environment depends on the resources you're protecting, and your customized configuration.
Learn more about [the cloud security graph, attack path analysis, and the cloud
Prerequisite: For a list of prerequisites, see the [Availability table](how-to-manage-attack-path.md#availability) for attack paths.
-| Attack Path Display Name | Attack Path Description |
+| Attack path display name | Attack path description |
|--|--| | Internet exposed VM has high severity vulnerabilities | A virtual machine is reachable from the internet and has high severity vulnerabilities. | | Internet exposed VM has high severity vulnerabilities and high permission to a subscription | A virtual machine is reachable from the internet, has high severity vulnerabilities, and identity and permission to a subscription. |
Prerequisite: For a list of prerequisites, see the [Availability table](how-to-m
| VM has high severity vulnerabilities and read permission to a key vault | A virtual machine has high severity vulnerabilities and read permission to a key vault. | | VM has high severity vulnerabilities and read permission to a data store | A virtual machine has high severity vulnerabilities and read permission to a data store. |
-### AWS Instances
+### AWS EC2 instances
Prerequisite: [Enable agentless scanning](enable-vulnerability-assessment-agentless.md).
-| Attack Path Display Name | Attack Path Description |
+| Attack path display name | Attack path description |
|--|--| | Internet exposed EC2 instance has high severity vulnerabilities and high permission to an account | An AWS EC2 instance is reachable from the internet, has high severity vulnerabilities and has permission to an account. | | Internet exposed EC2 instance has high severity vulnerabilities and read permission to a DB | An AWS EC2 instance is reachable from the internet, has high severity vulnerabilities and has permission to a database. |
Prerequisite: [Enable agentless scanning](enable-vulnerability-assessment-agentl
### Azure data
-| Attack Path Display Name | Attack Path Description |
+| Attack path display name | Attack path description |
|--|--| | Internet exposed SQL on VM has a user account with commonly used username and allows code execution on the VM (Preview) | SQL on VM is reachable from the internet, has a local user account with a commonly used username (which is prone to brute force attacks), and has vulnerabilities allowing code execution and lateral movement to the underlying VM. <br/> Prerequisite: [Enable Microsoft Defender for SQL servers on machines](defender-for-sql-usage.md) | | Internet exposed SQL on VM has a user account with commonly used username and known vulnerabilities (Preview) | SQL on VM is reachable from the internet, has a local user account with a commonly used username (which is prone to brute force attacks), and has known vulnerabilities (CVEs). <br/> Prerequisite: [Enable Microsoft Defender for SQL servers on machines](defender-for-sql-usage.md) |
Prerequisite: [Enable agentless scanning](enable-vulnerability-assessment-agentl
### AWS data
-| Attack Path Display Name | Attack Path Description |
+| Attack path display name | Attack path description |
|--|--| | Internet exposed AWS S3 Bucket with sensitive data is publicly accessible (Preview) | An S3 bucket with sensitive data is reachable from the internet and allows public read access without authorization required. <br/> Prerequisite: [Enable data-aware security for S3 buckets in Defender CSPM](data-security-posture-enable.md), or [leverage Microsoft Purview Data Catalog to protect sensitive data](information-protection.md). | | Internet exposed SQL on EC2 instance has a user account with commonly used username and allows code execution on the underlying compute (Preview) | Internet exposed SQL on EC2 instance has a user account with commonly used username and allows code execution on the underlying compute. <br/> Prerequisite: [Enable Microsoft Defender for SQL servers on machines](defender-for-sql-usage.md). |
Prerequisite: [Enable agentless scanning](enable-vulnerability-assessment-agentl
Prerequisite: [Enable Defender for Containers](defender-for-containers-enable.md), and install the relevant agents in order to view attack paths that are related to containers. This will also give you the ability to [query](how-to-manage-cloud-security-explorer.md#build-a-query-with-the-cloud-security-explorer) containers data plane workloads in security explorer.
-| Attack Path Display Name | Attack Path Description |
+| Attack path display name | Attack path description |
|--|--| | Internet exposed Kubernetes pod is running a container with RCE vulnerabilities | An internet exposed Kubernetes pod in a namespace is running a container using an image that has vulnerabilities allowing remote code execution. | | Kubernetes pod running on an internet exposed node uses host network is running a container with RCE vulnerabilities | A Kubernetes pod in a namespace with host network access enabled is exposed to the internet via the host network. The pod is running a container using an image that has vulnerabilities allowing remote code execution. |
Prerequisite: [Enable Defender for Containers](defender-for-containers-enable.md
Prerequisite: [Enable Defender for DevOps](defender-for-devops-introduction.md).
-| Attack Path Display Name | Attack Path Description |
+| Attack path display name | Attack path description |
|--|--| | Internet exposed GitHub repository with plaintext secret is publicly accessible (Preview) | A GitHub repository is reachable from the internet, allows public read access without authorization required, and holds plaintext secrets. | ## Cloud security graph components list
-This section lists all of the cloud security graph components (connections and insights) that can be used in queries with the [cloud security explorer](concept-attack-path.md).
+This section lists all of the cloud security graph components (connections and insights) that can be used in queries with the [cloud security explorer](concept-attack-path.md).
### Insights
defender-for-cloud Defender For Storage Data Sensitivity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-storage-data-sensitivity.md
This is a configurable feature in the new Defender for Storage plan. You can cho
Learn more about [scope and limitations of sensitive data scanning](concept-data-security-posture-prepare.md).
-## How does the Sensitive Data Discovery work?
+## How does sensitive data discovery work?
-Sensitive Data Threat Detection is powered by the Sensitive Data Discovery engine, an agentless engine that uses a smart sampling method to find resources with sensitive data.
+Sensitive data threat detection is powered by the sensitive data discovery engine, an agentless engine that uses a smart sampling method to find resources with sensitive data.
The service is integrated with Microsoft Purview's sensitive information types (SITs) and classification labels, allowing seamless inheritance of your organization's sensitivity settings. This ensures that the detection and protection of sensitive data aligns with your established policies and procedures. :::image type="content" source="media/defender-for-storage-data-sensitivity/data-sensitivity-cspm-storage.png" alt-text="Diagram showing how Defender CSPM and Defender for Storage combine to provide data-aware security.":::
-Upon enablement, the Sensitive Data Discovery engine initiates an automatic scanning process across all supported storage accounts. Results are typically generated within 24 hours. Additionally, newly created storage accounts under protected subscriptions will be scanned within six hours of their creation. Recurring scans are scheduled to occur weekly after the enablement date. This is the same Sensitive Data Discovery engine used for sensitive data discovery in Defender CSPM.
+Upon enablement, the engine initiates an automatic scanning process across all supported storage accounts. Results are typically generated within 24 hours. Additionally, newly created storage accounts under protected subscriptions are scanned within six hours of their creation. Recurring scans are scheduled to occur weekly after the enablement date. This is the same engine that Defender CSPM uses to discover sensitive data.
## Prerequisites
-Sensitive data threat detection is available for Blob storage accounts, including: Standard general-purpose V1, Standard general-purpose V2, Azure Data Lake Storage Gen2 and Premium block blobs. Learn more about the [availability of Defender for Storage features](defender-for-storage-introduction.md#availability).
+Sensitive data threat detection is available for Blob storage accounts, including: Standard general-purpose V1, Standard general-purpose V2, Azure Data Lake Storage Gen2, and Premium block blobs. Learn more about the [availability of Defender for Storage features](defender-for-storage-introduction.md#availability).
-To enable sensitive data threat detection at subscription and storage account levels, you need Owner roles (subscription owner/storage account owner) or specific roles with corresponding data actions.
-Learn more about the [roles and permissions](support-matrix-defender-for-storage.md) required for sensitive data threat detection.
+To enable sensitive data threat detection at subscription and storage account levels, you need to have the relevant data-related permissions from the **Subscription owner** or **Storage account owner** roles. Learn more about the [roles and permissions required for sensitive data threat detection](support-matrix-defender-for-storage.md).
## Enabling sensitive data threat detection
-Sensitive data threat detection is enabled by default when you enable Defender for Storage. You can [enable it or disable it](../storage/common/azure-defender-storage-configure.md) in the Azure portal or with other at-scale methods at no additional cost.
+Sensitive data threat detection is enabled by default when you enable Defender for Storage. You can [enable it or disable it](../storage/common/azure-defender-storage-configure.md) in the Azure portal or with other at-scale methods. This feature is included in the price of Defender for Storage.
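One at-scale option is the Azure CLI; the snippet below is only a sketch and assumes the `az security pricing` commands are available in your CLI version. It enables the plan at the subscription level, with sensitive data threat detection included by default. Newer CLI versions may also expose a sub-plan argument for selecting the new Defender for Storage plan; consult the linked configuration guidance for the authoritative at-scale steps.

```azurecli
# Illustrative sketch: enable the Defender for Storage plan for the current
# subscription; sensitive data threat detection ships enabled with the plan.
az security pricing create --name StorageAccounts --tier Standard
```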
## Using the sensitivity context in the security alerts
-Sensitive Data Threat Detection capability will help you to prioritize security incidents, allowing security teams to prioritize these incidents and respond on time. Defender for Storage alerts will include findings of sensitivity scanning and indications of operations that have been performed on resources containing sensitive data.
+The sensitive data threat detection capability helps security teams identify and prioritize data security incidents for faster response times. Defender for Storage alerts include findings of sensitivity scanning and indications of operations that have been performed on resources containing sensitive data.
-In the alert's Extended Properties, you can find sensitivity scanning findings for a **blob container**:
+In the alert's extended properties, you can find sensitivity scanning findings for a **blob container**:
- Sensitivity scanning time UTC - when the last scan was performed - Top sensitivity label - the most sensitive label found in the blob container
In the alert's Extended Properties, you can find sensitivity scanning findings
## Integrate with the organizational sensitivity settings in Microsoft Purview (optional)
-When you enable sensitive data threat detection, the sensitive data categories include built-in sensitive information types (SITs) default list of Microsoft Purview. This will affect the alerts you receive from Defender for Storage and storage or containers that are found to contain these SITs are marked as containing sensitive data.
+When you enable sensitive data threat detection, the sensitive data categories include built-in sensitive information types (SITs) in the default list of Microsoft Purview. This will affect the alerts you receive from Defender for Storage: storage or containers that are found with these SITs are marked as containing sensitive data.
To customize the Data Sensitivity Discovery for your organization, you can [create custom sensitive information types (SITs)](/microsoft-365/compliance/create-a-custom-sensitive-information-type) and connect to your organizational settings with a single step integration. Learn more [here](episode-two.md).
You also can create and publish sensitivity labels for your tenant in Microsoft
## Next steps
-In this article, you learned about Microsoft Defender for Storage.
+In this article, you learned about Microsoft Defender for Storage's sensitive data scanning.
> [!div class="nextstepaction"] > [Enable Defender for Storage](enable-enhanced-security.md)
defender-for-cloud Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/release-notes.md
Title: Release notes for Microsoft Defender for Cloud description: A description of what's new and changed in Microsoft Defender for Cloud Previously updated : 04/18/2023 Last updated : 04/20/2023 # What's new in Microsoft Defender for Cloud?
Updates in April include:
- [Three alerts in the Defender for Resource Manager plan have been deprecated](#three-alerts-in-the-defender-for-resource-manager-plan-have-been-deprecated) - [Alerts automatic export to Log Analytics workspace have been deprecated](#alerts-automatic-export-to-log-analytics-workspace-have-been-deprecated) - [Deprecation and improvement of selected alerts for Windows and Linux Servers](#deprecation-and-improvement-of-selected-alerts-for-windows-and-linux-servers)
+- [New Azure Active Directory authentication-related recommendations for Azure Data Services](#new-azure-active-directory-authentication-related-recommendations-for-azure-data-services)
### Agentless Container Posture in Defender CSPM (Preview)
You can also view the [full list of alerts](alerts-reference.md#defender-for-ser
Read the [Microsoft Defender for Cloud blog](https://techcommunity.microsoft.com/t5/microsoft-defender-for-cloud/defender-for-servers-security-alerts-improvements/ba-p/3714175).
+### New Azure Active Directory authentication-related recommendations for Azure Data Services
+
+We have added four new Azure Active Directory authentication-related recommendations for Azure Data Services.
+
+| Recommendation Name | Recommendation Description | Policy |
+|--|--|--|
+| Azure SQL Managed Instance authentication mode should be Azure Active Directory Only | Disabling local authentication methods and allowing only Azure Active Directory Authentication improves security by ensuring that Azure SQL Managed Instances can exclusively be accessed by Azure Active Directory identities. | [Azure SQL Managed Instance should have Azure Active Directory Only Authentication enabled](https://portal.azure.com/#view/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f78215662-041e-49ed-a9dd-5385911b3a1f) |
+| Azure Synapse Workspace authentication mode should be Azure Active Directory Only | Azure Active Directory-only authentication improves security by ensuring that Synapse Workspaces exclusively require Azure AD identities for authentication. [Learn more](https://aka.ms/Synapse). | [Synapse Workspaces should use only Azure Active Directory identities for authentication](https://portal.azure.com/#view/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f2158ddbe-fefa-408e-b43f-d4faef8ff3b8) |
+| Azure Database for MySQL should have an Azure Active Directory administrator provisioned | Provision an Azure AD administrator for your Azure Database for MySQL to enable Azure AD authentication. Azure AD authentication enables simplified permission management and centralized identity management of database users and other Microsoft services | [An Azure Active Directory administrator should be provisioned for MySQL servers](https://portal.azure.com/#view/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f146412e9-005c-472b-9e48-c87b72ac229e) |
+| Azure Database for PostgreSQL should have an Azure Active Directory administrator provisioned | Provision an Azure AD administrator for your Azure Database for PostgreSQL to enable Azure AD authentication. Azure AD authentication enables simplified permission management and centralized identity management of database users and other Microsoft services | [An Azure Active Directory administrator should be provisioned for PostgreSQL servers](https://portal.azure.com/#view/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2fb4dec045-250a-48c2-b5cc-e0c4eec8b5b4) |
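+As a quick illustration of the first recommendation, Azure AD-only authentication can be turned on for a SQL Managed Instance from the command line as well as from the portal. The snippet below is a sketch: it assumes the `az sql mi ad-only-auth` command group is available in your Azure CLI version, and the resource names are placeholders.
+
+```azurecli
+# Illustrative sketch: switch a SQL Managed Instance to Azure Active Directory-only
+# authentication (an Azure AD admin must already be provisioned on the instance).
+az sql mi ad-only-auth enable \
+  --resource-group "<resource-group>" \
+  --name "<managed-instance-name>"
+```
+
+The other three recommendations are remediated through the corresponding Azure AD admin or authentication settings on each service, as described by the policies linked in the table above.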
## March 2023 Updates in March include:
defender-for-cloud Secure Score Security Controls https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/secure-score-security-controls.md
Title: Secure score description: Description of Microsoft Defender for Cloud's secure score and its security controls Previously updated : 03/05/2023 Last updated : 04/20/2023 # Secure score
For more information, see [How your secure score is calculated](secure-score-sec
On the Security posture page, you're able to see the secure score for your entire subscription, and each environment in your subscription. By default all environments are shown. | Page section | Description | |--|--| | :::image type="content" source="media/secure-score-security-controls/select-environment.png" alt-text="Screenshot showing the different environment options."::: | Select your environment to see its secure score, and details. Multiple environments can be selected at once. The page will change based on your selection here.|
-| :::image type="content" source="media/secure-score-security-controls/environment.png" alt-text="Screenshot of the environment section of the security posture page."::: | Shows the total number of subscriptions, accounts and projects that affect your overall score. It also shows how many unhealthy resources and how many recommendations exist in your environments. |
+| :::image type="content" source="media/secure-score-security-controls/environment.png" alt-text="Screenshot of the environment section of the security posture page." lightbox="media/secure-score-security-controls/environment.png"::: | Shows the total number of subscriptions, accounts and projects that affect your overall score. It also shows how many unhealthy resources and how many recommendations exist in your environments. |
The bottom half of the page allows you to view and manage viewing the individual secure scores, number of unhealthy resources and even view the recommendations for all of your individual subscriptions, accounts, and projects.
defender-for-cloud Tutorial Security Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/tutorial-security-policy.md
To view your security policies in Defender for Cloud:
**Deny** prevents deployment of non-compliant resources based on recommendation logic.<br> **Disabled** prevents the recommendation from running.
- :::image type="content" source="./media/tutorial-security-policy/default-assignment-screen.png" alt-text="Screenshot showing the edit default assignment screen." lightbox="/media/tutorial-security-policy/default-assignment-screen.png":::
+ :::image type="content" source="./media/tutorial-security-policy/default-assignment-screen.png" alt-text="Screenshot showing the edit default assignment screen." lightbox="./media/tutorial-security-policy/default-assignment-screen.png":::
## Enable a security recommendation
This page explained security policies. For related information, see the followin
- [Learn how to set policies using PowerShell](../governance/policy/assign-policy-powershell.md) - [Learn how to edit a security policy in Azure Policy](../governance/policy/tutorials/create-and-manage.md) - [Learn how to set a policy across subscriptions or on Management groups using Azure Policy](../governance/policy/overview.md)-- [Learn how to enable Defender for Cloud on all subscriptions in a management group](onboard-management-group.md)
+- [Learn how to enable Defender for Cloud on all subscriptions in a management group](onboard-management-group.md)
defender-for-cloud Upcoming Changes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/upcoming-changes.md
Title: Important changes coming to Microsoft Defender for Cloud description: Upcoming changes to Microsoft Defender for Cloud that you might need to be aware of and for which you might need to plan Previously updated : 04/18/2023 Last updated : 04/20/2023 # Important upcoming changes to Microsoft Defender for Cloud
If you're looking for the latest release notes, you'll find them in the [What's
| Planned change | Estimated date for change | |--|--| | [Deprecation of legacy compliance standards across cloud environments](#deprecation-of-legacy-compliance-standards-across-cloud-environments) | April 2023 |
-| [New Azure Active Directory authentication-related recommendations for Azure Data Services](#new-azure-active-directory-authentication-related-recommendations-for-azure-data-services) | April 2023 |
| [Multiple changes to identity recommendations](#multiple-changes-to-identity-recommendations) | May 2023 | | [DevOps Resource Deduplication for Defender for DevOps](#devops-resource-deduplication-for-defender-for-devops) | June 2023 |
If you're looking for the latest release notes, you'll find them in the [What's
**Estimated date for change: April 2023**
-We're announcing the full deprecation of support of [`PCI DSS`](/azure/compliance/offerings/offering-pci-dss) standard/initiative in Azure China 21Vianet.
+We're announcing the full deprecation of support of [PCI DSS](/azure/compliance/offerings/offering-pci-dss) standard/initiative in Azure China 21Vianet.
Legacy PCI DSS v3.2.1 and legacy SOC TSP are set to be fully deprecated and replaced by [SOC 2 Type 2](/azure/compliance/offerings/offering-soc-2) initiative and [PCI DSS v4](/azure/compliance/offerings/offering-pci-dss) initiative. Learn how to [Customize the set of standards in your regulatory compliance dashboard](update-regulatory-compliance-packages.md).
We recommend updating custom scripts, workflows, and governance rules to corresp
We've improved the coverage of the V2 identity recommendations by scanning all Azure resources (rather than just subscriptions) which allows security administrators to view role assignments per account. These changes may result in changes to your Secure Score throughout the GA process.
-### Deprecation of legacy compliance standards across cloud environments
-
-**Estimated date for change: April 2023**
-
-We're announcing the full deprecation of support of [`PCI DSS`](/azure/compliance/offerings/offering-pci-dss) standard/initiative in Azure China 21Vianet.
-
-Legacy PCI DSS v3.2.1 and legacy SOC TSP are set to be fully deprecated and replaced by [SOC 2 Type 2](/azure/compliance/offerings/offering-soc-2) initiative and [`PCI DSS v4`](/azure/compliance/offerings/offering-pci-dss) initiative.
-Learn how to [Customize the set of standards in your regulatory compliance dashboard](update-regulatory-compliance-packages.md).
-
-### New Azure Active Directory authentication-related recommendations for Azure Data Services
-
-**Estimated date for change: April 2023**
-
-| Recommendation Name | Recommendation Description | Policy |
-|--|--|--|
-| Azure SQL Managed Instance authentication mode should be Azure Active Directory Only | Disabling local authentication methods and allowing only Azure Active Directory Authentication improves security by ensuring that Azure SQL Managed Instances can exclusively be accessed by Azure Active Directory identities. Learn more at: aka.ms/adonlycreate | [Azure SQL Managed Instance should have Azure Active Directory Only Authentication enabled](https://portal.azure.com/#view/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f78215662-041e-49ed-a9dd-5385911b3a1f) |
-| Azure Synapse Workspace authentication mode should be Azure Active Directory Only | Azure Active Directory only authentication methods improves security by ensuring that Synapse Workspaces exclusively require Azure AD identities for authentication. Learn more at: https://aka.ms/Synapse | [Synapse Workspaces should use only Azure Active Directory identities for authentication](https://portal.azure.com/#view/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f2158ddbe-fefa-408e-b43f-d4faef8ff3b8) |
-| Azure Database for MySQL should have an Azure Active Directory administrator provisioned | Provision an Azure AD administrator for your Azure Database for MySQL to enable Azure AD authentication. Azure AD authentication enables simplified permission management and centralized identity management of database users and other Microsoft services | Based on policy: [An Azure Active Directory administrator should be provisioned for MySQL servers](https://portal.azure.com/#view/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f146412e9-005c-472b-9e48-c87b72ac229e) |
-| Azure Database for PostgreSQL should have an Azure Active Directory administrator provisioned | Provision an Azure AD administrator for your Azure Database for PostgreSQL to enable Azure AD authentication. Azure AD authentication enables simplified permission management and centralized identity management of database users and other Microsoft services | Based on policy: [An Azure Active Directory administrator should be provisioned for PostgreSQL servers](https://portal.azure.com/#view/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2fb4dec045-250a-48c2-b5cc-e0c4eec8b5b4) |
- ### Multiple changes to identity recommendations **Estimated date for change: May 2023**
defender-for-iot How To Forward Alert Information To Partners https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-forward-alert-information-to-partners.md
The following sections describe the syslog output syntax for each format.
| Name | Description | |--|--|
-| Date and Time | Date and time that the syslog server machine received the information. |
| Priority | User.Alert |
+| Date and Time | Date and time that the syslog server machine received the information. |
| Hostname | Sensor IP | | Message | Sensor name: The name of the appliance. <br /> Alert time: The time that the alert was detected: Can vary from the time of the syslog server machine, and depends on the time-zone configuration of the forwarding rule. <br /> Alert Title:  The title of the alert. <br /> Alert message: The message of the alert. <br /> Alert severity: The severity of the alert: **Warning**, **Minor**, **Major**, or **Critical**. <br /> Alert type: **Protocol Violation**, **Policy Violation**, **Malware**, **Anomaly**, or **Operational**. <br /> Protocol: The protocol of the alert. <br /> **Source_MAC**: IP address, name, vendor, or OS of the source device. <br /> Destination_MAC: IP address, name, vendor, or OS of the destination. If data is missing, the value will be **N/A**. <br /> alert_group: The alert group associated with the alert. |
The following sections describe the syslog output syntax for each format.
| Name | Description | |--|--| | Priority | User.Alert |
-| Date and time | Date and time that sensor sent the information |
+| Date and time | Date and time that the sensor sent the information, in UTC format |
| Hostname | Sensor hostname | | Message | CEF:0 <br />Microsoft Defender for IoT/CyberX <br />Sensor name <br />Sensor version <br />Microsoft Defender for IoT Alert <br />Alert title <br />Integer indication of severity. 1=**Warning**, 4=**Minor**, 8=**Major**, or 10=**Critical**.<br />msg= The message of the alert. <br />protocol= The protocol of the alert. <br />severity= **Warning**, **Minor**, **Major**, or **Critical**. <br />type= **Protocol Violation**, **Policy Violation**, **Malware**, **Anomaly**, or **Operational**. <br />UUID= UUID of the alert (Optional) <br /> start= The time that the alert was detected. <br />Might vary from the time of the syslog server machine, and depends on the time-zone configuration of the forwarding rule. <br />src_ip= IP address of the source device. (Optional) <br />src_mac= MAC address of the source device. (Optional) <br />dst_ip= IP address of the destination device. (Optional)<br />dst_mac= MAC address of the destination device. (Optional)<br />cat= The alert group associated with the alert. |
The following sections describe the syslog output syntax for each format.
| Name | Description | |--|--|
-| Date and time | Date and time that the syslog server machine received the information. |
| Priority | User.Alert |
+| Date and time | Date and time that the sensor sent the information, in UTC format |
| Hostname | Sensor IP | | Message | Sensor name: The name of the Microsoft Defender for IoT appliance