Updates from: 04/22/2023 01:15:59
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory Use Scim To Provision Users And Groups https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/use-scim-to-provision-users-and-groups.md
Use the checklist to onboard your application quickly and customers have a smoot
> * Establish engineering and support contacts to guide customers post gallery onboarding (Required) > * 3 Non-expiring test credentials for your application (Required) > * Support the OAuth authorization code grant or a long lived token as described in the example (Required)
+> * OIDC apps must have at least 1 role (custom or default) defined
> * Establish an engineering and support point of contact to support customers post gallery onboarding (Required) > * [Support schema discovery (required)](https://tools.ietf.org/html/rfc7643#section-6) > * Support updating multiple group memberships with a single PATCH
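The last requirement above (updating multiple group memberships with a single PATCH) corresponds to one SCIM `PatchOp` request that carries several members in a single `add` operation. A minimal PowerShell sketch of such a call, where the SCIM base URL, bearer token, group ID, and member IDs are all placeholders:

```powershell
# Hypothetical SCIM base URL and bearer token; substitute your application's values.
$scimBase = "https://api.contoso.com/scim/v2"
$headers  = @{ Authorization = "Bearer <access-token>" }

# A single PATCH that adds two members to one group.
$body = @{
    schemas    = @("urn:ietf:params:scim:api:messages:2.0:PatchOp")
    Operations = @(
        @{
            op    = "add"
            path  = "members"
            value = @(
                @{ value = "<user-id-1>" },
                @{ value = "<user-id-2>" }
            )
        }
    )
} | ConvertTo-Json -Depth 6

Invoke-RestMethod -Method Patch -Uri "$scimBase/Groups/<group-id>" `
    -Headers $headers -ContentType "application/scim+json" -Body $body
```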
active-directory App Proxy Protect Ndes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/app-proxy-protect-ndes.md
Azure AD Application Proxy is built on Azure. It gives you a massive amount of n
1. Select **+Add** to save your application. 1. Test whether you can access your NDES server via the Azure AD Application proxy by pasting the link you copied in step 15 into a browser. You should see a default IIS welcome page.- 1. As a final test, add the *mscep.dll* path to the existing URL you pasted in the previous step:-
- `https://scep-test93635307549127448334.msappproxy.net/certsrv/mscep/mscep.dll`
-
+ `https://scep-test93635307549127448334.msappproxy.net/certsrv/mscep/mscep.dll`
1. You should see an **HTTP Error 403 - Forbidden** response (see the scripted check after these steps).- 1. Change the NDES URL provided (via Microsoft Intune) to devices. This change could either be in Microsoft Configuration Manager or the Microsoft Intune admin center.-
- * For Configuration Manager, go to the certificate registration point and adjust the URL. This URL is what devices call out to and present their challenge.
- * For Intune standalone, either edit or create a new SCEP policy and add the new URL.
+ - For Configuration Manager, go to the certificate registration point and adjust the URL. This URL is what devices call out to and present their challenge.
+ - For Intune standalone, either edit or create a new SCEP policy and add the new URL.
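Following on from the 403 check above, a small PowerShell sketch that requests the same example URL and reports the status code (the URL is the sample one from the steps; replace it with your own Application Proxy external URL):

```powershell
# Example external URL from the steps above; substitute your own -msappproxy.net URL.
$url = "https://scep-test93635307549127448334.msappproxy.net/certsrv/mscep/mscep.dll"

try {
    Invoke-WebRequest -Uri $url -UseBasicParsing | Out-Null
    Write-Host "Unexpected success; review the Application Proxy configuration."
} catch {
    # Invoke-WebRequest throws for non-success status codes; 403 is the expected outcome here.
    $status = $_.Exception.Response.StatusCode.value__
    Write-Host "Received HTTP $status (403 Forbidden is expected)."
}
```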
## Next steps
active-directory Concept Authentication Authenticator App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-authentication-authenticator-app.md
The Authenticator app can help prevent unauthorized access to accounts and stop
![Screenshot of example web browser prompt for Authenticator app notification to complete sign-in process.](media/tutorial-enable-azure-mfa/tutorial-enable-azure-mfa-browser-prompt.png)
+In some rare instances where the relevant Google or Apple service responsible for push notifications is down, users may not receive their push notifications. In these cases users should manually navigate to the Microsoft Authenticator app (or relevant companion app like Outlook), refresh by either pulling down or hitting the refresh button, and approve the request.
+ > [!NOTE] > If your organization has staff working in or traveling to China, the *Notification through mobile app* method on Android devices doesn't work in that country/region because Google Play services (including push notifications) are blocked there. However, iOS notifications do work. For Android devices, alternate authentication methods should be made available for those users.
active-directory How To Mfa Authenticator Lite https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/how-to-mfa-authenticator-lite.md
If enabled for Authenticator Lite, users are prompted to register their account
GET auditLogs/signIns ```
-If the sign-in was done by phone app notification, under **authenticationAppDeivceDetails** the **clientApp** field returns **microsoftAuthenticator** or **Outlook**.
+If the sign-in was done by phone app notification, under **authenticationAppDeviceDetails** the **clientApp** field returns **microsoftAuthenticator** or **Outlook**.
If a user has registered Authenticator Lite, the user's registered authentication methods include **Microsoft Authenticator (in Outlook)**.
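A hedged sketch of that sign-in log check with Microsoft Graph PowerShell, calling the sign-ins endpoint directly (this assumes `authenticationAppDeviceDetails` is exposed on the beta sign-in resource; adjust the endpoint if your tenant differs):

```powershell
Connect-MgGraph -Scopes "AuditLog.Read.All"

# Pull recent sign-ins and surface the phone app that approved each one.
$resp = Invoke-MgGraphRequest -Method GET `
    -Uri "https://graph.microsoft.com/beta/auditLogs/signIns?`$top=50"

foreach ($signIn in $resp.value) {
    $details = $signIn.authenticationAppDeviceDetails
    if ($details) {
        "{0} -> clientApp: {1}" -f $signIn.userPrincipalName, $details.clientApp
    }
}
```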
active-directory Apple Sso Plugin https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/apple-sso-plugin.md
Previously updated : 03/13/2023 Last updated : 04/18/2023
-# Microsoft Enterprise SSO plug-in for Apple devices (preview)
-
-> [!IMPORTANT]
-> This feature is in public preview. This preview is provided without a service-level agreement. For more information, see [Supplemental terms of use for Microsoft Azure public previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+# Microsoft Enterprise SSO plug-in for Apple devices
The *Microsoft Enterprise SSO plug-in for Apple devices* provides single sign-on (SSO) for Azure Active Directory (Azure AD) accounts on macOS, iOS, and iPadOS across all applications that support Apple's [enterprise single sign-on](https://developer.apple.com/documentation/authenticationservices) feature. The plug-in provides SSO for even old applications that your business might depend on but that don't yet support the latest identity libraries or protocols. Microsoft worked closely with Apple to develop this plug-in to increase your application's usability while providing the best protection available.
To use the Microsoft Enterprise SSO plug-in for Apple devices:
### iOS requirements - iOS 13.0 or higher must be installed on the device.-- A Microsoft application that provides the Microsoft Enterprise SSO plug-in for Apple devices must be installed on the device. For Public Preview, these applications are the [Microsoft Authenticator app](https://support.microsoft.com/account-billing/how-to-use-the-microsoft-authenticator-app-9783c865-0308-42fb-a519-8cf666fe0acc).
+- A Microsoft application that provides the Microsoft Enterprise SSO plug-in for Apple devices must be installed on the device. This app is the [Microsoft Authenticator app](https://support.microsoft.com/account-billing/how-to-use-the-microsoft-authenticator-app-9783c865-0308-42fb-a519-8cf666fe0acc).
### macOS requirements - macOS 10.15 or higher must be installed on the device. -- A Microsoft application that provides the Microsoft Enterprise SSO plug-in for Apple devices must be installed on the device. For Public Preview, these applications include the [Intune Company Portal app](/mem/intune/user-help/enroll-your-device-in-intune-macos-cp).
+- A Microsoft application that provides the Microsoft Enterprise SSO plug-in for Apple devices must be installed on the device. This app is the [Intune Company Portal app](/mem/intune/user-help/enroll-your-device-in-intune-macos-cp).
## Enable the SSO plug-in
active-directory Spa Quickstart Portal Angular Ciam https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/spa-quickstart-portal-angular-ciam.md
+
+ Title: "Quickstart: Add sign in to a Angular SPA"
+description: Learn how to run a sample Angular SPA to sign in users
+ Last updated: 05/05/2023
+# Portal quickstart for Angular SPA
+
+> In this quickstart, you download and run a code sample that demonstrates how an Angular single-page application (SPA) can sign in users with Azure Active Directory for customers.
+>
+> [!div renderon="portal" id="display-on-portal" class="sxs-lookup"]
+> 1. Make sure you've installed [Node.js](https://nodejs.org/en/download/).
+>
+> 1. Unzip the sample, `cd` into the folder that contains `package.json`, then run the following commands:
+> ```console
+> npm install && npm start
+> ```
+> 1. Open your browser, visit `http://localhost:4200`, select **Sign-in**, then follow the prompts.
+>
active-directory Spa Quickstart Portal React Ciam https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/spa-quickstart-portal-react-ciam.md
+
+ Title: "Quickstart: Add sign in to a React SPA"
+description: Learn how to run a sample React SPA to sign in users
+ Last updated: 05/05/2023
+# Portal quickstart for React SPA
+
+> In this quickstart, you download and run a code sample that demonstrates how a React single-page application (SPA) can sign in users with Azure Active Directory for customers.
+>
+> [!div renderon="portal" id="display-on-portal" class="sxs-lookup"]
+> 1. Make sure you've installed [Node.js](https://nodejs.org/en/download/).
+>
+> 1. Unzip the sample, `cd` into the folder that contains `package.json`, then run the following commands:
+> ```console
+> npm install && npm start
+> ```
+> 1. Open your browser, visit `http://localhost:3000`, select **Sign-in**, then follow the prompts.
+>
active-directory Spa Quickstart Portal Vanilla Js Ciam https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/spa-quickstart-portal-vanilla-js-ciam.md
+
+ Title: "Quickstart: Add sign in to a JavaScript SPA"
+description: Learn how to run a sample JavaScript SPA to sign in users
+ Last updated: 05/05/2023
+# Portal quickstart for JavaScript application
+
+> In this quickstart, you download and run a code sample that demonstrates how a JavaScript SPA can sign in users with Azure Active Directory for customers.
+>
+> [!div renderon="portal" id="display-on-portal" class="sxs-lookup"]
+> 1. Make sure you've installed [Node.js](https://nodejs.org/en/download/).
+>
+> 1. Unzip the sample, `cd` into the app root folder, then run the following commands:
+> ```console
+> cd App && npm install && npm start
+> ```
+> 1. Open your browser, visit `http://localhost:3000`, select **Sign-in**, then follow the prompts.
+>
active-directory Web App Quickstart Portal Dotnet Ciam https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/web-app-quickstart-portal-dotnet-ciam.md
+
+ Title: "Quickstart: Add sign in to ASP.NET web app"
+description: Learn how to run a sample ASP.NET web app to sign in users
+ Last updated: 05/05/2023
+# Portal quickstart for ASP.NET web app
+
+> In this quickstart, you download and run a code sample that demonstrates how an ASP.NET web app can sign in users with Azure Active Directory for customers.
+>
+> [!div renderon="portal" id="display-on-portal" class="sxs-lookup"]
+> 1. Make sure you've installed [.NET SDK v7](https://dotnet.microsoft.com/download/dotnet/7.0) or later.
+>
+> 1. Unzip the sample, `cd` into the app root folder, then run the following command:
+> ```console
+> dotnet run
+> ```
+> 1. Open your browser, visit `https://localhost:7274`, select **Sign-in**, then follow the prompts.
+>
active-directory Web App Quickstart Portal Node Js Ciam https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/web-app-quickstart-portal-node-js-ciam.md
Title: "Quickstart: Add sign in to a React SPA"
-description: Learn how to run a sample React SPA to sign in users
+ Title: "Quickstart: Add sign in to a Node.js/Express web app"
+description: Learn how to run a sample Node.js/Express web app to sign in users
Previously updated : 04/12/2023 Last updated : 05/05/2023
-# Portal quickstart for React SPA
+# Portal quickstart for Node.js/Express web app
-> In this quickstart, you download and run a code sample that demonstrates how a React single-page application (SPA) can sign in users with Azure AD CIAM.
+> In this quickstart, you download and run a code sample that demonstrates how a Node.js/Express web app can sign in users with Azure Active Directory for customers.
> > [!div renderon="portal" id="display-on-portal" class="sxs-lookup"] > 1. Make sure you've installed [Node.js](https://nodejs.org/en/download/).
Last updated 04/12/2023
> ```console > npm install && npm start > ```
-> 1. Open your browser, visit `http://locahost:3000`, select **Sign-in** link, then follow the prompts.
+> 1. Open your browser, visit `http://localhost:3000`, select **Sign-in**, then follow the prompts.
>
active-directory Concept Primary Refresh Token https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/devices/concept-primary-refresh-token.md
A PRT is invalidated in the following scenarios:
* **Invalid user**: If a user is deleted or disabled in Azure AD, their PRT is invalidated and can't be used to obtain tokens for applications. If a deleted or disabled user already signed in to a device before, cached sign-in would log them in, until CloudAP is aware of their invalid state. Once CloudAP determines that the user is invalid, it blocks subsequent logons. An invalid user is automatically blocked from signing in to new devices that don't have their credentials cached. * **Invalid device**: If a device is deleted or disabled in Azure AD, the PRT obtained on that device is invalidated and can't be used to obtain tokens for other applications. If a user is already signed in to an invalid device, they can continue to do so. But all tokens on the device are invalidated and the user doesn't have SSO to any resources from that device.
-* **Password change**: After a user changes their password, the PRT obtained with the previous password is invalidated by Azure AD. Password change results in the user getting a new PRT. This invalidation can happen in two different ways:
+* **Password change**: If a user obtained the PRT with their password, the PRT is invalidated by Azure AD when the user changes their password. Password change results in the user getting a new PRT. This invalidation can happen in two different ways:
* If the user signs in to Windows with their new password, CloudAP discards the old PRT and requests Azure AD to issue a new PRT with their new password. If the user doesn't have an internet connection, the new password can't be validated, and Windows may require the user to enter their old password. * If a user has logged in with their old password or changed their password after signing into Windows, the old PRT is used for any WAM-based token requests. In this scenario, the user is prompted to reauthenticate during the WAM token request and a new PRT is issued. * **TPM issues**: Sometimes, a device's TPM can falter or fail, leading to inaccessibility of keys secured by the TPM. In this case, the device is incapable of getting a PRT or requesting tokens using an existing PRT as it can't prove possession of the cryptographic keys. As a result, any existing PRT is invalidated by Azure AD. When Windows 10 detects a failure, it initiates a recovery flow to re-register the device with new cryptographic keys. With hybrid Azure AD join, just like the initial registration, the recovery happens silently without user input. For Azure AD joined or Azure AD registered devices, the recovery needs to be performed by a user who has administrator privileges on the device. In this scenario, the recovery flow is initiated by a Windows prompt that guides the user to successfully recover the device.
active-directory Howto Manage Local Admin Passwords https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/devices/howto-manage-local-admin-passwords.md
+
+ Title: Use Windows Local Administrator Password Solution (LAPS) with Azure AD (preview)
+description: Manage your device's local administrator password with Azure AD LAPS.
+ Last updated: 04/21/2023
+# Windows Local Administrator Password Solution in Azure AD (preview)
+
+> [!IMPORTANT]
+> Azure AD support for Windows Local Administrator Password Solution is currently in preview.
+> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+
+Every Windows device comes with a built-in local administrator account that you must secure and protect to mitigate any Pass-the-Hash (PtH) and lateral traversal attacks. Many customers have been using our standalone, on-premises [Local Administrator Password Solution (LAPS)](https://www.microsoft.com/download/details.aspx?id=46899) product for local administrator password management of their domain joined Windows machines. With Azure AD support for Windows LAPS, we're providing a consistent experience for both Azure AD joined and hybrid Azure AD joined devices.
+
+Azure AD support for LAPS includes the following capabilities:
+
+- **Enabling Windows LAPS with Azure AD** - Enable a tenant-wide policy and a client-side policy to back up the local administrator password to Azure AD.
+- **Local administrator password management** - Configure client-side policies to set account name, password age, length, complexity, manual password reset and so on.
+- **Recovering local administrator password** - Use API/Portal experiences for local administrator password recovery.
+- **Enumerating all Windows LAPS enabled devices** - Use API/Portal experiences to enumerate all Windows devices in Azure AD enabled with Windows LAPS.
+- **Authorization of local administrator password recovery** - Use role-based access control (RBAC) policies with custom roles and administrative units.
+- **Auditing local administrator password update and recovery** - Use audit logs API/Portal experiences to monitor password update and recovery events.
+- **Conditional Access policies for local administrator password recovery** - Configure Conditional Access policies on directory roles that have the authorization of password recovery.
+
+> [!NOTE]
+> Windows LAPS with Azure AD is not supported for Windows devices that are [Azure AD registered](concept-azure-ad-register.md).
+
+Local Administrator Password Solution isn't supported on non-Windows platforms.
+
+To learn about Windows LAPS in more detail, start with the following articles in the Windows documentation:
+
+- [What is Windows LAPS?](/windows-server/identity/laps/laps-scenarios-azure-active-directory) - Introduction to Windows LAPS and the Windows LAPS documentation set.
+- [Windows LAPS CSP](/windows/client-management/mdm/laps-csp) - View the full details for LAPS settings and options. Intune policy for LAPS uses these settings to configure the LAPS CSP on devices.
+- [Microsoft Intune support for Windows LAPS](/mem/intune/protect/windows-laps-overview)
+- [Windows LAPS architecture](/windows-server/identity/laps/laps-concepts#windows-laps-architecture)
+
+## Requirements
+
+### Supported Azure regions and Windows distributions
+
+This feature is now available in the following Azure clouds:
+
+- Azure Global
+- Azure Government
+- Azure China 21Vianet
+
+### Operating system updates
+
+This feature is now available on the following Windows OS platforms with the specified update or later installed:
+
+- [Windows 11 22H2 - April 11 2023 Update](https://support.microsoft.com/help/5025239)
+- [Windows 11 21H2 - April 11 2023 Update](https://support.microsoft.com/help/5025224)
+- [Windows 10 20H2, 21H2 and 22H2 - April 11 2023 Update](https://support.microsoft.com/help/5025221)
+- [Windows Server 2022 - April 11 2023 Update](https://support.microsoft.com/help/5025230)
+- [Windows Server 2019 - April 11 2023 Update](https://support.microsoft.com/help/5025229)
+
+### Join types
+
+LAPS is supported on Azure AD joined or hybrid Azure AD joined devices only. Azure AD registered devices aren't supported.
+
+### License requirements
+
+LAPS is available to all customers with Azure AD Free or higher licenses. Other related features like administrative units, custom roles, Conditional Access, and Intune have other licensing requirements.
+
+### Required roles or permission
+
+Other than the built-in Azure AD roles of Cloud Device Administrator, Intune Administrator, and Global Administrator that are granted *device.LocalCredentials.Read.All*, you can use [Azure AD custom roles](/azure/active-directory/roles/custom-create) or administrative units to authorize local administrator password recovery. For example,
+
+- Custom roles must be assigned the *microsoft.directory/deviceLocalCredentials/password/read* permission to authorize local administrator password recovery. During the preview, you must create a custom role and grant permissions using the [Microsoft Graph API](/azure/active-directory/roles/custom-create#create-a-role-with-the-microsoft-graph-api) or [PowerShell](/azure/active-directory/roles/custom-create#create-a-role-using-powershell). Once you have created the custom role, you can assign it to users. A PowerShell sketch follows this list.
+
+- You can also create an Azure AD [administrative unit](/azure/active-directory/roles/administrative-units), add devices, and assign the Cloud Device Administrator role scoped to the administrative unit to authorize local administrator password recovery.
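For the custom-role option, a minimal Microsoft Graph PowerShell sketch that creates a role definition carrying the permission named above (the display name and description are illustrative, and `RoleManagement.ReadWrite.Directory` consent is assumed):

```powershell
Connect-MgGraph -Scopes "RoleManagement.ReadWrite.Directory"

# Custom role that can recover device local administrator passwords (name is illustrative).
$roleDefinition = @{
    displayName     = "LAPS Password Recovery"
    description     = "Can recover device local administrator passwords"
    isEnabled       = $true
    rolePermissions = @(
        @{ allowedResourceActions = @("microsoft.directory/deviceLocalCredentials/password/read") }
    )
}

New-MgRoleManagementDirectoryRoleDefinition -BodyParameter $roleDefinition
```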
+
+## Enabling Windows LAPS with Azure AD
+
+To enable Windows LAPS with Azure AD, you must take actions in Azure AD and the devices you wish to manage. We recommend organizations [manage Windows LAPS using Microsoft Intune](/mem/intune/protect/windows-laps-policy). However, if your devices are Azure AD joined but you're not using Microsoft Intune or Microsoft Intune isn't supported (like for Windows Server 2019/2022), you can still deploy Windows LAPS for Azure AD manually. For more information, see the article [Configure Windows LAPS policy settings](/windows-server/identity/laps/laps-management-policy-settings).
+
+1. Sign in to the **Azure portal** as a [Cloud Device Administrator](../roles/permissions-reference.md#cloud-device-administrator).
+1. Browse to **Azure Active Directory** > **Devices** > **Device settings**
+1. Select **Yes** for the Enable Local Administrator Password Solution (LAPS) setting and select **Save**. You may also use the Microsoft Graph API [Update deviceRegistrationPolicy](/graph/api/deviceregistrationpolicy-update?view=graph-rest-beta&preserve-view=true).
+1. Configure a client-side policy and set the **BackUpDirectory** to be Azure AD.
+
+ - If you're using Microsoft Intune to manage client side policies, see [Manage Windows LAPS using Microsoft Intune](/mem/intune/protect/windows-laps-policy)
+ - If you're using Group Policy Objects (GPO) to manage client side policies, see [Windows LAPS Group Policy](/windows-server/identity/laps/laps-management-policy-settings#windows-laps-group-policy)
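For a quick test on a single machine without Intune or GPO, the same client-side settings can be written to the documented Windows LAPS policy registry root. This is a rough sketch only, and the value names and numbers below are assumptions drawn from the LAPS policy documentation; production devices should receive these settings through Intune or GPO:

```powershell
# Windows LAPS policy root used by GPO/manual configuration.
$lapsPolicy = "HKLM:\Software\Microsoft\Policies\LAPS"
New-Item -Path $lapsPolicy -Force | Out-Null

# BackupDirectory: 0 = disabled, 1 = back up to Azure AD, 2 = back up to Windows Server AD.
Set-ItemProperty -Path $lapsPolicy -Name "BackupDirectory" -Value 1 -Type DWord

# Optional: rotate the managed password every 30 days.
Set-ItemProperty -Path $lapsPolicy -Name "PasswordAgeDays" -Value 30 -Type DWord
```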
+
+## Recovering local administrator password
+
+To view the local administrator password for a Windows device joined to Azure AD, you must be granted the *deviceLocalCredentials.Read.All* permission, and you must be assigned one of the following roles:
+
+- [Cloud Device Administrator](../roles/permissions-reference.md#cloud-device-administrator)
+- [Intune Service Administrator](../roles/permissions-reference.md#intune-administrator)
+- [Global Administrator](../roles/permissions-reference.md#global-administrator)
+
+You can also use the Microsoft Graph API [Get deviceLocalCredentialInfo](/graph/api/devicelocalcredentialinfo-get?view=graph-rest-beta&preserve-view=true) to recover the local administrator password. If you use the Microsoft Graph API, the password returned is a Base64-encoded value that you need to decode before using it.
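A hedged sketch of the Graph call plus the Base64 decode. The URI shape, `$select` parameter, and `passwordBase64` property name are assumptions based on the beta API, the device ID is a placeholder, and an existing `Connect-MgGraph` session with the permission described above is assumed:

```powershell
# Assumes Connect-MgGraph has already been run with the permission described above.
$deviceId = "<device-id>"
$uri = "https://graph.microsoft.com/beta/directory/deviceLocalCredentials/$($deviceId)?`$select=credentials"
$info = Invoke-MgGraphRequest -Method GET -Uri $uri

# The returned password is Base64 encoded; decode it before use.
$encoded  = $info.credentials[0].passwordBase64
$password = [System.Text.Encoding]::UTF8.GetString([Convert]::FromBase64String($encoded))
```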
+
+## List all Windows LAPS enabled devices
+
+To list all Windows LAPS enabled devices in Azure AD, you can browse to **Azure Active Directory** > **Devices** > **Local administrator password recovery (Preview)** or use the Microsoft Graph API.
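A corresponding sketch for the API route, again assuming the beta collection endpoint and the property names shown here:

```powershell
# Lists LAPS metadata (no passwords) for devices that have backed up a credential to Azure AD.
$resp = Invoke-MgGraphRequest -Method GET `
    -Uri "https://graph.microsoft.com/beta/directory/deviceLocalCredentials"

$resp.value | ForEach-Object {
    "{0}  last backup: {1}" -f $_.deviceName, $_.lastBackupDateTime
}
```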
+
+## Auditing local administrator password update and recovery
+
+To view audit events, you can browse to **Azure Active Directory** > **Devices** > **Audit logs**, then use the **Activity** filter and search for **Update device local administrator password** or **Recover device local administrator password** to view the audit events.
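The same events can be pulled from the audit logs API. A sketch using the Microsoft Graph PowerShell reports cmdlet, assuming `AuditLog.Read.All` consent and that the activity display names match the portal strings above:

```powershell
# Returns audit events for LAPS password recovery.
Get-MgAuditLogDirectoryAudit -Filter "activityDisplayName eq 'Recover device local administrator password'" |
    Select-Object ActivityDateTime, ActivityDisplayName,
                  @{ n = 'Actor'; e = { $_.InitiatedBy.User.UserPrincipalName } }
```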
+
+## Conditional Access policies for local administrator password recovery
+
+Conditional Access policies can be scoped to the built-in roles like Cloud Device Administrator, Intune Administrator, and Global Administrator to protect access to recover local administrator passwords. You can find an example of a policy that requires multifactor authentication in the article, [Common Conditional Access policy: Require MFA for administrators](../conditional-access/howto-conditional-access-policy-admin-mfa.md).
+
+> [!NOTE]
+> Other role types, including administrative unit-scoped roles and custom roles, aren't supported.
+
+## Frequently asked questions
+
+### Is Windows LAPS with Azure AD management configuration supported using Group Policy Objects (GPO)?
+
+Yes, for [hybrid Azure AD joined](concept-azure-ad-join-hybrid.md) devices only. See [Windows LAPS Group Policy](/windows-server/identity/laps/laps-management-policy-settings#windows-laps-group-policy).
+
+### Is Windows LAPS with Azure AD management configuration supported using MDM?
+
+Yes, for [Azure AD join](concept-azure-ad-join.md)/[hybrid Azure AD join](concept-azure-ad-join-hybrid.md) ([co-managed](/mem/configmgr/comanage/overview)) devices. Customers can use [Microsoft Intune](/mem/intune/protect/windows-laps-overview) or any other third party MDM of their choice.
+
+### What happens when a device is deleted in Azure AD?
+
+When a device is deleted in Azure AD, the LAPS credential that was tied to that device is lost and the password that is stored in Azure AD is lost. Unless you have a custom workflow to retrieve LAPS passwords and store them externally, there's no method in Azure AD to recover the LAPS managed password for a deleted device.
+
+### What roles are needed to recover LAPS passwords?
+
+The following built-in Azure AD roles have permission to recover LAPS passwords: Global Administrator, Cloud Device Administrator, and Intune Administrator.
+
+### What roles are needed to read LAPS metadata?
+
+The following built-in roles are supported to view metadata about LAPS including the device name, last password rotation, and next password rotation: Global Administrator, Cloud Device Administrator, Intune Administrator, Helpdesk Administrator, Security Reader, Security Administrator, and Global Reader.
+
+### Are custom roles supported?
+
+Yes. If you have Azure AD Premium, you can create a custom role with the following RBAC permissions:
+
+- To read LAPS metadata: *microsoft.directory/deviceLocalCredentials/standard/read*
+- To read LAPS passwords: *microsoft.directory/deviceLocalCredentials/password/read*
+
+### What happens when the local administrator account specified by policy is changed?
+
+Because Windows LAPS can only manage one local admin account on a device at a time, the original account is no longer managed by LAPS policy. If the policy has the device back up that account, the new account is backed up, and details about the previous account are no longer available from within the Intune admin center or from the directory specified to store the account information.
+
+## Next steps
+
+- [Choosing a device identity](overview.md#modern-device-scenario)
+- [Microsoft Intune support for Windows LAPS](/mem/intune/protect/windows-laps-overview)
+- [Create policy for LAPS](/mem/intune/protect/windows-laps-policy)
+- [View reports for LAPS](/mem/intune/protect/windows-laps-reports)
+- [Account protection policy for endpoint security in Intune](/mem/intune/protect/endpoint-security-account-protection-policy)
active-directory Licensing Service Plan Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/licensing-service-plan-reference.md
Previously updated : 04/17/2023 Last updated : 04/20/2023
When managing licenses in [the Azure portal](https://portal.azure.com/#blade/Mic
- **Service plans included (friendly names)**: A list of service plans (friendly names) in the product that correspond to the string ID and GUID >[!NOTE]
->This information last updated on April 17th, 2023.<br/>You can also download a CSV version of this table [here](https://download.microsoft.com/download/e/3/e/e3e9faf2-f28b-490a-9ada-c6089a1fc5b0/Product%20names%20and%20service%20plan%20identifiers%20for%20licensing.csv).
+>This information last updated on April 20th, 2023.<br/>You can also download a CSV version of this table [here](https://download.microsoft.com/download/e/3/e/e3e9faf2-f28b-490a-9ada-c6089a1fc5b0/Product%20names%20and%20service%20plan%20identifiers%20for%20licensing.csv).
><br/> | Product name | String ID | GUID | Service plans included | Service plans included (friendly names) |
When managing licenses in [the Azure portal](https://portal.azure.com/#blade/Mic
| Microsoft Cloud App Security | ADALLOM_STANDALONE | df845ce7-05f9-4894-b5f2-11bbfbcfd2b6 | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>ADALLOM_S_STANDALONE (2e2ddb96-6af9-4b1d-a3f0-d6ecfd22edb2) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Microsoft Cloud App Security (2e2ddb96-6af9-4b1d-a3f0-d6ecfd22edb2) | | Microsoft Defender for Endpoint | WIN_DEF_ATP | 111046dd-295b-4d6d-9724-d52ac90bd1f2 | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>WINDEFATP (871d91ec-ec1a-452b-a83f-bd76c7d770ef) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>MICROSOFT DEFENDER FOR ENDPOINT (871d91ec-ec1a-452b-a83f-bd76c7d770ef) | | Microsoft Defender for Endpoint P1 | DEFENDER_ENDPOINT_P1 | 16a55f2f-ff35-4cd5-9146-fb784e3761a5 | Intune_Defender (1689aade-3d6a-4bfc-b017-46d2672df5ad)<br/>MDE_LITE (292cc034-7b7c-4950-aaf5-943befd3f1d4) | MDE_SecurityManagement (1689aade-3d6a-4bfc-b017-46d2672df5ad)<br/>Microsoft Defender for Endpoint Plan 1 (292cc034-7b7c-4950-aaf5-943befd3f1d4) |
+| Microsoft Defender for Endpoint P1 for EDU | DEFENDER_ENDPOINT_P1_EDU | bba890d4-7881-4584-8102-0c3fdfb739a7 | MDE_LITE (292cc034-7b7c-4950-aaf5-943befd3f1d4) | Microsoft Defender for Endpoint Plan 1 (292cc034-7b7c-4950-aaf5-943befd3f1d4) |
| Microsoft Defender for Endpoint P2_XPLAT | MDATP_XPLAT | b126b073-72db-4a9d-87a4-b17afe41d4ab | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Intune_Defender (1689aade-3d6a-4bfc-b017-46d2672df5ad)<br/>WINDEFATP (871d91ec-ec1a-452b-a83f-bd76c7d770ef) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>MDE_SecurityManagement (1689aade-3d6a-4bfc-b017-46d2672df5ad)<br/>Microsoft Defender for Endpoint (871d91ec-ec1a-452b-a83f-bd76c7d770ef) | | Microsoft Defender for Endpoint Server | MDATP_Server | 509e8ab6-0274-4cda-bcbd-bd164fd562c4 | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>WINDEFATP (871d91ec-ec1a-452b-a83f-bd76c7d770ef) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Microsoft Defender for Endpoint (871d91ec-ec1a-452b-a83f-bd76c7d770ef) | | Microsoft Defender for Office 365 (Plan 1) Faculty | ATP_ENTERPRISE_FACULTY | 26ad4b5c-b686-462e-84b9-d7c22b46837f | ATP_ENTERPRISE (f20fedf3-f3c3-43c3-8267-2bfdd51c0939) | Microsoft Defender for Office 365 (Plan 1) (f20fedf3-f3c3-43c3-8267-2bfdd51c0939) |
When managing licenses in [the Azure portal](https://portal.azure.com/#blade/Mic
| Microsoft Stream | STREAM | 1f2f344a-700d-42c9-9427-5cea1d5d7ba6 | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>MICROSOFTSTREAM (acffdce6-c30f-4dc2-81c0-372e33c515ec) | EXCHANGE FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>MICROSOFT STREAM (acffdce6-c30f-4dc2-81c0-372e33c515ec) | | Microsoft Stream Plan 2 | STREAM_P2 | ec156933-b85b-4c50-84ec-c9e5603709ef | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>STREAM_P2 (d3a458d0-f10d-48c2-9e44-86f3f684029e) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Microsoft Stream Plan 2 (d3a458d0-f10d-48c2-9e44-86f3f684029e) | | Microsoft Stream Storage Add-On (500 GB) | STREAM_STORAGE | 9bd7c846-9556-4453-a542-191d527209e8 | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>STREAM_STORAGE (83bced11-77ce-4071-95bd-240133796768) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Microsoft Stream Storage Add-On (83bced11-77ce-4071-95bd-240133796768) |
-| Microsoft Teams Audio Conferencing select dial-out | Microsoft_Teams_Audio_Conferencing_select_dial_out | 1c27243e-fb4d-42b1-ae8c-fe25c9616588 | MCOMEETBASIC (9974d6cf-cd24-4ba2-921c-e2aa687da846) | Microsoft Teams Audio Conferencing with dial-out to select geographies (9974d6cf-cd24-4ba2-921c-e2aa687da846) |
+| Microsoft Teams Audio Conferencing with dial-out to USA/CAN | Microsoft_Teams_Audio_Conferencing_select_dial_out | 1c27243e-fb4d-42b1-ae8c-fe25c9616588 | MCOMEETBASIC (9974d6cf-cd24-4ba2-921c-e2aa687da846) | Microsoft Teams Audio Conferencing with dial-out to select geographies (9974d6cf-cd24-4ba2-921c-e2aa687da846) |
| Microsoft Teams (Free) | TEAMS_FREE | 16ddbbfc-09ea-4de2-b1d7-312db6112d70 | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>MCOFREE (617d9209-3b90-4879-96e6-838c42b2701d)<br/>TEAMS_FREE (4fa4026d-ce74-4962-a151-8e96d57ea8e4)<br/>SHAREPOINTDESKLESS (902b47e5-dcb2-4fdc-858b-c63a90a2bdb9)<br/>TEAMS_FREE_SERVICE (bd6f2ac2-991a-49f9-b23c-18c96a02c228)<br/>WHITEBOARD_FIRSTLINE1 (36b29273-c6d0-477a-aca6-6fbe24f538e3) | EXCHANGE FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>MCO FREE FOR MICROSOFT TEAMS (FREE) (617d9209-3b90-4879-96e6-838c42b2701d)<br/>MICROSOFT TEAMS (FREE) (4fa4026d-ce74-4962-a151-8e96d57ea8e4)<br/>SHAREPOINT KIOSK (902b47e5-dcb2-4fdc-858b-c63a90a2bdb9)<br/>TEAMS FREE SERVICE (bd6f2ac2-991a-49f9-b23c-18c96a02c228)<br/>WHITEBOARD (FIRSTLINE) (36b29273-c6d0-477a-aca6-6fbe24f538e3) | | Microsoft Teams Essentials | Teams_Ess | fde42873-30b6-436b-b361-21af5a6b84ae | TeamsEss (f4f2f6de-6830-442b-a433-e92249faebe2) | Microsoft Teams Essentials (f4f2f6de-6830-442b-a433-e92249faebe2) | | Microsoft Teams Essentials (AAD Identity) | TEAMS_ESSENTIALS_AAD | 3ab6abff-666f-4424-bfb7-f0bc274ec7bc | EXCHANGE_S_DESKLESS (4a82b400-a79f-41a4-b4e2-e94f5787b113)<br/>FORMS_PLAN_E1 (159f4cd6-e380-449f-a816-af1a9ef76344)<br/>MICROSOFT_SEARCH (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>INTUNE_O365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>SHAREPOINTWAC (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>ONEDRIVE_BASIC_P2 (4495894f-534f-41ca-9d3b-0ebf1220a423)<br/>MCOIMP (afc06cb0-b4f4-4473-8286-d644f70d8faf) | Exchange Online Kiosk (4a82b400-a79f-41a4-b4e2-e94f5787b113)<br/>Microsoft Forms (Plan E1) (159f4cd6-e380-449f-a816-af1a9ef76344)<br/>Microsoft Search (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>Microsoft Teams (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>Mobile Device Management for Office 365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>Office for the Web (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>OneDrive for Business (Basic 2) (4495894f-534f-41ca-9d3b-0ebf1220a423)<br/>Skype for Business Online (Plan 1) (afc06cb0-b4f4-4473-8286-d644f70d8faf) |
active-directory Add Users Administrator https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/add-users-administrator.md
Previously updated : 10/12/2022 Last updated : 04/21/2023
# Add Azure Active Directory B2B collaboration users in the Azure portal
-As a user who is assigned any of the limited administrator directory roles, you can use the Azure portal to invite B2B collaboration users. You can invite guest users to the directory, to a group, or to an application. After you invite a user through any of these methods, the invited user's account is added to Azure Active Directory (Azure AD), with a user type of *Guest*. The guest user must then redeem their invitation to access resources. An invitation of a user does not expire.
+As a user who is assigned any of the limited administrator directory roles, you can use the Azure portal to invite B2B collaboration users. You can invite guest users to the directory, to a group, or to an application. After you invite a user through any of these methods, the invited user's account is added to Azure Active Directory (Azure AD), with a user type of *Guest*. The guest user must then redeem their invitation to access resources. An invitation of a user doesn't expire.
After you add a guest user to the directory, you can either send the guest user a direct link to a shared app, or the guest user can select the redemption URL in the invitation email. For more information about the redemption process, see [B2B collaboration invitation redemption](redemption-experience.md). > [!IMPORTANT] > You should follow the steps in [How-to: Add your organization's privacy info in Azure Active Directory](../fundamentals/active-directory-properties-area.md) to add the URL of your organization's privacy statement. As part of the first time invitation redemption process, an invited user must consent to your privacy terms to continue.
+The updated experience for creating new users covered in this article is available as an Azure AD preview feature. This feature is enabled by default, but you can opt out by going to **Azure AD** > **Preview features** and disabling the **Create user experience** feature. For more information about previews, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+
+Instructions for the legacy create user process can be found in the [Add or delete users](../fundamentals/add-users-azure-active-directory.md) article.
+ ## Before you begin Make sure your organization's external collaboration settings are configured such that you're allowed to invite guests. By default, all users and admins can invite guests. But your organization's external collaboration policies might be configured to prevent certain types of users or admins from inviting guests. To find out how to view and set these policies, see [Enable B2B external collaboration and manage who can invite guests](external-collaboration-settings-configure.md).
Make sure your organization's external collaboration settings are configured suc
To add B2B collaboration users to the directory, follow these steps:
-1. Sign in to the [Azure portal](https://portal.azure.com) as a user who is assigned a limited administrator directory role or the Guest Inviter role.
-2. Search for and select **Azure Active Directory** from any page.
-3. Under **Manage**, select **Users**.
-4. Select **New user** > **Invite external user**. (Or, if you're using the legacy experience, select **New guest user**).
-5. On the **New user** page, select **Invite user** and then add the guest user's information.
+1. Sign in to the [Azure portal](https://portal.azure.com/) in the **User Administrator** role. A role with Guest Inviter privileges can also invite external users.
+
+1. Navigate to **Azure Active Directory** > **Users**.
+
+1. Select **Invite external user** from the menu.
+
+ ![Screenshot of the invite external user menu option.](media/add-users-administrator/invite-external-user-menu.png)
+
+### Basics
+
+In this section, you're inviting the guest to your tenant using *their email address*. If you need to create a guest user with a domain account, use the [create new user process](../fundamentals/how-to-create-delete-users.md#create-a-new-user) but change the **User type** to **Guest**.
+
+- **Email**: Enter the email address for the guest user you're inviting.
+
+- **Display name**: Provide the display name.
+
+- **Invitation message**: Select the **Send invite message** checkbox to customize a brief message to the guest. Provide a Cc recipient, if necessary.
+
+![Screenshot of the invite external user Basics tab.](media/add-users-administrator/invite-external-user-basics-tab.png)
+
+Either select the **Review + invite** button to create the new user or **Next: Properties** to complete the next section.
+
+### Properties
+
+There are six categories of user properties you can provide. These properties can be added or updated after the user is created. To manage these details, go to **Azure AD** > **Users** and select a user to update.
+
+- **Identity:** Enter the user's first and last name. Set the User type as either Member or Guest. For more information about the difference between external guests and members, see [B2B collaboration user properties](user-properties.md)
+
+- **Job information:** Add any job-related information, such as the user's job title, department, or manager.
+
+- **Contact information:** Add any relevant contact information for the user.
+
+- **Parental controls:** For organizations like K-12 school districts, the user's age group may need to be provided. *Minors* are 12 and under, *Not adult* are 13-18 years old, and *Adults* are 18 and over. The combination of age group and consent provided by parent options determine the Legal age group classification. The Legal age group classification may limit the user's access and authority.
+
+- **Settings:** Specify the user's global location.
+
+Either select the **Review + invite** button to create the new user or **Next: Assignments** to complete the next section.
+
+### Assignments
- ![Screenshot showing the new user page.](media/add-users-administrator/invite-user.png)
+You can assign external users to a group or an Azure AD role when the account is created. You can assign the user to up to 20 groups or roles. Group and role assignments can be added after the user is created. The **Privileged Role Administrator** role is required to assign Azure AD roles.
- - **Name.** The first and last name of the guest user.
- - **Email address (required)**. The email address of the guest user.
- - **Personal message (optional)** Include a personal welcome message to the guest user.
- - **Groups**: You can add the guest user to one or more existing groups, or you can do it later.
- - **Roles**: If you require Azure AD administrative permissions for the user, you can add them to an Azure AD role by selecting **User** next to **Roles**. [Learn more](../../role-based-access-control/role-assignments-external-users.md) about Azure roles for external guest users.
+**To assign a group to the new user**:
+
+1. Select **+ Add group**.
+1. From the menu that appears, choose up to 20 groups from the list and select the **Select** button.
+1. Select the **Review + create** button.
+
+ ![Screenshot of the add group assignment process.](media/add-users-administrator/invite-external-user-assignments-tab.png)
+
+**To assign a role to the new user**:
+
+1. Select **+ Add role**.
+1. From the menu that appears, choose up to 20 roles from the list and select the **Select** button.
+1. Select the **Review + invite** button.
+
+### Review and create
+
+The final tab captures several key details from the user creation process. Review the details and select the **Invite** button if everything looks good. An email invitation is automatically sent to the user. After you send the invitation, the user account is automatically added to the directory as a guest.
+
+ ![Screenshot showing the user list including the new Guest user.](media/add-users-administrator//guest-user-type.png)
+
+### External user invitations
+<a name="resend-invitations-to-guest-users"></a>
+
+When you invite an external guest user by sending an email invitation, you can check the status of the invitation from the user's details. If they haven't redeemed their invitation, you can resend the invitation email.
+
+1. Go to **Azure AD** > **Users** and select the invited guest user.
+1. In the **My Feed** section, locate the **B2B collaboration** tile.
+ - If the invitation state is **PendingAcceptance**, select the **Resend invitation** link to send another email and follow the prompts.
+ - You can also select the **Properties** for the user and view the **Invitation state**.
+
+![Screenshot of the My Feed section of the user overview page.](media/add-users-administrator/external-user-invitation-state.png)
> [!NOTE] > Group email addresses aren't supported; enter the email address for an individual. Also, some email providers allow users to add a plus symbol (+) and additional text to their email addresses to help with things like inbox filtering. However, Azure AD doesn't currently support plus symbols in email addresses. To avoid delivery issues, omit the plus symbol and any characters following it up to the @ symbol.
-6. Select **Invite** to automatically send the invitation to the guest user.
-
-After you send the invitation, the user account is automatically added to the directory as a guest.
- ![Screenshot showing the user list including the new Guest user.](media/add-users-administrator//guest-user-type.png)
+The user is added to your directory with a user principal name (UPN) in the format *emailaddress*#EXT#\@*domain*. For example: *john_contoso.com#EXT#\@fabrikam.onmicrosoft.com*, where fabrikam.onmicrosoft.com is the organization from which you sent the invitations. ([Learn more about B2B collaboration user properties](user-properties.md).)
-The user is added to your directory with a user principal name (UPN) in the format *emailaddress*#EXT#\@*domain*, for example, *john_contoso.com#EXT#\@fabrikam.onmicrosoft.com*, where fabrikam.onmicrosoft.com is the organization from which you sent the invitations. ([Learn more about B2B collaboration user properties](user-properties.md).)
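If you need to script the same invitation instead of using the portal, a minimal Microsoft Graph PowerShell sketch could look like the following (the email address and redirect URL are placeholders):

```powershell
Connect-MgGraph -Scopes "User.Invite.All"

# Sends the invitation email and creates the guest account in the directory.
New-MgInvitation `
    -InvitedUserEmailAddress "guest@fabrikam.com" `
    -InvitedUserDisplayName "Guest User" `
    -InviteRedirectUrl "https://myapps.microsoft.com" `
    -SendInvitationMessage
```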
## Add guest users to a group
-If you need to manually add B2B collaboration users to a group, follow these steps:
+
+If you need to manually add B2B collaboration users to a group after the user was invited, follow these steps:
1. Sign in to the [Azure portal](https://portal.azure.com) as an Azure AD administrator. 2. Search for and select **Azure Active Directory** from any page.
If you need to manually add B2B collaboration users to a group, follow these ste
4. Select a group (or select **New group** to create a new one). It's a good idea to include in the group description that the group contains B2B guest users. 5. Under **Manage**, select **Members**. 6. Select **Add members**.
-7. Do one of the following:
+7. Complete one of the following sets of steps:
- *If the guest user is already in the directory:*
To add B2B collaboration users to an application, follow these steps:
5. Under **Manage**, select **Users and groups**. 6. Select **Add user/group**. 7. On the **Add Assignment** page, select the link under **Users**.
-8. Do one of the following:
+8. Complete one of the following sets of steps:
- *If the guest user is already in the directory:*
To add B2B collaboration users to an application, follow these steps:
d. Select **Assign**.
-## Resend invitations to guest users
-
-If a guest user hasn't yet redeemed their invitation, you can resend the invitation email.
-
-1. Sign in to the [Azure portal](https://portal.azure.com) as an Azure AD administrator.
-2. Search for and select **Azure Active Directory** from any page.
-3. Under **Manage**, select **Users**.
-4. In the list, select the user's name to open their user profile.
-5. Under **My Feed**, in the **B2B collaboration** tile, select the **Manage (resend invitation / reset status** link.
-6. If the user hasn't yet accepted the invitation, Select the **Yes** option to resend.
-
- ![Screenshot showing the Resend Invite radio button.](./media/add-users-administrator/resend-invitation.png)
-
-7. In the confirmation message, select **Yes** to confirm that you want to send the user a new email invitation for redeeming their guest account. An invitation URL will be generated and sent to the user.
- ## Next steps - To learn how non-Azure AD admins can add B2B guest users, see [How users in your organization can invite guest users to an app](add-users-information-worker.md)
active-directory B2b Quickstart Add Guest Users Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/b2b-quickstart-add-guest-users-portal.md
Previously updated : 02/16/2023 Last updated : 04/21/2023
In this quickstart, you'll learn how to add a new guest user to your Azure AD di
If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
+The updated experience for creating new users covered in this article is available as an Azure AD preview feature. This feature is enabled by default, but you can opt out by going to **Azure AD** > **Preview features** and disabling the **Create user experience** feature. For more information about previews, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+
+Instructions for the legacy create user process can be found in the [Add or delete users](../fundamentals/add-users-azure-active-directory.md) article.
+ ## Prerequisites To complete the scenario in this quickstart, you need: -- A role that allows you to create users in your tenant directory, such as the Global Administrator role or a limited administrator directory role (for example, Guest inviter or User administrator).
+- A role that allows you to create users in your tenant directory, such as the Global Administrator role or a limited administrator directory role such as Guest Inviter or User Administrator.
- Access to a valid email address outside of your Azure AD tenant, such as a separate work, school, or social email address. You'll use this email to create the guest account in your tenant directory and access the invitation.
-## Add a new guest user in Azure AD
+## Invite an external guest user
-1. Sign in to the [Azure portal](https://portal.azure.com/) with an account that's been assigned the Global administrator, Guest, inviter, or User administrator role.
+This quickstart guide provides the basic steps to invite an external user. To learn about all of the properties and settings that you can include when you invite an external user, see [How to create and delete a user](../fundamentals/how-to-create-delete-users.md).
-1. Under **Azure services**, select **Azure Active Directory** (or use the search box to find and select **Azure Active Directory**).
+1. Sign in to the [Azure portal](https://portal.azure.com/) using one of the roles listed in the Prerequisites.
- :::image type="content" source="media/quickstart-add-users-portal/azure-active-directory-service.png" alt-text="Screenshot showing where to select the Azure Active Directory service.":::
+1. Navigate to **Azure Active Directory** > **Users**.
-1. Under **Manage**, select **Users**.
+1. Select **Invite external user** from the menu.
+
+ ![Screenshot of the invite external user menu option.](media/quickstart-add-users-portal/invite-external-user-menu.png)
+
+### Basics for external users
- :::image type="content" source="media/quickstart-add-users-portal/quickstart-users-portal-user.png" alt-text="Screenshot showing where to select the Users option.":::
+In this section, you're inviting the guest to your tenant using *their email address*. For this quickstart, enter an email address that you can access.
-1. Under **New user** select **Invite external user**.
+- **Email**: Enter the email address for the guest user you're inviting.
- :::image type="content" source="media/quickstart-add-users-portal/new-guest-user.png" alt-text="Screenshot showing where to select the New guest user option.":::
+- **Display name**: Provide the display name.
-1. On the **New user** page, select **Invite user** and then add the guest user's information.
+- **Invitation message**: Select the **Send invite message** checkbox to customize a brief message to preview how the invitation message appears.
- - **Name.** The first and last name of the guest user.
- - **Email address (required)**. The email address of the guest user.
- - **Personal message (optional)** Include a personal welcome message to the guest user.
- - **Groups**: You can add the guest user to one or more existing groups, or you can do it later.
- - **Roles**: If you require Azure AD administrative permissions for the user, you can add them to an Azure AD role.
+![Screenshot of the invite external user Basics tab.](media/quickstart-add-users-portal/invite-external-user-basics-tab.png)
- :::image type="content" source="media/quickstart-add-users-portal/invite-user.png" alt-text="Screenshot showing the new user page.":::
+Select the **Review and invite** button to finalize the process.
-1. Select **Invite** to automatically send the invitation to the guest user. A notification appears in the upper right with the message **Successfully invited user**.
+### Review and invite
+
+The final tab captures several key details from the user creation process. Review the details and select the **Invite** button if everything looks good.
+
+An email invitation is sent automatically.
1. After you send the invitation, the user account is automatically added to the directory as a guest. :::image type="content" source="media/quickstart-add-users-portal/new-guest-user-directory.png" alt-text="Screenshot showing the new guest user in the directory."::: - ## Accept the invitation Now sign in as the guest user to see the invitation.
Now sign in as the guest user to see the invitation.
:::image type="content" source="media/quickstart-add-users-portal/quickstart-users-portal-email-small.png" alt-text="Screenshot showing the B2B invitation email."::: - 1. In the email body, select **Accept invitation**. A **Review permissions** page opens in the browser. :::image type="content" source="media/quickstart-add-users-portal/consent-screen.png" alt-text="Screenshot showing the Review permissions page.":::
active-directory Reset Redemption Status https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/reset-redemption-status.md
Connect-MgGraph -Scopes "User.ReadWrite.All"
$user = Get-MgUser -Filter "startsWith(mail, 'john.doe@fabrikam.net')" New-MgInvitation ` -InvitedUserEmailAddress $user.Mail `
- -InviteRedirectUrl "http://myapps.microsoft.com" `
+ -InviteRedirectUrl "https://myapps.microsoft.com" `
-ResetRedemption ` -SendInvitationMessage ` -InvitedUser $user
active-directory Certificate Authorities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/certificate-authorities.md
- Title: Azure Active Directory certificate authorities
-description: Listing of trusted certificates used in Azure
- Previously updated: 10/10/2020
-# Certificate authorities used by Azure Active Directory
-
-> [!IMPORTANT]
-> The information in this page is relevant only to entities that explicitly specify a list of acceptable Certificate Authorities (CAs). This practice, known as certificate pinning, should be avoided unless there are no other options.
-
-Any entity trying to access Azure Active Directory (Azure AD) identity services via the TLS/SSL protocols will be presented with certificates from the CAs listed below. If the entity trusts those CAs, it may use the certificates to verify the identity and legitimacy of the identity services and establish secure connections.
-
-Certificate Authorities can be classified into root CAs and intermediate CAs. Typically, root CAs have one or more associated intermediate CAs. This article lists the root CAs used by Azure AD identity services and the intermediate CAs associated with each of those roots. For each CA, we include Uniform Resource Identifiers (URIs) to download the associated Authority Information Access (AIA) and the Certificate Revocation List Distribution Point (CDP) files. When appropriate, we also provide a URI to the Online Certificate Status Protocol (OCSP) endpoint.
-
-## CAs used in Azure Public and Azure US Government clouds
-
-Different services may use different root or intermediate CAs. Therefore all entries listed below may be required.
-
-### DigiCert Global Root G2
--
-| Root CA| Serial Number| Issue Date Expiration Date| SHA1 Thumbprint| URIs |
-| - |- |-|-|-|-|
-| DigiCert Global Root G2| 033af1e6a711a 9a0bb2864b11d09fae5| August 1, 2013 <br>January 15, 2038| df3c24f9bfd666761b268 073fe06d1cc8d4f82a4| [AIA](http://cacerts.digicert.com/DigiCertGlobalRootG2.crt)<br>[CDP](http://crl3.digicert.com/DigiCertGlobalRootG2.crl) |
--
-#### Associated Intermediate CAs
-
-| Issuing and Intermediate CA| Serial Number| Issue Date Expiration Date| SHA1 Thumbprint| URIs |
-| - | - | - | - | - |
-| Microsoft Azure TLS Issuing CA 01| 0aafa6c5ca63c45141 ea3be1f7c75317| July 29, 2020<br>June 27, 2024| 2f2877c5d778c31e0f29c 7e371df5471bd673173| [AIA](https://www.microsoft.com/pkiops/certs/Microsoft%20Azure%20TLS%20Issuing%20CA%2001%20-%20xsign.crt)<br>[CDP](https://www.microsoft.com/pkiops/crl/Microsoft%20Azure%20TLS%20Issuing%20CA%2001.crl)|
-|Microsoft Azure TLS Issuing CA 02| 0c6ae97cced59983 8690a00a9ea53214| July 29, 2020<br>June 27, 2024| e7eea674ca718e3befd 90858e09f8372ad0ae2aa| [AIA](https://www.microsoft.com/pkiops/certs/Microsoft%20Azure%20TLS%20Issuing%20CA%2002%20-%20xsign.crt)<br>[CDP](https://www.microsoft.com/pkiops/crl/Microsoft%20Azure%20TLS%20Issuing%20CA%2002.crl) |
-| Microsoft Azure TLS Issuing CA 05| 0d7bede97d8209967a 52631b8bdd18bd| July 29, 2020<br>June 27, 2024| 6c3af02e7f269aa73a fd0eff2a88a4a1f04ed1e5| [AIA](https://www.microsoft.com/pkiops/certs/Microsoft%20Azure%20TLS%20Issuing%20CA%2005%20-%20xsign.crt)<br>[CDP](https://www.microsoft.com/pkiops/crl/Microsoft%20Azure%20TLS%20Issuing%20CA%2005.crl) |
-| Microsoft Azure TLS Issuing CA 06| 02e79171fb8021e93fe 2d983834c50c0| July 29, 2020<br>June 27, 2024| 30e01761ab97e59a06b 41ef20af6f2de7ef4f7b0| [AIA](https://www.microsoft.com/pkiops/certs/Microsoft%20Azure%20TLS%20Issuing%20CA%2006.cer)<br>[CDP](https://www.microsoft.com/pkiops/crl/Microsoft%20Azure%20TLS%20Issuing%20CA%2006.crl) |
--
- ### Baltimore CyberTrust Root
-
-| Root CA| Serial Number| Issue Date Expiration Date| SHA1 Thumbprint| URIs |
-| - | - | - | - | - |
-| Baltimore CyberTrust Root| 020000b9| May 12, 2000<br>May 12, 2025| d4de20d05e66fc53fe 1a50882c78db2852cae474|<br>[CDP](http://crl3.digicert.com/Omniroot2025.crl)<br>[OCSP](http://ocsp.digicert.com/) |
--
-#### Associated Intermediate CAs
-
-| Issuing and Intermediate CA| Serial Number| Issue Date Expiration Date| SHA1 Thumbprint| URIs |
-| - | - | - | - | - |
-| Microsoft RSA TLS CA 01| 703d7a8f0ebf55aaa 59f98eaf4a206004eb2516a| July 21, 2020<br>October 8, 2024| 417e225037fbfaa4f9 5761d5ae729e1aea7e3a42| [AIA](https://www.microsoft.com/pki/mscorp/Microsoft%20RSA%20TLS%20CA%2001.crt)<br>[CDP](https://mscrl.microsoft.com/pki/mscorp/crl/Microsoft%20RSA%20TLS%20CA%2001.crl)<br>[OCSP](http://ocsp.msocsp.com/) |
-| Microsoft RSA TLS CA 02| b0c2d2d13cdd56cdaa 6ab6e2c04440be4a429c75| July 21, 2020<br>May 20, 2024| 54d9d20239080c32316ed 9ff980a48988f4adf2d| [AIA](https://www.microsoft.com/pki/mscorp/Microsoft%20RSA%20TLS%20CA%2002.crt)<br>[CDP](https://mscrl.microsoft.com/pki/mscorp/crl/Microsoft%20RSA%20TLS%20CA%2002.crl)<br>[OCSP](http://ocsp.msocsp.com/) |
--
- ### DigiCert Global Root CA
-
-| Root CA| Serial Number| Issue Date Expiration Date| SHA1 Thumbprint| URIs |
-| - | - | - | - | - |
-| DigiCert Global Root CA| 083be056904246 b1a1756ac95991c74a| November 9, 2006<br>November 9, 2031| a8985d3a65e5e5c4b2d7 d66d40c6dd2fb19c5436| [CDP](http://crl3.digicert.com/DigiCertGlobalRootCA.crl)<br>[OCSP](http://ocsp.digicert.com/) |
--
-#### Associated Intermediate CAs
-
-| Issuing and Intermediate CA| Serial Number| Issue Date Expiration Date| SHA1 Thumbprint| URIs |
-| - | - | - | - | - |
-| DigiCert SHA2 Secure Server CA| 01fda3eb6eca75c 888438b724bcfbc91| March 8, 2013<br>March 8, 2023| 1fb86b1168ec743154062 e8c9cc5b171a4b7ccb4| [AIA](http://cacerts.digicert.com/DigiCertSHA2SecureServerCA.crt)<br>[CDP](http://crl3.digicert.com/ssca-sha2-g6.crl)<br>[OCSP](http://ocsp.digicert.com/) |
-| DigiCert SHA2 Secure Server CA |02742eaa17ca8e21 c717bb1ffcfd0ca0 |September 22, 2020<br>September 22, 2030|626d44e704d1ceabe3bf 0d53397464ac8080142c|[AIA](http://cacerts.digicert.com/DigiCertSHA2SecureServerCA-2.crt)<br>[CDP](http://crl3.digicert.com/DigiCertSHA2SecureServerCA.crl)<br>[OCSP](http://ocsp.digicert.com/)|
--
-## CAs used in Azure China 21Vianet cloud
-
-### DigiCert Global Root CA
--
-| Root CA| Serial Number| Issue Date Expiration Date| SHA1 Thumbprint| URIs |
-| - | - | - | - | - |
-| DigiCert Global Root CA| 083be056904246b 1a1756ac95991c74a| Nov. 9, 2006<br>Nov. 9, 2031| a8985d3a65e5e5c4b2d7 d66d40c6dd2fb19c5436| [CDP](http://ocsp.digicert.com/)<br>[OCSP](http://crl3.digicert.com/DigiCertGlobalRootCA.crl) |
--
-#### Associated Intermediate CA
-
-| Issuing and Intermediate CA| Serial Number| Issue Date Expiration Date| SHA1 Thumbprint| URIs |
-| - | - | - | - | - |
-| DigiCert Basic RSA CN CA G2| 02f7e1f982bad 009aff47dc95741b2f6| March 4, 2020<br>March 4, 2030| 4d1fa5d1fb1ac3917c08e 43f65015e6aea571179| [AIA](http://cacerts.digicert.cn/DigiCertBasicRSACNCAG2.crt)<br>[CDP](http://crl.digicert.cn/DigiCertBasicRSACNCAG2.crl)<br>[OCSP](http://ocsp.digicert.cn/) |
-
-## Next Steps
-[Learn about Microsoft 365 Encryption chains](/microsoft-365/compliance/encryption-office-365-certificate-chains)
active-directory How To Create Delete Users https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/how-to-create-delete-users.md
+
+ Title: Create or delete users
+description: Instructions for how to create new users or delete existing users.
+++++++ Last updated : 04/21/2023++++++
+# How to create, invite, and delete users (preview)
+
+This article explains how to create a new user, invite an external guest, and delete a user in your Azure Active Directory (Azure AD) tenant.
+
+The updated experience for creating new users covered in this article is available as an Azure AD preview feature. This feature is enabled by default, but you can opt out by going to **Azure AD** > **Preview features** and disabling the **Create user experience** feature. For more information about previews, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+
+Instructions for the legacy create user process can be found in the [Add or delete users](add-users-azure-active-directory.md) article.
++
+## Before you begin
+
+Before you create or invite a new user, take some time to review the types of users, their authentication methods, and their access within the Azure AD tenant. For example, do you need to create an internal guest, an internal user, or an external guest? Does your new user need guest or member privileges?
+
+- **Internal member**: These users are most likely full-time employees in your organization.
+- **Internal guest**: These users have an account in your tenant, but have guest-level privileges. It's possible they were created within your tenant prior to the availability of B2B collaboration.
+- **External member**: These users authenticate using an external account, but have member access to your tenant. These types of users are common in [multi-tenant organizations](../multi-tenant-organizations/overview.md#what-is-a-multi-tenant-organization).
+- **External guest**: These users are true guests of your tenant who authenticate using an external method and who have guest-level privileges.
+
+For more information about the differences between internal and external guests and members, see [B2B collaboration properties](../external-identities/user-properties.md).
+
+Authentication methods vary based on the type of user you create. Internal guests and members have credentials in your Azure AD tenant that can be managed by administrators. These users can also reset their own password. External members authenticate to their home Azure AD tenant and your Azure AD tenant authenticates the user through a federated sign-in with the external member's Azure AD tenant. If external members forget their password, the administrator in their Azure AD tenant can reset their password. External guests set up their own password using the link they receive in email when their account is created.
+
+Reviewing the default user permissions may also help you determine the type of user you need to create. For more information, see [Set default user permissions](users-default-permissions.md)
+
+## Required roles
+
+The required role of least privilege varies based on the type of user you're adding and whether you need to assign Azure AD roles at the same time. **Global Administrator** can create users and assign roles, but whenever possible you should use the least privileged role.
+
+| Task | Role |
+| -- | -- |
+| Create a new user | User Administrator |
+| Invite an external guest | Guest Inviter |
+| Assign Azure AD roles | Privileged Role Administrator |
+
+## Create a new user
+
+1. Sign in to the [Azure portal](https://portal.azure.com/) in the **User Administrator** role.
+
+1. Navigate to **Azure Active Directory** > **Users**.
+
+1. Select **Create new user** from the menu.
+
+ ![Screenshot of the create new user menu.](media/how-to-create-delete-users/create-new-user-menu.png)
+
+### Basics
+
+The **Basics** tab contains the core fields required to create a new user.
+
+- **User principal name**: Enter a unique username and select a domain from the menu after the @ symbol. Select **Domain not listed** if you need to create a new domain. For more information, see [Add your custom domain name](add-custom-domain.md)
+
+- **Mail nickname**: If you need to enter an email nickname that is different from the user principal name you entered, uncheck the **Derive from user principal name** option, then enter the mail nickname.
+
+- **Display name**: Enter the user's name, such as Chris Green or Chris A. Green
+
+- **Password**: Provide a password for the user to use during their initial sign-in. Uncheck the **Auto-generate password** option to enter a different password.
+
+- **Account enabled**: This option is checked by default. Uncheck it to prevent the new user from being able to sign in. You can change this setting after the user is created. This setting was called **Block sign in** in the legacy create user process.
+
+
+![Screenshot of the create new user Basics tab.](media/how-to-create-delete-users/create-new-user-basics-tab.png)
+
+Either select the **Review + create** button to create the new user or **Next: Properties** to complete the next section.
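
If you prefer to script this step, the same core fields can be set with Microsoft Graph PowerShell. This is a minimal sketch, not the article's procedure; the UPN, display name, nickname, and password below are placeholders.

```powershell
# Minimal sketch using Microsoft Graph PowerShell (all values are placeholders).
Connect-MgGraph -Scopes "User.ReadWrite.All"

$passwordProfile = @{
    Password                      = "<initial-password>"
    ForceChangePasswordNextSignIn = $true
}

New-MgUser -DisplayName "Chris Green" `
    -UserPrincipalName "chris.green@contoso.com" `
    -MailNickname "chris.green" `
    -AccountEnabled `
    -PasswordProfile $passwordProfile
```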
+
+### Properties
+
+There are six categories of user properties you can provide. These properties can be added or updated after the user is created. To manage these details, go to **Azure AD** > **Users** and select a user to update.
+
+- **Identity:** Enter the user's first and last name. Set the User type as either Member or Guest.
+
+- **Job information:** Add any job-related information, such as the user's job title, department, or manager.
+
+- **Contact information:** Add any relevant contact information for the user.
+
+- **Parental controls:** For organizations like K-12 school districts, the user's age group may need to be provided. *Minors* are 12 and under, *Not adult* are 13-18 years old, and *Adults* are 18 and over. The combination of age group and consent provided by parent options determine the Legal age group classification. The Legal age group classification may limit the user's access and authority.
+
+- **Settings:** Specify the user's global location.
+
+Either select the **Review + create** button to create the new user or **Next: Assignments** to complete the next section.
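
These property categories map to standard user attributes in Microsoft Graph. As a rough sketch (the user ID and attribute values are placeholders), several of them can also be set after creation with `Update-MgUser`:

```powershell
# Sketch: update identity, job, and settings attributes for an existing user.
Connect-MgGraph -Scopes "User.ReadWrite.All"

Update-MgUser -UserId "chris.green@contoso.com" `
    -GivenName "Chris" -Surname "Green" `
    -JobTitle "Marketing Manager" -Department "Marketing" `
    -UsageLocation "US"
```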
+
+### Assignments
+
+You can assign the user to an administrative unit, group, or Azure AD role when the account is created. You can assign the user to up to 20 groups or roles, but only one administrative unit. Assignments can also be added after the user is created; a PowerShell sketch for adding a group assignment follows the steps below.
+
+**To assign a group to the new user**:
+
+1. Select **+ Add group**.
+1. From the menu that appears, choose up to 20 groups from the list and select the **Select** button.
+1. Select the **Review + create** button.
+
+ ![Screenshot of the add group assignment process.](media/how-to-create-delete-users/add-group-assignment.png)
+
+**To assign a role to the new user**:
+
+1. Select **+ Add role**.
+1. From the menu that appears, choose up to 20 roles from the list and select the **Select** button.
+1. Select the **Review + create** button.
+
+**To add an administrative unit to the new user**:
+
+1. Select **+ Add administrative unit**.
+1. From the menu that appears, choose one administrative unit from the list and select the **Select** button.
+1. Select the **Review + create** button.
+
+### Review and create
+
+The final tab captures several key details from the user creation process. Review the details and select the **Create** button if everything looks good.
+
+## Invite an external user
+
+The overall process for inviting an external guest user is similar, except for a few details on the **Basics** tab and the email invitation process. You can't assign external users to administrative units.
+
+1. Sign in to the [Azure portal](https://portal.azure.com/) in the **User Administrator** role. A role with Guest Inviter privileges can also invite external users.
+
+1. Navigate to **Azure Active Directory** > **Users**.
+
+1. Select **Invite external user** from the menu.
+
+ ![Screenshot of the invite external user menu option.](media/how-to-create-delete-users/invite-external-user-menu.png)
+
+### Basics for external users
+
+In this section, you're inviting the guest to your tenant using *their email address*. If you need to create a guest user with a domain account, use the [create new user process](#create-a-new-user) but change the **User type** to **Guest**.
+
+- **Email**: Enter the email address for the guest user you're inviting.
+
+- **Display name**: Provide the display name.
+
+- **Invitation message**: Select the **Send invite message** checkbox to customize a brief message to the guest. Provide a Cc recipient, if necessary.
+
+![Screenshot of the invite external user Basics tab.](media/how-to-create-delete-users/invite-external-user-basics-tab.png)
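
The same invitation can be sent with Microsoft Graph PowerShell. A minimal sketch follows; the email address and display name are placeholders.

```powershell
# Sketch: invite an external guest by email (values are placeholders).
Connect-MgGraph -Scopes "User.Invite.All"

New-MgInvitation `
    -InvitedUserEmailAddress "guest@fabrikam.com" `
    -InvitedUserDisplayName "Sample Guest" `
    -InviteRedirectUrl "https://myapps.microsoft.com" `
    -SendInvitationMessage
```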
+
+### Guest user invitations
+
+When you invite an external guest user by sending an email invitation, you can check the status of the invitation from the user's details.
+
+1. Go to **Azure AD** > **Users** and select the invited guest user.
+1. In the **My Feed** section, locate the **B2B collaboration** tile.
+ - If the invitation state is **PendingAcceptance**, select the **Resend invitation** link to send another email.
+ - You can also select the **Properties** for the user and view the **Invitation state**.
+
+![Screenshot of the user details with the invitation status options highlighted.](media/how-to-create-delete-users/external-user-invitation-state.png)
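
The invitation state shown in the portal corresponds to the `externalUserState` property on the user object. A quick check with Microsoft Graph PowerShell might look like the following sketch; the object ID is a placeholder.

```powershell
# Sketch: read a guest user's invitation state (object ID is a placeholder).
Connect-MgGraph -Scopes "User.Read.All"

Get-MgUser -UserId "<guest-object-id>" `
    -Property displayName, mail, externalUserState, externalUserStateChangeDateTime |
    Select-Object DisplayName, Mail, ExternalUserState, ExternalUserStateChangeDateTime
```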
+
+## Add other users
+
+There might be scenarios in which you want to manually create consumer accounts in your Azure Active Directory B2C (Azure AD B2C) directory. For more information about creating consumer accounts, see [Create and delete consumer users in Azure AD B2C](../../active-directory-b2c/manage-users-portal.md).
+
+If you have an environment with both Azure Active Directory (cloud) and Windows Server Active Directory (on-premises), you can add new users by syncing the existing user account data. For more information about hybrid environments and users, see [Integrate your on-premises directories with Azure Active Directory](../hybrid/whatis-hybrid-identity.md).
+
+## Delete a user
+
+You can delete an existing user by using the Azure portal.
+
+- You must have a Global Administrator, Privileged Authentication Administrator, or User Administrator role assignment to delete users in your organization.
+- Global Administrators and Privileged Authentication Administrators can delete any users including other administrators.
+- User Administrators can delete any non-admin users, Helpdesk Administrators, and other User Administrators.
+- For more information, see [Administrator role permissions in Azure AD](../roles/permissions-reference.md).
+
+To delete a user, follow these steps:
+
+1. Sign in to the [Azure portal](https://portal.azure.com/) using one of the appropriate roles.
+
+1. Go to **Azure Active Directory** > **Users**.
+
+1. Search for and select the user you want to delete from your Azure AD tenant.
+
+1. Select **Delete user**.
+
+ ![Screenshot of the All users page with a user selected and the Delete button highlighted.](media/how-to-create-delete-users/delete-existing-user.png)
+
+The user is deleted and no longer appears on the **Users - All users** page. The user can be seen on the **Deleted users** page for the next 30 days and can be restored during that time. For more information about restoring a user, see [Restore or remove a recently deleted user using Azure Active Directory](active-directory-users-restore.md).
+
+When a user is deleted, any licenses consumed by the user are made available for other users.
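
If you script deletions, `Remove-MgUser` performs the same soft delete; the account remains restorable from the **Deleted users** page for 30 days. A minimal sketch (the UPN is a placeholder):

```powershell
# Sketch: soft-delete a user; the account stays in Deleted users for 30 days.
Connect-MgGraph -Scopes "User.ReadWrite.All"

Remove-MgUser -UserId "chris.green@contoso.com"
```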
+
+>[!Note]
+>To update the identity, contact information, or job information for users whose source of authority is Windows Server Active Directory, you must use Windows Server Active Directory. After you complete the update, you must wait for the next synchronization cycle to complete before you'll see the changes.
+## Next steps
+
+* [Learn about B2B collaboration users](../external-identities/add-users-administrator.md)
+* [Review the default user permissions](users-default-permissions.md)
+* [Add a custom domain](add-custom-domain.md)
active-directory How To Customize Branding https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/how-to-customize-branding.md
The branding elements are called out in the following example. Text descriptions
1. **Favicon**: Small icon that appears on the left side of the browser tab. 1. **Header logo**: Space across the top of the web page, below the web browser navigation area.
-1. **Background image** and **page background color**: The entire space behind the sign-in box.
+1. **Background image**: The entire space behind the sign-in box.
+1. **Page background color**: The entire space behind the sign-in box.
1. **Banner logo**: The logo that appears in the upper-left corner of the sign-in box. 1. **Username hint and text**: The text that appears before a user enters their information. 1. **Sign-in page text**: Additional text you can add below the username field.
active-directory Whats New Archive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/whats-new-archive.md
The What's new in Azure Active Directory? release notes provide information abou
- Deprecated functionality - Plans for changes ++
+## October 2022
+
+### General Availability - Upgrade Azure AD Provisioning agent to the latest version (version number: 1.1.977.0)
+++
+**Type:** Plan for change
+**Service category:** Provisioning
+**Product capability:** Azure AD Connect Cloud Sync
+
+Microsoft stops support for the Azure AD provisioning agent with versions 1.1.818.0 and below starting February 1, 2023. If you're using Azure AD cloud sync, make sure you have the latest version of the agent. You can view info about the agent release history [here](../app-provisioning/provisioning-agent-release-version-history.md). You can download the latest version [here](https://download.msappproxy.net/Subscription/d3c8b69d-6bf7-42be-a529-3fe9c2e70c90/Connector/provisioningAgentInstaller).
+
+You can find out which version of the agent you're using as follows:
+
+1. Go to the domain server where the agent is installed.
+1. Right-click the Microsoft Azure AD Connect Provisioning Agent app.
+1. Select the "Details" tab to find the version number. A PowerShell alternative is sketched after these steps.
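
As an alternative to the Details tab, a hedged PowerShell check on the agent server might look like the following; it assumes the agent appears under installed programs with a display name matching this wildcard, which can vary by release.

```powershell
# Sketch: list the installed provisioning agent and its version
# (assumes the display name matches this wildcard; adjust if needed).
Get-Package -Name "*Azure AD Connect Provisioning Agent*" |
    Select-Object Name, Version
```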
+
+> [!NOTE]
+> Azure Active Directory (AD) Connect follows the [Modern Lifecycle Policy](/lifecycle/policies/modern). Changes for products and services under the Modern Lifecycle Policy may be more frequent and require customers to be alert for forthcoming modifications to their product or service.
Products governed by the Modern Lifecycle Policy follow a [continuous support and servicing model](/lifecycle/overview/product-end-of-support-overview). Customers must take the latest update to remain supported. For products and services governed by the Modern Lifecycle Policy, Microsoft's policy is to provide a minimum 30 days' notification when customers are required to take action in order to avoid significant degradation to the normal use of the product or service.
+++
+### General Availability - Add multiple domains to the same SAML/Ws-Fed based identity provider configuration for your external users
+++
+**Type:** New feature
+**Service category:** B2B
+**Product capability:** B2B/B2C
+
+An IT admin can now add multiple domains to a single SAML/WS-Fed identity provider configuration to invite users from multiple domains to authenticate from the same identity provider endpoint. For more information, see: [Federation with SAML/WS-Fed identity providers for guest users](../external-identities/direct-federation.md).
++++
+### General Availability - Limits on the number of configured API permissions for an application registration enforced starting in October 2022
+++
+**Type:** Plan for change
+**Service category:** Other
+**Product capability:** Developer Experience
+
+Starting at the end of October, the total number of required permissions for any single application registration must not exceed 400 permissions across all APIs. Applications exceeding the limit can't increase the number of permissions they're configured for. The existing limit on the number of distinct APIs for which permissions are required remains unchanged and may not exceed 50 APIs.
+
+In the Azure portal, the required permissions list is under API Permissions within specific applications in the application registration menu. When using Microsoft Graph or Microsoft Graph PowerShell, the required permissions list is in the requiredResourceAccess property of an [application](/graph/api/resources/application) entity. For more information, see: [Validation differences by supported account types (signInAudience)](../develop/supported-accounts-validation.md).
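
To see where an app registration stands against these limits, you could count the entries in `requiredResourceAccess` with Microsoft Graph PowerShell. This is a sketch only; the display name filter is a placeholder.

```powershell
# Sketch: count distinct APIs and total configured permissions for one app.
Connect-MgGraph -Scopes "Application.Read.All"

$app = Get-MgApplication -Filter "displayName eq 'My sample app'"

$apiCount        = $app.RequiredResourceAccess.Count
$permissionCount = ($app.RequiredResourceAccess |
    ForEach-Object { $_.ResourceAccess.Count } |
    Measure-Object -Sum).Sum

"Distinct APIs: $apiCount (limit 50); total permissions: $permissionCount (limit 400)"
```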
++++
+### Public Preview - Conditional access Authentication strengths
+++
+**Type:** New feature
+**Service category:** Conditional Access
+**Product capability:** User Authentication
+
+We're announcing Public preview of Authentication strength, a Conditional Access control that allows administrators to specify which authentication methods can be used to access a resource. For more information, see: [Conditional Access authentication strength (preview)](../authentication/concept-authentication-strengths.md). You can use custom authentication strengths to restrict access by requiring specific FIDO2 keys using the Authenticator Attestation GUIDs (AAGUIDs), and apply this through conditional access policies. For more information, see: [FIDO2 security key advanced options](../authentication/concept-authentication-strengths.md#fido2-security-key-advanced-options).
+++
+### Public Preview - Conditional access authentication strengths for external identities
++
+**Type:** New feature
+**Service category:** B2B
+**Product capability:** B2B/B2C
+
+You can now require your business partner (B2B) guests across all Microsoft clouds to use specific authentication methods to access your resources with **Conditional Access Authentication Strength policies**. For more information, see: [Conditional Access: Require an authentication strength for external users](../conditional-access/howto-conditional-access-policy-authentication-strength-external.md).
++++
+### General Availability - Windows Hello for Business, Cloud Kerberos Trust deployment
+++
+**Type:** New feature
+**Service category:** Authentications (Logins)
+**Product capability:** User Authentication
+
+We're excited to announce the general availability of hybrid cloud Kerberos trust, a new Windows Hello for Business deployment model to enable a password-less sign-in experience. With this new model, we've made Windows Hello for Business easier to deploy than the existing key trust and certificate trust deployment models by removing the need for maintaining complicated public key infrastructure (PKI), and Azure Active Directory (AD) Connect synchronization wait times. For more information, see: [Hybrid Cloud Kerberos Trust Deployment](/windows/security/identity-protection/hello-for-business/hello-hybrid-cloud-kerberos-trust).
+++
+### General Availability - Device-based conditional access on Linux Desktops
+++
+**Type:** New feature
+**Service category:** Conditional Access
+**Product capability:** SSO
+
+This feature empowers users on Linux clients to register their devices with Azure AD, enroll into Intune management, and satisfy device-based Conditional Access policies when accessing their corporate resources.
+
+- Users can register their Linux devices with Azure AD
+- Users can enroll in Mobile Device Management (Intune), which can be used to provide compliance decisions based upon policy definitions to allow device based conditional access on Linux Desktops
+- If compliant, users can use Microsoft Edge Browser to enable Single-Sign on to M365/Azure resources and satisfy device-based Conditional Access policies.
++
+For more information, see:
+[Azure AD registered devices](../devices/concept-azure-ad-register.md).
+[Plan your Azure Active Directory device deployment](../devices/plan-device-deployment.md)
+++
+### General Availability - Deprecation of Azure Active Directory Multi-Factor Authentication.
+++
+**Type:** Deprecated
+**Service category:** MFA
+**Product capability:** Identity Security & Protection
+
+Beginning September 30, 2024, Azure Active Directory Multi-Factor Authentication Server deployments will no longer service multi-factor authentication (MFA) requests, which could cause authentications to fail for your organization. To ensure uninterrupted authentication services, and to remain in a supported state, organizations should migrate their users' authentication data to the cloud-based Azure Active Directory Multi-Factor Authentication service using the latest Migration Utility included in the most recent Azure Active Directory Multi-Factor Authentication Server update. For more information, see: [Migrate from MFA Server to Azure AD Multi-Factor Authentication](../authentication/how-to-migrate-mfa-server-to-azure-mfa.md).
+++
+### Public Preview - Lifecycle Workflows is now available
+++
+**Type:** New feature
+**Service category:** Lifecycle Workflows
+**Product capability:** Identity Governance
++
+We're excited to announce the public preview of Lifecycle Workflows, a new Identity Governance capability that extends the user provisioning process and adds enterprise-grade user lifecycle management capabilities to Azure AD, helping you modernize your identity lifecycle management process. With Lifecycle Workflows, you can:
+
+- Confidently configure and deploy custom workflows to onboard and offboard cloud employees at scale replacing your manual processes.
+- Automate out-of-the-box actions critical to required Joiner and Leaver scenarios and get rich reporting insights.
+- Extend workflows via Logic Apps integrations with custom tasks extensions for more complex scenarios.
+
+For more information, see: [What are Lifecycle Workflows? (Public Preview)](../governance/what-are-lifecycle-workflows.md).
+++
+### Public Preview - User-to-Group Affiliation recommendation for group Access Reviews
+++
+**Type:** New feature
+**Service category:** Access Reviews
+**Product capability:** Identity Governance
+
+This feature provides Machine Learning based recommendations to the reviewers of Azure AD Access Reviews to make the review experience easier and more accurate. The recommendation detects user affiliation with other users within the group, and applies the scoring mechanism we built by computing the user's average distance with other users in the group. For more information, see: [Review recommendations for Access reviews](../governance/review-recommendations-access-reviews.md).
+++
+### General Availability - Group assignment for SuccessFactors Writeback application
+++
+**Type:** New feature
+**Service category:** Provisioning
+**Product capability:** Outbound to SaaS Applications
+
+When configuring writeback of attributes from Azure AD to SAP SuccessFactors Employee Central, you can now specify the scope of users using Azure AD group assignment. For more information, see: [Tutorial: Configure attribute write-back from Azure AD to SAP SuccessFactors](../saas-apps/sap-successfactors-writeback-tutorial.md).
+++
+### General Availability - Number Matching for Microsoft Authenticator notifications
+++
+**Type:** New feature
+**Service category:** Microsoft Authenticator App
+**Product capability:** User Authentication
+
+To prevent accidental notification approvals, admins can now require users to enter the number displayed on the sign-in screen when approving an MFA notification in the Microsoft Authenticator app. We've also refreshed the Azure portal admin UX and Microsoft Graph APIs to make it easier for customers to manage Authenticator app feature roll-outs. As part of this update we have also added the highly requested ability for admins to exclude user groups from each feature.
+
+The number matching feature greatly up-levels the security posture of the Microsoft Authenticator app and protects organizations from MFA fatigue attacks. We highly encourage our customers to adopt this feature by applying the rollout controls we have built. Number matching will begin to be enabled for all users of the Microsoft Authenticator app starting February 27, 2023.
++
+For more information, see: [How to use number matching in multifactor authentication (MFA) notifications - Authentication methods policy](../authentication/how-to-mfa-number-match.md).
+++
+### General Availability - Additional context in Microsoft Authenticator notifications
+++
+**Type:** New feature
+**Service category:** Microsoft Authenticator App
+**Product capability:** User Authentication
+
+Reduce accidental approvals by showing users additional context in Microsoft Authenticator app notifications. Customers can enhance notifications with the following steps:
+
+- Application Context: This feature shows users which application they're signing into.
+- Geographic Location Context: This feature shows users their sign-in location based on the IP address of the device they're signing into.
+
+The feature is available for both MFA and Password-less Phone Sign-in notifications and greatly increases the security posture of the Microsoft Authenticator app. We've also refreshed the Azure portal Admin UX and Microsoft Graph APIs to make it easier for customers to manage Authenticator app feature roll-outs. As part of this update, we've also added the highly requested ability for admins to exclude user groups from certain features.
+
+We highly encourage our customers to adopt these critical security features to reduce accidental approvals of Authenticator notifications by end users.
++
+For more information, see: [How to use additional context in Microsoft Authenticator notifications - Authentication methods policy](../authentication/how-to-mfa-additional-context.md).
+++
+### New Federated Apps available in Azure AD Application gallery - October 2022
+++
+**Type:** New feature
+**Service category:** Enterprise Apps
+**Product capability:** 3rd Party Integration
+++
+In October 2022, we've added the following 15 new applications to our App gallery with Federation support:
+
+[Unifii](https://www.unifii.com.au/), [WaitWell Staff App](https://waitwell.c)
+
+You can also find the documentation for all the applications here: https://aka.ms/AppsTutorial.
+
+To list your application in the Azure AD app gallery, read the details here: https://aka.ms/AzureADAppRequest.
+++++
+### Public preview - New provisioning connectors in the Azure AD Application Gallery - October 2022
+
+**Type:** New feature
+**Service category:** App Provisioning
+**Product capability:** 3rd Party Integration
+
+You can now automate creating, updating, and deleting user accounts for these newly integrated apps:
+
+- [LawVu](../saas-apps/lawvu-provisioning-tutorial.md)
+
+For more information about how to better secure your organization by using automated user account provisioning, see: [Automate user provisioning to SaaS applications with Azure AD](../app-provisioning/user-provisioning.md).
+++ ## September 2022
active-directory Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/whats-new.md
For more information, see: [How to use number matching in multifactor authentica
Earlier, we announced our plan to bring IPv6 support to Microsoft Azure Active Directory (Azure AD), enabling our customers to reach the Azure AD services over IPv4, IPv6 or dual stack endpoints. This is just a reminder that we have started introducing IPv6 support into Azure AD services in a phased approach in late March 2023.
-If you utilize Conditional Access or Identity Protection, and have IPv6 enabled on any of your devices, you likely must take action to avoid impacting your users. For most customers, IPv4 won't completely disappear from their digital landscape, so we aren't planning to require IPv6 or to deprioritize IPv4 in any Azure AD features or services. We'll continue to share additional guidance on IPv6 enablement in Azure AD at this link: [IPv6 support in Azure Active Directory](https://learn.microsoft.com/troubleshoot/azure/active-directory/azure-ad-ipv6-support)
+If you utilize Conditional Access or Identity Protection, and have IPv6 enabled on any of your devices, you likely must take action to avoid impacting your users. For most customers, IPv4 won't completely disappear from their digital landscape, so we aren't planning to require IPv6 or to deprioritize IPv4 in any Azure AD features or services. We'll continue to share additional guidance on IPv6 enablement in Azure AD at this link: [IPv6 support in Azure Active Directory](/troubleshoot/azure/active-directory/azure-ad-ipv6-support).
Microsoft cloud settings let you collaborate with organizations from different M
- Microsoft Azure commercial and Microsoft Azure Government - Microsoft Azure commercial and Microsoft Azure China 21Vianet
-For more information about Microsoft cloud settings for B2B collaboration., see: [Microsoft cloud settings](../external-identities/cross-tenant-access-overview.md#microsoft-cloud-settings).
+For more information about Microsoft cloud settings for B2B collaboration, see [Microsoft cloud settings](../external-identities/cross-tenant-access-overview.md#microsoft-cloud-settings).
We continue to share additional guidance on IPv6 enablement in Azure AD at this
-## October 2022
-
-### General Availability - Upgrade Azure AD Provisioning agent to the latest version (version number: 1.1.977.0)
---
-**Type:** Plan for change
-**Service category:** Provisioning
-**Product capability:** Azure AD Connect Cloud Sync
-
-Microsoft stops support for Azure AD provisioning agent with versions 1.1.818.0 and below starting Feb 1,2023. If you're using Azure AD cloud sync, make sure you have the latest version of the agent. You can view info about the agent release history [here](../app-provisioning/provisioning-agent-release-version-history.md). You can download the latest version [here](https://download.msappproxy.net/Subscription/d3c8b69d-6bf7-42be-a529-3fe9c2e70c90/Connector/provisioningAgentInstaller)
-
-You can find out which version of the agent you're using as follows:
-
-1. Going to the domain server that you have the agent installed
-1. Right-click on the Microsoft Azure AD Connect Provisioning Agent app
-1. Select on "Details" tab and you can find the version number there
-
-> [!NOTE]
-> Azure Active Directory (AD) Connect follows the [Modern Lifecycle Policy](/lifecycle/policies/modern). Changes for products and services under the Modern Lifecycle Policy may be more frequent and require customers to be alert for forthcoming modifications to their product or service.
-Product governed by the Modern Policy follow a [continuous support and servicing model](/lifecycle/overview/product-end-of-support-overview). Customers must take the latest update to remain supported. For products and services governed by the Modern Lifecycle Policy, Microsoft's policy is to provide a minimum 30 days' notification when customers are required to take action in order to avoid significant degradation to the normal use of the product or service.
---
-### General Availability - Add multiple domains to the same SAML/Ws-Fed based identity provider configuration for your external users
---
-**Type:** New feature
-**Service category:** B2B
-**Product capability:** B2B/B2C
-
-An IT admin can now add multiple domains to a single SAML/WS-Fed identity provider configuration to invite users from multiple domains to authenticate from the same identity provider endpoint. For more information, see: [Federation with SAML/WS-Fed identity providers for guest users](../external-identities/direct-federation.md).
----
-### General Availability - Limits on the number of configured API permissions for an application registration enforced starting in October 2022
---
-**Type:** Plan for change
-**Service category:** Other
-**Product capability:** Developer Experience
-
-In the end of October, the total number of required permissions for any single application registration must not exceed 400 permissions across all APIs. Applications exceeding the limit are unable to increase the number of permissions configured for. The existing limit on the number of distinct APIs for permissions required remains unchanged and may not exceed 50 APIs.
-
-In the Azure portal, the required permissions list is under API Permissions within specific applications in the application registration menu. When using Microsoft Graph or Microsoft Graph PowerShell, the required permissions list is in the requiredResourceAccess property of an [application](/graph/api/resources/application) entity. For more information, see: [Validation differences by supported account types (signInAudience)](../develop/supported-accounts-validation.md).
----
-### Public Preview - Conditional access Authentication strengths
---
-**Type:** New feature
-**Service category:** Conditional Access
-**Product capability:** User Authentication
-
-We're announcing Public preview of Authentication strength, a Conditional Access control that allows administrators to specify which authentication methods can be used to access a resource. For more information, see: [Conditional Access authentication strength (preview)](../authentication/concept-authentication-strengths.md). You can use custom authentication strengths to restrict access by requiring specific FIDO2 keys using the Authenticator Attestation GUIDs (AAGUIDs), and apply this through conditional access policies. For more information, see: [FIDO2 security key advanced options](../authentication/concept-authentication-strengths.md#fido2-security-key-advanced-options).
---
-### Public Preview - Conditional access authentication strengths for external identities
--
-**Type:** New feature
-**Service category:** B2B
-**Product capability:** B2B/B2C
-
-You can now require your business partner (B2B) guests across all Microsoft clouds to use specific authentication methods to access your resources with **Conditional Access Authentication Strength policies**. For more information, see: [Conditional Access: Require an authentication strength for external users](../conditional-access/howto-conditional-access-policy-authentication-strength-external.md).
----
-### Generally Availability - Windows Hello for Business, Cloud Kerberos Trust deployment
---
-**Type:** New feature
-**Service category:** Authentications (Logins)
-**Product capability:** User Authentication
-
-We're excited to announce the general availability of hybrid cloud Kerberos trust, a new Windows Hello for Business deployment model to enable a password-less sign-in experience. With this new model, we've made Windows Hello for Business easier to deploy than the existing key trust and certificate trust deployment models by removing the need for maintaining complicated public key infrastructure (PKI), and Azure Active Directory (AD) Connect synchronization wait times. For more information, see: [Hybrid Cloud Kerberos Trust Deployment](/windows/security/identity-protection/hello-for-business/hello-hybrid-cloud-kerberos-trust).
---
-### General Availability - Device-based conditional access on Linux Desktops
---
-**Type:** New feature
-**Service category:** Conditional Access
-**Product capability:** SSO
-
-This feature empowers users on Linux clients to register their devices with Azure AD, enroll into Intune management, and satisfy device-based Conditional Access policies when accessing their corporate resources.
--- Users can register their Linux devices with Azure AD-- Users can enroll in Mobile Device Management (Intune), which can be used to provide compliance decisions based upon policy definitions to allow device based conditional access on Linux Desktops -- If compliant, users can use Microsoft Edge Browser to enable Single-Sign on to M365/Azure resources and satisfy device-based Conditional Access policies.--
-For more information, see:
-[Azure AD registered devices](../devices/concept-azure-ad-register.md).
-[Plan your Azure Active Directory device deployment](../devices/plan-device-deployment.md)
---
-### General Availability - Deprecation of Azure Active Directory Multi-Factor Authentication.
---
-**Type:** Deprecated
-**Service category:** MFA
-**Product capability:** Identity Security & Protection
-
-Beginning September 30, 2024, Azure Active Directory Multi-Factor Authentication Server deployments will no longer service multi-factor authentication (MFA) requests, which could cause authentications to fail for your organization. To ensure uninterrupted authentication services, and to remain in a supported state, organizations should migrate their users' authentication data to the cloud-based Azure Active Directory Multi-Factor Authentication service using the latest Migration Utility included in the most recent Azure Active Directory Multi-Factor Authentication Server update. For more information, see: [Migrate from MFA Server to Azure AD Multi-Factor Authentication](../authentication/how-to-migrate-mfa-server-to-azure-mfa.md).
---
-### Public Preview - Lifecycle Workflows is now available
---
-**Type:** New feature
-**Service category:** Lifecycle Workflows
-**Product capability:** Identity Governance
--
-We're excited to announce the public preview of Lifecycle Workflows, a new Identity Governance capability that allows customers to extend the user provisioning process, and adds enterprise grade user lifecycle management capabilities, in Azure AD to modernize your identity lifecycle management process. With Lifecycle Workflows, you can:
--- Confidently configure and deploy custom workflows to onboard and offboard cloud employees at scale replacing your manual processes.-- Automate out-of-the-box actions critical to required Joiner and Leaver scenarios and get rich reporting insights.-- Extend workflows via Logic Apps integrations with custom tasks extensions for more complex scenarios.-
-For more information, see: [What are Lifecycle Workflows? (Public Preview)](../governance/what-are-lifecycle-workflows.md).
---
-### Public Preview - User-to-Group Affiliation recommendation for group Access Reviews
---
-**Type:** New feature
-**Service category:** Access Reviews
-**Product capability:** Identity Governance
-
-This feature provides Machine Learning based recommendations to the reviewers of Azure AD Access Reviews to make the review experience easier and more accurate. The recommendation detects user affiliation with other users within the group, and applies the scoring mechanism we built by computing the user's average distance with other users in the group. For more information, see: [Review recommendations for Access reviews](../governance/review-recommendations-access-reviews.md).
---
-### General Availability - Group assignment for SuccessFactors Writeback application
---
-**Type:** New feature
-**Service category:** Provisioning
-**Product capability:** Outbound to SaaS Applications
-
-When configuring writeback of attributes from Azure AD to SAP SuccessFactors Employee Central, you can now specify the scope of users using Azure AD group assignment. For more information, see: [Tutorial: Configure attribute write-back from Azure AD to SAP SuccessFactors](../saas-apps/sap-successfactors-writeback-tutorial.md).
---
-### General Availability - Number Matching for Microsoft Authenticator notifications
---
-**Type:** New feature
-**Service category:** Microsoft Authenticator App
-**Product capability:** User Authentication
-
-To prevent accidental notification approvals, admins can now require users to enter the number displayed on the sign-in screen when approving an MFA notification in the Microsoft Authenticator app. We've also refreshed the Azure portal admin UX and Microsoft Graph APIs to make it easier for customers to manage Authenticator app feature roll-outs. As part of this update we have also added the highly requested ability for admins to exclude user groups from each feature.
-
-The number matching feature greatly up-levels the security posture of the Microsoft Authenticator app and protects organizations from MFA fatigue attacks. We highly encourage our customers to adopt this feature applying the rollout controls we have built. Number Matching will begin to be enabled for all users of the Microsoft Authenticator app starting February 27 2023.
--
-For more information, see: [How to use number matching in multifactor authentication (MFA) notifications - Authentication methods policy](../authentication/how-to-mfa-number-match.md).
---
-### General Availability - Additional context in Microsoft Authenticator notifications
---
-**Type:** New feature
-**Service category:** Microsoft Authenticator App
-**Product capability:** User Authentication
-
-Reduce accidental approvals by showing users additional context in Microsoft Authenticator app notifications. Customers can enhance notifications with the following steps:
--- Application Context: This feature shows users which application they're signing into.-- Geographic Location Context: This feature shows users their sign-in location based on the IP address of the device they're signing into. -
-The feature is available for both MFA and Password-less Phone Sign-in notifications and greatly increases the security posture of the Microsoft Authenticator app. We've also refreshed the Azure portal Admin UX and Microsoft Graph APIs to make it easier for customers to manage Authenticator app feature roll-outs. As part of this update, we've also added the highly requested ability for admins to exclude user groups from certain features.
-
-We highly encourage our customers to adopt these critical security features to reduce accidental approvals of Authenticator notifications by end users.
--
-For more information, see: [How to use additional context in Microsoft Authenticator notifications - Authentication methods policy](../authentication/how-to-mfa-additional-context.md).
---
-### New Federated Apps available in Azure AD Application gallery - October 2022
---
-**Type:** New feature
-**Service category:** Enterprise Apps
-**Product capability:** 3rd Party Integration
---
-In October 2022 we've added the following 15 new applications in our App gallery with Federation support:
-
-[Unifii](https://www.unifii.com.au/), [WaitWell Staff App](https://waitwell.c)
-
-You can also find the documentation of all the applications from here https://aka.ms/AppsTutorial,
-
-For listing your application in the Azure AD app gallery, read the details here https://aka.ms/AzureADAppRequest
-----
-### Public preview - New provisioning connectors in the Azure AD Application Gallery - October 2022
-
-**Type:** New feature
-**Service category:** App Provisioning
-**Product capability:** 3rd Party Integration
-
-You can now automate creating, updating, and deleting user accounts for these newly integrated apps:
--- [LawVu](../saas-apps/lawvu-provisioning-tutorial.md)-
-For more information about how to better secure your organization by using automated user account provisioning, see: [Automate user provisioning to SaaS applications with Azure AD](../app-provisioning/user-provisioning.md).
------
active-directory How To Lifecycle Workflow Sync Attributes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/how-to-lifecycle-workflow-sync-attributes.md
For more information on expressions, see [Reference for writing expressions for
The expression examples above use endDate for SAP and StatusHireDate for Workday. However, you may opt to use different attributes.
-For example, you might use StatusContinuesFirstDayOfWork instead of StatusHireDate for Workday. In this instance your expression would be:
+For example, you might use StatusContinuousFirstDayOfWork instead of StatusHireDate for Workday. In this instance your expression would be:
- `FormatDateTime([StatusContinuesFirstDayOfWork], , "yyyy-MM-ddzzz", "yyyyMMddHHmmss.fZ")`
+ `FormatDateTime([StatusContinuousFirstDayOfWork], , "yyyy-MM-ddzzz", "yyyyMMddHHmmss.fZ")`
The following table has a list of suggested attributes and their scenario recommendations.
active-directory Protected Actions Add https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/protected-actions-add.md
Previously updated : 04/10/2022 Last updated : 04/21/2023 # Add, test, or remove protected actions in Azure AD (preview)
Protected actions use a Conditional Access authentication context, so you must c
1. Create a new policy and select your authentication context.
- For more information, see [Conditional Access: Cloud apps, actions, and authentication context](../conditional-access/concept-conditional-access-cloud-apps.md).
+ For more information, see [Conditional Access: Cloud apps, actions, and authentication context](../conditional-access/concept-conditional-access-cloud-apps.md#authentication-context).
:::image type="content" source="media/protected-actions-add/policy-authentication-context.png" alt-text="Screenshot of New policy page to create a new policy with an authentication context." lightbox="media/protected-actions-add/policy-authentication-context.png":::
Protected actions use a Conditional Access authentication context, so you must c
To add protection actions, assign a Conditional Access policy to one or more permissions using a Conditional Access authentication context.
+1. Select **Azure Active Directory** > **Protect & secure** > **Conditional Access** > **Policies**.
+
+1. Make sure the state of the Conditional Access policy that you plan to use with your protected action is set to **On** and not **Off** or **Report-only**.
+ 1. Select **Azure Active Directory** > **Roles & admins** > **Protected actions (Preview)**. :::image type="content" source="media/protected-actions-add/protected-actions-start.png" alt-text="Screenshot of Add protected actions page in Roles and administrators." lightbox="media/protected-actions-add/protected-actions-start.png":::
The user has previously satisfied policy. For example, the completed multifactor
Check the [Azure AD sign-in events](../conditional-access/troubleshoot-conditional-access.md) to troubleshoot. The sign-in events will include details about the session, including if the user has already completed multifactor authentication. When troubleshooting with the sign-in logs, it's also helpful to check the policy details page, to confirm an authentication context was requested.
+### Symptom - Policy is never satisfied
+
+When you attempt to satisfy the requirements of the Conditional Access policy, the policy is never satisfied and you keep being prompted to reauthenticate.
+
+**Cause**
+
+The Conditional Access policy wasn't created or the policy state is **Off** or **Report-only**.
+
+**Solution**
+
+Create the Conditional Access policy if it doesn't exist, or set its state to **On**. A quick way to check policy states with Microsoft Graph PowerShell is sketched at the end of this section.
+
+If you aren't able to access the Conditional Access page because of the protected action and repeated requests to reauthenticate, use the following link to open the Conditional Access page.
+
+- [https://aka.ms/MSALProtectedActions](https://aka.ms/MSALProtectedActions)
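
As referenced above, you can confirm policy states outside the portal with Microsoft Graph PowerShell. This is a minimal sketch that lists every Conditional Access policy and whether it's enabled.

```powershell
# Sketch: list Conditional Access policies and their states
# (enabled, disabled, or enabledForReportingButNotEnforced).
Connect-MgGraph -Scopes "Policy.Read.All"

Get-MgIdentityConditionalAccessPolicy |
    Select-Object DisplayName, State
```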
+ ### Symptom - No access to add protected actions When signed in you don't have permissions to add or remove protected actions.
active-directory Protected Actions Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/protected-actions-overview.md
Here's the initial set of permissions:
## How do protected actions compare with Privileged Identity Management role activation?
-[Privileged Identity Management role activation](../privileged-identity-management/pim-how-to-change-default-settings.md) can also be assigned Conditional Access policies. This capability allows for policy enforcement only when a user activates a role, providing the most comprehensive protection. Protected actions are enforced only when a user takes an action that requires permissions with Conditional Access policy assigned to it. Protected actions allows for high impact permissions to be protected, independent of a user role. Privileged Identity Management role activation and protected actions can be used together, for the strongest coverage.
+[Privileged Identity Management role activation](../privileged-identity-management/pim-how-to-change-default-settings.md) can also be assigned Conditional Access policies. This capability allows for policy enforcement only when a user activates a role, providing the most comprehensive protection. Protected actions are enforced only when a user takes an action that requires permissions with Conditional Access policy assigned to it. Protected actions allow for high impact permissions to be protected, independent of a user role. Privileged Identity Management role activation and protected actions can be used together for stronger coverage.
## Steps to use protected actions
Here's the initial set of permissions:
1. **Configure Conditional Access policy**
- Configure a Conditional Access authentication context and an associated Conditional Access policy. Protected actions use an authentication context, which allows policy enforcement for fine-grain resources in a service, like Azure AD permissions. A good policy to start with is to require passwordless MFA and exclude an emergency account. [Learn more](../conditional-access/concept-conditional-access-cloud-apps.md#authentication-context)
+ Configure a Conditional Access authentication context and an associated Conditional Access policy. Protected actions use an authentication context, which allows policy enforcement for fine-grain resources in a service, like Azure AD permissions. A good policy to start with is to require passwordless MFA and exclude an emergency account. [Learn more](./protected-actions-add.md#configure-conditional-access-policy)
1. **Add protected actions**
active-directory Contentkalender Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/contentkalender-tutorial.md
Previously updated : 11/21/2022 Last updated : 04/21/2023 # Tutorial: Azure AD SSO integration with Contentkalender
-In this tutorial, you'll learn how to integrate Contentkalender with Azure Active Directory (Azure AD). When you integrate Contentkalender with Azure AD, you can:
+In this tutorial, you learn how to integrate Contentkalender with Azure Active Directory (Azure AD). When you integrate Contentkalender with Azure AD, you can:
* Control in Azure AD who has access to Contentkalender. * Enable your users to be automatically signed-in to Contentkalender with their Azure AD accounts.
Follow these steps to enable Azure AD SSO in the Azure portal.
1. On the **Basic SAML Configuration** section, perform the following steps:
- a. In the **Identifier** text box, type one of the following URLs:
-
- | **Identifier** |
- ||
- | `https://login.contentkalender.nl` |
- | `https://contentkalender-acc.bettywebblocks.com/` (only for testing purposes)|
-
- b. In the **Reply URL** text box, type one of the following URLs:
-
- | **Reply URL** |
- |--|
- | `https://login.contentkalender.nl/sso/saml/callback` |
- | `https://contentkalender-acc.bettywebblocks.com/sso/saml/callback` (only for testing purposes)|
+ a. In the **Identifier** text box, type the URL:
+ `https://login.contentkalender.nl`
+ b. In the **Reply URL** text box, type the URL:
+ `https://login.contentkalender.nl/sso/saml/callback`
+
c. In the **Sign-on URL** text box, type the URL: `https://login.contentkalender.nl/v2/login`
active-directory Fcm Hub Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/fcm-hub-tutorial.md
Title: 'Tutorial: Azure Active Directory single sign-on (SSO) integration with FCM HUB'
+ Title: 'Tutorial: Azure Active Directory SSO integration with FCM HUB'
description: Learn how to configure single sign-on between Azure Active Directory and FCM HUB.
Previously updated : 11/21/2022 Last updated : 04/19/2023
-# Tutorial: Azure Active Directory single sign-on (SSO) integration with FCM HUB
+# Tutorial: Azure Active Directory SSO integration with FCM HUB
-In this tutorial, you'll learn how to integrate FCM HUB with Azure Active Directory (Azure AD). When you integrate FCM HUB with Azure AD, you can:
+In this tutorial, you learn how to integrate FCM HUB with Azure Active Directory (Azure AD). When you integrate FCM HUB with Azure AD, you can:
* Control in Azure AD who has access to FCM HUB. * Enable your users to be automatically signed-in to FCM HUB with their Azure AD accounts.
Follow these steps to enable Azure AD SSO in the Azure portal.
- **Source Attribute**: PortalID, value provided by FCM 1. In the **SAML Signing Certificate** section, use the edit option to select or enter the following settings, and then select **Save**:
- - **Signing Option**: Sign SAML response
+ - **Signing Option**: Sign SAML response and Assertion
- **Signing Algorithm**: SHA-256 1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Certificate (Base64)** and select **Download** to download the certificate and save it on your computer.
active-directory Hashicorp Cloud Platform Hcp Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/hashicorp-cloud-platform-hcp-tutorial.md
Previously updated : 04/06/2023 Last updated : 04/19/2023 # Azure Active Directory SSO integration with HashiCorp Cloud Platform (HCP)
-In this article, you learn how to integrate HashiCorp Cloud Platform (HCP) with Azure Active Directory (Azure AD). HashiCorp Cloud platform hosting managed services of the developer tools created by HashiCorp, such Terraform, Vault, Boundary, and Consul. When you integrate HashiCorp Cloud Platform (HCP) with Azure AD, you can:
+In this article, you learn how to integrate HashiCorp Cloud Platform (HCP) with Azure Active Directory (Azure AD). HashiCorp Cloud Platform hosts managed services of the developer tools created by HashiCorp, such as Terraform, Vault, Boundary, and Consul. When you integrate HashiCorp Cloud Platform (HCP) with Azure AD, you can:
* Control in Azure AD who has access to HashiCorp Cloud Platform (HCP). * Enable your users to be automatically signed-in to HashiCorp Cloud Platform (HCP) with their Azure AD accounts.
To integrate Azure Active Directory with HashiCorp Cloud Platform (HCP), you nee
* An Azure AD user account. If you don't already have one, you can [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). * One of the following roles: Global Administrator, Cloud Application Administrator, Application Administrator, or owner of the service principal. * An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
-* HashiCorp Cloud Platform (HCP) single sign-on (SSO) enabled subscription.
+* HashiCorp Cloud Platform (HCP) single sign-on (SSO) enabled organization.
## Add application and assign a test user
Complete the following steps to enable Azure AD single sign-on in the Azure port
`https://portal.cloud.hashicorp.com/sign-in?conn-id=HCP-SSO-<HCP_ORG_ID>-samlp` > [!NOTE]
- > These values are not real. Update these values with the actual Identifier, Reply URL and Sign on URL. Contact [HashiCorp Cloud Platform (HCP) Client support team](mailto:support@hashicorp.com) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+ > These values are not real. Update these values with the actual Identifier, Reply URL and Sign on URL. These values are also pregenerated for you on the "Setup SAML SSO" page within your Organization settings in HashiCorp Cloud Platform (HCP). For more information, see the SAML documentation on [HashiCorp's Developer site](https://developer.hashicorp.com/hcp/docs/hcp/security/sso/sso-aad). Contact the [HashiCorp Cloud Platform (HCP) Client support team](mailto:support@hashicorp.com) for any questions about this process. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
1. On the **Set-up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Certificate (Base64)** and select **Download** to download the certificate and save it on your computer.
Complete the following steps to enable Azure AD single sign-on in the Azure port
## Configure HashiCorp Cloud Platform (HCP) SSO
-To configure single sign-on on **HashiCorp Cloud Platform (HCP)** side, you need to send the downloaded **Certificate (Base64)** and appropriate copied URLs from Azure portal to [HashiCorp Cloud Platform (HCP) support team](mailto:support@hashicorp.com). They set this setting to have the SAML SSO connection set properly on both sides.
-
-### Create HashiCorp Cloud Platform (HCP) test user
-
-In this section, you create a user called Britta Simon at HashiCorp Cloud Platform (HCP). Work with [HashiCorp Cloud Platform (HCP) support team](mailto:support@hashicorp.com) to add the users in the HashiCorp Cloud Platform (HCP) platform. Users must be created and activated before you use single sign-on.
+To configure single sign-on on the **HashiCorp Cloud Platform (HCP)** side, you need to add a TXT verification record to your domain host, and add the downloaded **Certificate (Base64)** and the **Login URL** copied from the Azure portal to the "Setup SAML SSO" page in your HashiCorp Cloud Platform (HCP) Organization settings. Refer to the SAML documentation provided on [HashiCorp's Developer site](https://developer.hashicorp.com/hcp/docs/hcp/security/sso/sso-aad). Contact the [HashiCorp Cloud Platform (HCP) Client support team](mailto:support@hashicorp.com) for any questions about this process.
## Test SSO
-In this section, you test your Azure AD single sign-on configuration with following options.
-
-* Click on **Test this application** in Azure portal. This will redirect to HashiCorp Cloud Platform (HCP) Sign-on URL where you can initiate the login flow.
-
-* Go to HashiCorp Cloud Platform (HCP) Sign-on URL directly and initiate the login flow from there.
-
-* You can use Microsoft My Apps. When you select the HashiCorp Cloud Platform (HCP) tile in the My Apps, this will redirect to HashiCorp Cloud Platform (HCP) Sign-on URL. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
+In the previous [Create and assign Azure AD test user](#create-and-assign-azure-ad-test-user) section, you created a user called B.Simon and assigned them to the HashiCorp Cloud Platform (HCP) app in the Azure portal. You can now use this account to test the SSO connection. You may also use any account that is already associated with the HashiCorp Cloud Platform (HCP) app in the Azure portal.
## Additional resources * [What is single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md) * [Plan a single sign-on deployment](../manage-apps/plan-sso-deployment.md).
+* [HashiCorp Cloud Platform (HCP) | Azure Active Directory SAML SSO Configuration](https://developer.hashicorp.com/hcp/docs/hcp/security/sso/sso-aad).
## Next steps
active-directory Hornbill Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/hornbill-tutorial.md
Previously updated : 11/21/2022 Last updated : 04/19/2023 # Tutorial: Azure AD SSO integration with Hornbill
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
1. In a different web browser window, log in to Hornbill as a Security Administrator.
-2. On the Home page, click **System**.
+2. On the Home page, click the **Configuration** settings icon at the bottom left of the page.
- ![Screenshot shows the Hornbill system.](./media/hornbill-tutorial/system.png "Hornbill system")
+ ![Screenshot shows the Hornbill system.](./media/hornbill-tutorial/settings.png "Hornbill system")
-3. Navigate to **Security**.
+3. Navigate to **Platform Configuration**.
- ![Screenshot shows the Hornbill security.](./media/hornbill-tutorial/security.png "Hornbill security")
+ ![Screenshot shows the Hornbill platform configuration.](./media/hornbill-tutorial/platform-configuration.png "Hornbill security")
-4. Click **SSO Profiles**.
+4. Click **SSO Profiles** under Security.
- ![Screenshot shows the Hornbill single.](./media/hornbill-tutorial/profile.png "Hornbill single")
+ ![Screenshot shows the Hornbill single.](./media/hornbill-tutorial/profiles.png "Hornbill single")
-5. On the right side of the page, click on **Add logo**.
+5. On the right side of the page, click on **+ Create New Profile**.
- ![Screenshot shows to add the logo.](./media/hornbill-tutorial/add-logo.png "Hornbill add")
+ ![Screenshot shows to add the logo.](./media/hornbill-tutorial/create-new-profile.png "Hornbill create")
-6. On the **Profile Details** bar, click on **Import SAML Meta logo**.
+6. On the **Profile Details** bar, click on the **Import IDP Meta Data** button.
- ![Screenshot shows Hornbill Meta logo.](./media/hornbill-tutorial/logo.png "Hornbill logo")
+ ![Screenshot shows Hornbill Meta logo.](./media/hornbill-tutorial/import-metadata.png "Hornbill logo")
-7. On the Pop-up page in the **URL** text box, paste the **App Federation Metadata Url**, which you have copied from Azure portal and click **Process**.
+7. On the pop-up, in the **URL** text box, paste the **App Federation Metadata Url**, which you have copied from Azure portal and click **Process**.
- ![Screenshot shows Hornbill process.](./media/hornbill-tutorial/process.png "Hornbill process")
+ ![Screenshot shows Hornbill process.](./media/hornbill-tutorial/metadata-url.png "Hornbill process")
8. After clicking **Process**, the values are automatically populated under the **Profile Details** section.
- ![Screenshot shows Hornbill profile](./media/hornbill-tutorial/page.png "Hornbill profile")
-
- ![Screenshot shows Hornbill details.](./media/hornbill-tutorial/services.png "Hornbill details")
-
- ![Screenshot shows Hornbill certificate.](./media/hornbill-tutorial/details.png "Hornbill certificate")
+ ![Screenshot shows Hornbill profile](./media/hornbill-tutorial/profile-details.png "Hornbill profile")
9. Click **Save Changes**.
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
In this section, a user called Britta Simon is created in Hornbill. Hornbill supports just-in-time user provisioning, which is enabled by default. There is no action item for you in this section. If a user doesn't already exist in Hornbill, a new one is created after authentication. > [!Note]
-> If you need to create a user manually, contact [Hornbill Client support team](https://www.hornbill.com/support/?request/).
+> If you need to create a user manually, contact [Hornbill Client support team](https://www.hornbill.com/support/?request/).
## Test SSO
active-directory Hubspot Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/hubspot-tutorial.md
To provision a user account in HubSpot:
![The Create user option in HubSpot](./media/hubspot-tutorial/teams.png)
-1. In the **Add email addess(es)** box, enter the email address of the user in the format brittasimon\@contoso.com, and then select **Next**.
+1. In the **Add email address(es)** box, enter the email address of the user in the format brittasimon\@contoso.com, and then select **Next**.
![The Add email address(es) box in the Create users section in HubSpot](./media/hubspot-tutorial/add-user.png)
active-directory Predict360 Sso Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/predict360-sso-tutorial.md
Previously updated : 04/06/2023 Last updated : 04/20/2023
Complete the following steps to enable Azure AD single sign-on in the Azure port
c. After the metadata file is successfully uploaded, the **Identifier** and **Reply URL** values get auto populated in Basic SAML Configuration section.
- d. If you wish to configure the application in **SP** initiated mode, then perform the following step:
+ d. Enter the customer code/key provided by 360factors in the **Relay State** textbox. Make sure the code is entered in lowercase. This is required for **IDP** initiated mode.
+
+ > [!Note]
+ > You will get the **Service Provider metadata file** from the [Predict360 SSO support team](mailto:support@360factors.com). If the **Identifier** and **Reply URL** values do not get auto populated, then fill in the values manually according to your requirement.
+
+ e. If you wish to configure the application in **SP** initiated mode, then perform the following step:
- In the **Sign on URL** textbox, type the URL:
- `https://paadt.360factors.com/predict360/login.do`.
+ In the **Sign on URL** textbox, type your customer-specific URL using the following pattern:
+ `https://<customer-key>.360factors.com/predict360/login.do`
> [!Note]
- > You will get the **Service Provider metadata file** from the [Predict360 SSO support team](mailto:support@360factors.com). If the **Identifier** and **Reply URL** values do not get auto populated, then fill in the values manually according to your requirement.
+ > This URL is shared by the 360factors team. `<customer-key>` is replaced with your customer key, which is also provided by the 360factors team.
1. On the **Set-up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Federation Metadata XML** and select **Download** to download the certificate and save it on your computer. ![Screenshot shows the Certificate download link.](common/metadataxml.png "Certificate")
+1. Find **Certificate (Raw)** in the **SAML Signing Certificate** section, and select **Download** to download the certificate and save it on your computer.
+
+ ![Screenshot shows the Certificate Raw download link.](common/certificateraw.png " Raw Certificate")
+ 1. On the **Set up Predict360 SSO** section, copy the appropriate URL(s) based on your requirement. ![Screenshot shows to copy configuration appropriate URL.](common/copy-configuration-urls.png "Metadata")
-## Configure Predict360 SSO SSO
+## Configure Predict360 SSO
-To configure single sign-on on **Predict360 SSO** side, you need to send the downloaded **Federation Metadata XML** and appropriate copied URLs from Azure portal to [Predict360 SSO support team](mailto:support@360factors.com). They set this setting to have the SAML SSO connection set properly on both sides.
+To configure single sign-on on **Predict360 SSO** side, you need to send the downloaded **Federation Metadata XML**, **Certificate (Raw)** and appropriate copied URLs from Azure portal to [Predict360 SSO support team](mailto:support@360factors.com). They set this setting to have the SAML SSO connection set properly on both sides.
### Create Predict360 SSO test user
active-directory Workday Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/workday-tutorial.md
Previously updated : 11/21/2022 Last updated : 04/18/2023
Follow these steps to enable Azure AD SSO in the Azure portal.
1. On the **Select a Single sign-on method** page, select **SAML**. 1. On the **Set up Single Sign-On with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
- ![Edit Basic SAML Configuration](common/edit-urls.png)
+ ![Screenshot showing Edit Basic SAML Configuration.](common/edit-urls.png)
1. On the **Basic SAML Configuration** page, enter the values for the following fields:
Follow these steps to enable Azure AD SSO in the Azure portal.
> These values are not real. Update these values with the actual Sign-on URL, Reply URL and Logout URL. Your reply URL must have a subdomain (for example: www, wd2, wd3, wd3-impl, wd5, wd5-impl). > Using something like `http://www.myworkday.com` works but `http://myworkday.com` does not. Contact [Workday Client support team](https://www.workday.com/en-us/partners-services/services/support.html) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
-1. Your Workday application expects the SAML assertions in a specific format, which requires you to add custom attribute mappings to your SAML token attributes configuration. The following screenshot shows the list of default attributes, where as **nameidentifier** is mapped with **user.userprincipalname**. Workday application expects **nameidentifier** to be mapped with **user.mail**, **UPN**, etc., so you need to edit the attribute mapping by clicking on **Edit** icon and change the attribute mapping.
+1. Your Workday application expects the SAML assertions in a specific format, which requires you to add custom attribute mappings to your SAML token attributes configuration. The following screenshot shows the list of default attributes, whereas **nameidentifier** is mapped with **user.userprincipalname**. Workday application expects **nameidentifier** to be mapped with **user.mail**, **UPN**, etc., so you need to edit the attribute mapping by clicking on **Edit** icon and change the attribute mapping.
![Screenshot shows User Attributes with the Edit icon selected.](common/edit-attribute.png)
Follow these steps to enable Azure AD SSO in the Azure portal.
1. On the **Set up Single Sign-On with SAML** page, in the **SAML Signing Certificate** section, find **Federation Metadata XML** and select **Download** to download the certificate and save it on your computer.
- ![The Certificate download link](common/metadataxml.png)
+ ![Screenshot showing The Certificate download link.](common/metadataxml.png)
1. To modify the **Signing** options as per your requirement, click **Edit** button to open **SAML Signing Certificate** dialog.
- ![Certificate](common/edit-certificate.png)
-
- ![SAML Signing Certificate](./media/workday-tutorial/signing-option.png)
+ ![Screenshot showing Certificate.](common/edit-certificate.png)
a. Select **Sign SAML response and assertion** for **Signing Option**.
+ ![Screenshot showing SAML Signing Certificate.](./media/workday-tutorial/signing-option.png)
+ b. Click **Save** 1. On the **Set up Workday** section, copy the appropriate URL(s) based on your requirement.
- ![Copy configuration URLs](common/copy-configuration-urls.png)
+ ![Screenshot showing Copy configuration URLs.](common/copy-configuration-urls.png)
### Create an Azure AD test user
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
1. In the app's overview page, find the **Manage** section and select **Users and groups**. 1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog. 1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
-1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been setup for this app, you see "Default Access" role selected.
+1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see "Default Access" role selected.
1. In the **Add Assignment** dialog, click the **Assign** button. ## Configure Workday
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
1. In the **Search box**, search for **Edit Tenant Setup – Security** on the top left side of the home page.
- ![Edit Tenant Security](./media/workday-tutorial/search-box.png "Edit Tenant Security")
+ ![Screenshot showing Edit Tenant Security.](./media/workday-tutorial/search-box.png "Edit Tenant Security")
1. In the **SAML Setup** section, click on **Import Identity Provider**.
- ![SAML Setup](./media/workday-tutorial/saml-setup.png "SAML Setup")
+ ![Screenshot showing SAML Setup.](./media/workday-tutorial/saml-setup.png "SAML Setup")
1. In **Import Identity Provider** section, perform the below steps:
- ![Importing Identity Provider](./media/workday-tutorial/import-identity-provider.png)
+ ![Screenshot showing Importing Identity Provider.](./media/workday-tutorial/import-identity-provider.png)
 a. Enter an **Identity Provider Name**, such as `AzureAD`, in the textbox.
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
c. Click on **Select files** to upload the downloaded **Federation Metadata XML** file.
- d. Click on **OK** and then **Done**.
+ d. Click on **OK**.
-1. After clicking **Done**, a new row will be added in the **SAML Identity Providers** and then you can add the below steps for the newly created row.
+1. After clicking **OK**, a new row will be added in the **SAML Identity Providers** section, and you can then perform the below steps for the newly created row.
- ![SAML Identity Providers.](./media/workday-tutorial/saml-identity-providers.png "SAML Identity Providers")
+ ![Screenshot showing SAML Identity Providers.](./media/workday-tutorial/saml-identity-providers.png "SAML Identity Providers")
a. Click on **Enable IDP Initiated Logout** checkbox.
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
f. In the **Service Provider ID** textbox, type **http://www.workday.com**.
- g Select **Do Not Deflate SP-initiated Authentication Request**.
-
-1. Perform the following steps in the below image.
-
- ![Workday](./media/workday-tutorial/service-provider.png "SAML Identity Providers")
-
- a. In the **Service Provider ID (Will be Deprecated)** textbox, type **http://www.workday.com**.
-
- b. In the **IDP SSO Service URL (Will be Deprecated)** textbox, type **Login URL** value.
-
- c. Select **Do Not Deflate SP-initiated Authentication Request (Will be Deprecated)**.
+ g. Select **Do Not Deflate SP-initiated Authentication Request**.
- d. For **Authentication Request Signature Method**, select **SHA256**.
+ h. Click **Ok**.
- e. Click **OK**.
+ i. If the task was completed successfully, click **Done**.
 > [!NOTE] > Please ensure you set up single sign-on correctly. In case you enable single sign-on with incorrect setup, you may not be able to enter the application with your credentials and get locked out. In this situation, Workday provides a backup log-in URL where users can sign in using their normal username and password in the following format: [Your Workday URL]/login.flex?redirect=n
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
1. In the **Directory** page, select **Find Workers** in view tab.
- ![Find workers](./media/workday-tutorial/user-directory.png)
+ ![Screenshot showing Find workers.](./media/workday-tutorial/user-directory.png)
1. In the **Find Workers** page, select the user from the results. 1. In the following page, select **Job > Worker Security**. The **Workday account** value has to match the Azure Active Directory **Name ID** value.
- ![Worker Security](./media/workday-tutorial/worker-security.png)
+ ![Screenshot showing Worker Security.](./media/workday-tutorial/worker-security.png)
> [!NOTE] > For more information on how to create a workday test user, please contact [Workday Client support team](https://www.workday.com/en-us/partners-services/services/support.html).
active-directory Linkedin Employment Verification https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/linkedin-employment-verification.md
+
+ Title: LinkedIn employment verification
+description: A design pattern describing how to configure employment verification using LinkedIn
++++++ Last updated : 04/21/2023+++
+# LinkedIn employment verification
+
+If your organization wants its employees to get verified on LinkedIn, follow these steps:
+
+1. Set up your Microsoft Entra Verified ID service by following these [instructions](verifiable-credentials-configure-tenant.md).
+1. [Create](how-to-use-quickstart-verifiedemployee.md#create-a-verified-employee-credential) a Verified ID Employee credential.
+1. Configure the LinkedIn company page with your organization's DID (decentralized identifier) and the URL of the custom Webapp.
+1. Once you deploy the updated LinkedIn mobile app, your employees can get verified.
+
+>[!NOTE]
+> Review LinkedIn's documentation for information on [verifications on LinkedIn profiles](https://www.linkedin.com/help/linkedin/answer/a1359065).
+
+## Deploying custom Webapp
+
+Deploying this custom webapp from [GitHub](https://github.com/Azure-Samples/VerifiedEmployeeIssuance) allows an administrator to have control over who can get verified and change which information is shared with LinkedIn.
+There are two reasons to deploy the custom webapp for LinkedIn Employment verification.
+
+1. You need control over who can get verified on LinkedIn. The webapp allows you to use user assignments to grant access.
+1. You want more control over the issuance of the Verified Employee ID. By default, the Verified Employee ID contains a few claims:
+
+ - ```firstname```
+ - ```lastname```
+ - ```displayname```
+ - ```jobtitle```
+ - ```upn```
+ - ```email```
+ - ```photo```
+
+>[!NOTE]
+>The web app can be modified to remove claims; for example, you may choose to remove the photo claim.
+
+Installation instructions for the Webapp can be found in the [GitHub repository](https://github.com/Azure-Samples/VerifiedEmployeeIssuance/blob/main/ReadmeFiles/Deployment.md).
+
+## Architecture overview
+
+Once the administrator configures the company page on LinkedIn, employees can get verified. Below are the high-level steps for LinkedIn integration:
+
+1. User starts the LinkedIn mobile app.
+1. The mobile app retrieves information from the LinkedIn backend, checks whether the company is enabled for employment verification, and retrieves a URL to the custom Webapp.
+1. If the company is enabled, the user can tap the verify employment link and is sent to the Webapp in a web view.
+1. The user needs to provide their corporate credentials to sign in.
+1. The Webapp retrieves the user profile from Microsoft Graph, including ```firstname```, ```lastname```, ```displayname```, ```jobtitle```, ```upn```, ```email``` and ```photo```, and calls the Microsoft Entra Verified ID service with the profile information.
+1. The Microsoft Entra Verified ID service creates a verifiable credentials issuance request and returns the URL of that specific request.
+1. The Webapp redirects back to the LinkedIn app with this specific URL.
+1. The LinkedIn app wallet communicates with the Microsoft Entra Verified ID service to get the Verified Employment VC issued into the wallet, which is part of the LinkedIn mobile app.
+1. The LinkedIn app then verifies the received verifiable credential.
+1. If the verification is completed, LinkedIn changes the status to 'verified' in its backend system, which is then visible to other users of LinkedIn.
+
+The diagram below shows the dataflow of the entire solution.
+
+ ![Diagram showing a high-level flow.](media/linkedin-employment-verification/linkedin-employee-verification.png)
++
+## Frequently asked questions
+
+### Can I use Microsoft Authenticator to store my Employee Verified ID and use it to get verified on LinkedIn?
+
+Currently the solution works through the embedded webview. In the future, LinkedIn will allow the use of Microsoft Authenticator or any compatible custom wallet to verify employment. The myaccount page will also be updated to allow issuance of the Verified Employee ID to Microsoft Authenticator.
+
+### How do users sign-in?
+
+The Webapp is protected using Azure Active Directory (Microsoft Entra). Users sign in according to the administrator's policy: passwordless, or regular username and password, with or without MFA, and so on. This proves that a user is allowed to be issued a Verified Employee ID.
+
+### What happens when an employee leaves the organization?
+
+Nothing by default. You can choose to revoke the Verified Employee ID, but currently LinkedIn isn't checking for that status.
+
+### What happens when my Verified Employee ID expires?
+
+LinkedIn asks you to get verified again; if you don't, the verified checkmark is removed from your profile.
+
+### Can former employees use this feature to get verified?
+
+Currently this option only verifies current employment.
advisor Advisor Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/advisor/advisor-release-notes.md
Learn what's new in the service. These items may be release notes, videos, blog
Customers can now improve the relevance of recommendations to make them more actionable, resulting in additional cost savings. The right sizing recommendations help optimize costs by identifying idle or underutilized virtual machines based on their CPU, memory, and network activity over the default lookback period of seven days.
-Now, with this latest update, customers can adjust the default look back period to get recommendations based on 14, 21,30, 60, or even 90 days of use. The configuration can be applied at the subscription level. This is especially useful when the workloads have biweekly or monthly peaks (such as with payroll applications).
+Now, with this latest update, customers can adjust the default look back period to get recommendations based on 14, 21, 30, 60, or even 90 days of use. The configuration can be applied at the subscription level. This is especially useful when the workloads have biweekly or monthly peaks (such as with payroll applications).
To learn more, visit [Optimize virtual machine (VM) or virtual machine scale set (VMSS) spend by resizing or shutting down underutilized instances](advisor-cost-recommendations.md#optimize-virtual-machine-vm-or-virtual-machine-scale-set-vmss-spend-by-resizing-or-shutting-down-underutilized-instances).
aks Azure Disk Csi https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-disk-csi.md
Title: Use Container Storage Interface (CSI) driver for Azure Disks on Azure Kubernetes Service (AKS) description: Learn how to use the Container Storage Interface (CSI) driver for Azure Disks in an Azure Kubernetes Service (AKS) cluster. Previously updated : 04/12/2023 Last updated : 04/19/2023 # Use the Azure Disks Container Storage Interface (CSI) driver in Azure Kubernetes Service (AKS)
In addition to in-tree driver features, Azure Disk CSI driver supports the follo
> [!NOTE] > Depending on the VM SKU that's being used, the Azure Disk CSI driver might have a per-node volume limit. For some powerful VMs (for example, 16 cores), the limit is 64 volumes per node. To identify the limit per VM SKU, review the **Max data disks** column for each VM SKU offered. For a list of VM SKUs offered and their corresponding detailed capacity limits, see [General purpose virtual machine sizes][general-purpose-machine-sizes].
-## Storage class driver dynamic disks parameters
-
-|Name | Meaning | Available Value | Mandatory | Default value
-| | | | |
-|skuName | Azure Disks storage account type (alias: `storageAccountType`)| `Standard_LRS`, `Premium_LRS`, `StandardSSD_LRS`, `UltraSSD_LRS`, `Premium_ZRS`, `StandardSSD_ZRS`, `PremiumV2_LRS` (`PremiumV2_LRS` only supports `None` caching mode) | No | `StandardSSD_LRS`|
-|fsType | File System Type | `ext4`, `ext3`, `ext2`, `xfs`, `btrfs` for Linux, `ntfs` for Windows | No | `ext4` for Linux, `ntfs` for Windows|
-|cachingMode | [Azure Data Disk Host Cache Setting](../virtual-machines/windows/premium-storage-performance.md#disk-caching) | `None`, `ReadOnly`, `ReadWrite` | No | `ReadOnly`|
-|location | Specify Azure region where Azure Disks will be created | `eastus`, `westus`, etc. | No | If empty, driver will use the same location name as current AKS cluster|
-|resourceGroup | Specify the resource group where the Azure Disks will be created | Existing resource group name | No | If empty, driver will use the same resource group name as current AKS cluster|
-|DiskIOPSReadWrite | [UltraSSD disk](../virtual-machines/linux/disks-ultra-ssd.md) IOPS Capability (minimum: 2 IOPS/GiB ) | 100~160000 | No | `500`|
-|DiskMBpsReadWrite | [UltraSSD disk](../virtual-machines/linux/disks-ultra-ssd.md) Throughput Capability(minimum: 0.032/GiB) | 1~2000 | No | `100`|
-|LogicalSectorSize | Logical sector size in bytes for Ultra disk. Supported values are 512 ad 4096. 4096 is the default. | `512`, `4096` | No | `4096`|
-|tags | Azure Disk [tags](../azure-resource-manager/management/tag-resources.md) | Tag format: `key1=val1,key2=val2` | No | ""|
-|diskEncryptionSetID | ResourceId of the disk encryption set to use for [enabling encryption at rest](../virtual-machines/windows/disk-encryption.md) | format: `/subscriptions/{subs-id}/resourceGroups/{rg-name}/providers/Microsoft.Compute/diskEncryptionSets/{diskEncryptionSet-name}` | No | ""|
-|diskEncryptionType | Encryption type of the disk encryption set. | `EncryptionAtRestWithCustomerKey`(by default), `EncryptionAtRestWithPlatformAndCustomerKeys` | No | ""|
-|writeAcceleratorEnabled | [Write Accelerator on Azure Disks](../virtual-machines/windows/how-to-enable-write-accelerator.md) | `true`, `false` | No | ""|
-|networkAccessPolicy | NetworkAccessPolicy property to prevent generation of the SAS URI for a disk or a snapshot | `AllowAll`, `DenyAll`, `AllowPrivate` | No | `AllowAll`|
-|diskAccessID | Azure Resource ID of the DiskAccess resource to use private endpoints on disks | | No | ``|
-|enableBursting | [Enable on-demand bursting](../virtual-machines/disk-bursting.md) beyond the provisioned performance target of the disk. On-demand bursting should only be applied to Premium disk and when the disk size > 512 GB. Ultra and shared disk isn't supported. Bursting is disabled by default. | `true`, `false` | No | `false`|
-|useragent | User agent used for [customer usage attribution](../marketplace/azure-partner-customer-usage-attribution.md)| | No | Generated Useragent formatted `driverName/driverVersion compiler/version (OS-ARCH)`|
-|enableAsyncAttach | Allow multiple disk attach operations (in batch) on one node in parallel.<br> While this parameter can speed up disk attachment, you may encounter Azure API throttling limit when there are large number of volume attachments. | `true`, `false` | No | `false`|
-|subscriptionID | Specify Azure subscription ID where the Azure Disks is created. | Azure subscription ID | No | If not empty, `resourceGroup` must be provided.|
- ## Use CSI persistent volumes with Azure Disks A [persistent volume](concepts-storage.md#persistent-volumes) (PV) represents a piece of storage that's provisioned for use with Kubernetes pods. A PV can be used by one or many pods and can be dynamically or statically provisioned. This article shows you how to dynamically create PVs with Azure disk for use by a single pod in an AKS cluster. For static provisioning, see [Create a static volume with Azure Disks](azure-csi-disk-storage-provision.md#statically-provision-a-volume).
aks Azure Files Csi https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-files-csi.md
Title: Use Container Storage Interface (CSI) driver for Azure Files on Azure Kubernetes Service (AKS) description: Learn how to use the Container Storage Interface (CSI) driver for Azure Files in an Azure Kubernetes Service (AKS) cluster. Previously updated : 04/11/2023 Last updated : 04/19/2023 # Use Azure Files Container Storage Interface (CSI) driver in Azure Kubernetes Service (AKS)
In addition to the original in-tree driver features, Azure File CSI driver suppo
- [Private endpoint][private-endpoint-overview] - Creating a large number of file shares in parallel.
-## Storage class driver dynamic parameters
-
-|Name | Meaning | Available Value | Mandatory | Default value
-| | | | |
-|skuName | Azure Files storage account type (alias: `storageAccountType`)| `Standard_LRS`, `Standard_ZRS`, `Standard_GRS`, `Standard_RAGRS`, `Standard_RAGZRS`,`Premium_LRS`, `Premium_ZRS` | No | `StandardSSD_LRS`<br> Minimum file share size for Premium account type is 100 GiB.<br> ZRS account type is supported in limited regions.<br> NFS file share only supports Premium account type.|
-|location | Specify Azure region where Azure storage account will be created. | For example, `eastus`. | No | If empty, driver uses the same location name as current AKS cluster.|
-|resourceGroup | Specify the resource group where the Azure Disks will be created. | Existing resource group name | No | If empty, driver uses the same resource group name as current AKS cluster.|
-|shareName | Specify Azure file share name | Existing or new Azure file share name. | No | If empty, driver generates an Azure file share name. |
-|shareNamePrefix | Specify Azure file share name prefix created by driver. | Share name can only contain lowercase letters, numbers, hyphens, and length should be fewer than 21 characters. | No |
-|folderName | Specify folder name in Azure file share. | Existing folder name in Azure file share. | No | If folder name does not exist in file share, mount will fail. |
-|shareAccessTier | [Access tier for file share][storage-tiers] | General purpose v2 account can choose between `TransactionOptimized` (default), `Hot`, and `Cool`. Premium storage account type for file shares only. | No | Empty. Use default setting for different storage account types.|
-|server | Specify Azure storage account server address | Existing server address, for example `accountname.privatelink.file.core.windows.net`. | No | If empty, driver uses default `accountname.file.core.windows.net` or other sovereign cloud account address. |
-|disableDeleteRetentionPolicy | Specify whether disable DeleteRetentionPolicy for storage account created by driver. | `true` or `false` | No | `false` |
-|allowBlobPublicAccess | Allow or disallow public access to all blobs or containers for storage account created by driver. | `true` or `false` | No | `false` |
-|requireInfraEncryption | Specify whether or not the service applies a secondary layer of encryption with platform managed keys for data at rest for storage account created by driver. | `true` or `false` | No | `false` |
-|networkEndpointType | Specify network endpoint type for the storage account created by driver. If `privateEndpoint` is specified, a private endpoint will be created for the storage account. For other cases, a service endpoint will be created by default. | "",`privateEndpoint`| No | "" |
-|storageEndpointSuffix | Specify Azure storage endpoint suffix. | `core.windows.net`, `core.chinacloudapi.cn`, etc. | No | If empty, driver uses default storage endpoint suffix according to cloud environment. For example, `core.windows.net`. |
-|tags | [tags][tag-resources] are created in new storage account. | Tag format: 'foo=aaa,bar=bbb' | No | "" |
-|matchTags | Match tags when driver tries to find a suitable storage account. | `true` or `false` | No | `false` |
-| | **Following parameters are only for SMB protocol** | | |
-|subscriptionID | Specify Azure subscription ID where Azure file share is created. | Azure subscription ID | No | If not empty, `resourceGroup` must be provided. |
-|storeAccountKey | Specify whether to store account key to Kubernetes secret. | `true` or `false`<br>`false` means driver leverages kubelet identity to get account key. | No | `true` |
-|secretName | Specify secret name to store account key. | | No |
-|secretNamespace | Specify the namespace of secret to store account key. <br><br> **Note:** <br> If `secretNamespace` isn't specified, the secret is created in the same namespace as the pod. | `default`,`kube-system`, etc | No | Pvc namespace, for example `csi.storage.k8s.io/pvc/namespace` |
-|useDataPlaneAPI | Specify whether to use [data plane API][data-plane-api] for file share create/delete/resize. This could solve the SRP API throttling issue because the data plane API has almost no limit, while it would fail when there is firewall or Vnet setting on storage account. | `true` or `false` | No | `false` |
-| | **Following parameters are only for NFS protocol** | | |
-|rootSquashType | Specify root squashing behavior on the share. The default is `NoRootSquash` | `AllSquash`, `NoRootSquash`, `RootSquash` | No |
-|mountPermissions | Mounted folder permissions. The default is `0777`. If set to `0`, driver doesn't perform `chmod` after mount | `0777` | No |
-| | **Following parameters are only for vnet setting, e.g. NFS, private endpoint** | | |
-|vnetResourceGroup | Specify Vnet resource group where virtual network is defined. | Existing resource group name. | No | If empty, driver uses the `vnetResourceGroup` value in Azure cloud config file. |
-|vnetName | Virtual network name | Existing virtual network name. | No | If empty, driver uses the `vnetName` value in Azure cloud config file. |
-|subnetName | Subnet name | Existing subnet name of the agent node. | No | If empty, driver uses the `subnetName` value in Azure cloud config file. |
-|fsGroupChangePolicy | Indicates how volume's ownership is changed by the driver. Pod `securityContext.fsGroupChangePolicy` is ignored. | `OnRootMismatch` (default), `Always`, `None` | No | `OnRootMismatch`|
- ## Use a persistent volume with Azure Files A [persistent volume (PV)][persistent-volume] represents a piece of storage that's provisioned for use with Kubernetes pods. A PV can be used by one or many pods and can be dynamically or statically provisioned. If multiple pods need concurrent access to the same storage volume, you can use Azure Files to connect by using the [Server Message Block (SMB)][smb-overview] or [NFS protocol][nfs-overview]. This article shows you how to dynamically create an Azure Files share for use by multiple pods in an AKS cluster. For static provisioning, see [Manually create and use a volume with an Azure Files share][statically-provision-a-volume].
provisioner: file.csi.azure.com
allowVolumeExpansion: true parameters: protocol: nfs
+mountOptions:
+ - nconnect=4
``` After editing and saving the file, create the storage class with the [kubectl apply][kubectl-apply] command:
aks Cis Ubuntu https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/cis-ubuntu.md
Title: Azure Kubernetes Service (AKS) Ubuntu image alignment with Center for Internet Security (CIS) benchmark description: Learn how AKS applies the CIS benchmark Previously updated : 04/20/2022 Last updated : 04/19/2023++ # Azure Kubernetes Service (AKS) Ubuntu image alignment with Center for Internet Security (CIS) benchmark
-As a secure service, Azure Kubernetes Service (AKS) complies with SOC, ISO, PCI DSS, and HIPAA standards. This article covers the security OS configuration applied to Ubuntu imaged used by AKS. This security configuration is based on the Azure Linux security baseline which aligns with CIS benchmark. For more information about AKS security, see Security concepts for applications and clusters in Azure Kubernetes Service (AKS). For more information about AKS security, see [Security concepts for applications and clusters in Azure Kubernetes Service (AKS)](./concepts-security.md). For more information on the CIS benchmark, see [Center for Internet Security (CIS) Benchmarks][cis-benchmarks]. For more information on the Azure security baselines for Linux, see [Linux security baseline][linux-security-baseline].
+As a secure service, Azure Kubernetes Service (AKS) complies with SOC, ISO, PCI DSS, and HIPAA standards. This article covers the security OS configuration applied to the Ubuntu image used by AKS. This security configuration is based on the Azure Linux security baseline, which aligns with the CIS benchmark. For more information about AKS security, see [Security concepts for applications and clusters in Azure Kubernetes Service (AKS)](./concepts-security.md). For more information on the CIS benchmark, see [Center for Internet Security (CIS) Benchmarks][cis-benchmarks]. For more information on the Azure security baselines for Linux, see [Linux security baseline][linux-security-baseline].
## Ubuntu LTS 18.04
The following are the results from the [CIS Ubuntu 18.04 LTS Benchmark v2.1.0][c
Recommendations can have one of the following reasons:
-* *Potential Operation Impact* - Recommendation was not applied because it would have a negative effect on the service.
+* *Potential Operation Impact* - Recommendation wasn't applied because it would have a negative effect on the service.
* *Covered Elsewhere* - Recommendation is covered by another control in Azure cloud compute. The following are CIS rules implemented:
The following are CIS rules implemented:
| 1.3.1 | Ensure AIDE is installed | Fail | Covered Elsewhere | | 1.3.2 | Ensure filesystem integrity is regularly checked | Fail | Covered Elsewhere | | 1.4 | Secure Boot Settings |||
-| 1.4.1 | Ensure permissions on bootloader config are not overridden | Fail | |
+| 1.4.1 | Ensure permissions on bootloader config aren't overridden | Fail | |
| 1.4.2 | Ensure bootloader password is set | Fail | Not Applicable| | 1.4.3 | Ensure permissions on bootloader config are configured | Fail | | | 1.4.4 | Ensure authentication required for single user mode | Fail | Not Applicable |
The following are CIS rules implemented:
| 1.8 | GNOME Display Manager ||| | 1.8.2 | Ensure GDM login banner is configured | Pass || | 1.8.3 | Ensure disable-user-list is enabled | Pass ||
-| 1.8.4 | Ensure XDCMP is not enabled | Pass ||
+| 1.8.4 | Ensure XDCMP isn't enabled | Pass ||
| 1.9 | Ensure updates, patches, and additional security software are installed | Pass || | 2 | Services ||| | 2.1 | Special Purpose Services |||
The following are CIS rules implemented:
| 2.1.1.2 | Ensure systemd-timesyncd is configured | Not Applicable | AKS uses ntpd for timesync | | 2.1.1.3 | Ensure chrony is configured | Fail | Covered Elsewhere | | 2.1.1.4 | Ensure ntp is configured | Pass ||
-| 2.1.2 | Ensure X Window System is not installed | Pass ||
-| 2.1.3 | Ensure Avahi Server is not installed | Pass ||
-| 2.1.4 | Ensure CUPS is not installed | Pass ||
-| 2.1.5 | Ensure DHCP Server is not installed | Pass ||
-| 2.1.6 | Ensure LDAP server is not installed | Pass ||
-| 2.1.7 | Ensure NFS is not installed | Pass ||
-| 2.1.8 | Ensure DNS Server is not installed | Pass ||
-| 2.1.9 | Ensure FTP Server is not installed | Pass ||
-| 2.1.10 | Ensure HTTP server is not installed | Pass ||
-| 2.1.11 | Ensure IMAP and POP3 server are not installed | Pass ||
-| 2.1.12 | Ensure Samba is not installed | Pass ||
-| 2.1.13 | Ensure HTTP Proxy Server is not installed | Pass ||
-| 2.1.14 | Ensure SNMP Server is not installed | Pass ||
+| 2.1.2 | Ensure X Window System isn't installed | Pass ||
+| 2.1.3 | Ensure Avahi Server isn't installed | Pass ||
+| 2.1.4 | Ensure CUPS isn't installed | Pass ||
+| 2.1.5 | Ensure DHCP Server isn't installed | Pass ||
+| 2.1.6 | Ensure LDAP server isn't installed | Pass ||
+| 2.1.7 | Ensure NFS isn't installed | Pass ||
+| 2.1.8 | Ensure DNS Server isn't installed | Pass ||
+| 2.1.9 | Ensure FTP Server isn't installed | Pass ||
+| 2.1.10 | Ensure HTTP server isn't installed | Pass ||
+| 2.1.11 | Ensure IMAP and POP3 server aren't installed | Pass ||
+| 2.1.12 | Ensure Samba isn't installed | Pass ||
+| 2.1.13 | Ensure HTTP Proxy Server isn't installed | Pass ||
+| 2.1.14 | Ensure SNMP Server isn't installed | Pass ||
| 2.1.15 | Ensure mail transfer agent is configured for local-only mode | Pass ||
-| 2.1.16 | Ensure rsync service is not installed | Fail | |
-| 2.1.17 | Ensure NIS Server is not installed | Pass ||
+| 2.1.16 | Ensure rsync service isn't installed | Fail | |
+| 2.1.17 | Ensure NIS Server isn't installed | Pass ||
| 2.2 | Service Clients |||
-| 2.2.1 | Ensure NIS Client is not installed | Pass ||
-| 2.2.2 | Ensure rsh client is not installed | Pass ||
-| 2.2.3 | Ensure talk client is not installed | Pass ||
-| 2.2.4 | Ensure telnet client is not installed | Fail | |
-| 2.2.5 | Ensure LDAP client is not installed | Pass ||
-| 2.2.6 | Ensure RPC is not installed | Fail | Potential Operational Impact |
+| 2.2.1 | Ensure NIS Client isn't installed | Pass ||
+| 2.2.2 | Ensure rsh client isn't installed | Pass ||
+| 2.2.3 | Ensure talk client isn't installed | Pass ||
+| 2.2.4 | Ensure telnet client isn't installed | Fail | |
+| 2.2.5 | Ensure LDAP client isn't installed | Pass ||
+| 2.2.6 | Ensure RPC isn't installed | Fail | Potential Operational Impact |
| 2.3 | Ensure nonessential services are removed or masked | Pass | | | 3 | Network Configuration ||| | 3.1 | Disable unused network protocols and devices |||
The following are CIS rules implemented:
| 3.2.1 | Ensure packet redirect sending is disabled | Pass || | 3.2.2 | Ensure IP forwarding is disabled | Fail | Not Applicable | | 3.3 | Network Parameters (Host and Router) |||
-| 3.3.1 | Ensure source routed packets are not accepted | Pass ||
-| 3.3.2 | Ensure ICMP redirects are not accepted | Pass ||
-| 3.3.3 | Ensure secure ICMP redirects are not accepted | Pass ||
+| 3.3.1 | Ensure source routed packets aren't accepted | Pass ||
+| 3.3.2 | Ensure ICMP redirects aren't accepted | Pass ||
+| 3.3.3 | Ensure secure ICMP redirects aren't accepted | Pass ||
| 3.3.4 | Ensure suspicious packets are logged | Pass || | 3.3.5 | Ensure broadcast ICMP requests are ignored | Pass || | 3.3.6 | Ensure bogus ICMP responses are ignored | Pass || | 3.3.7 | Ensure Reverse Path Filtering is enabled | Pass || | 3.3.8 | Ensure TCP SYN Cookies is enabled | Pass ||
-| 3.3.9 | Ensure IPv6 router advertisements are not accepted | Pass ||
+| 3.3.9 | Ensure IPv6 router advertisements aren't accepted | Pass ||
| 3.4 | Uncommon Network Protocols ||| | 3.5 | Firewall Configuration ||| | 3.5.1 | Configure UncomplicatedFirewall |||
The following are CIS rules implemented:
| 6.1.14 | Audit SGID executables | Not Applicable | | | 6.2 | User and Group Settings ||| | 6.2.1 | Ensure accounts in /etc/passwd use shadowed passwords | Pass ||
-| 6.2.2 | Ensure password fields are not empty | Pass ||
+| 6.2.2 | Ensure password fields aren't empty | Pass ||
| 6.2.3 | Ensure all groups in /etc/passwd exist in /etc/group | Pass || | 6.2.4 | Ensure all users' home directories exist | Pass || | 6.2.5 | Ensure users own their home directories | Pass || | 6.2.6 | Ensure users' home directories permissions are 750 or more restrictive | Pass ||
-| 6.2.7 | Ensure users' dot files are not group or world writable | Pass ||
+| 6.2.7 | Ensure users' dot files aren't group or world writable | Pass ||
| 6.2.8 | Ensure no users have .netrc files | Pass || | 6.2.9 | Ensure no users have .forward files | Pass || | 6.2.10 | Ensure no users have .rhosts files | Pass ||
For more information about AKS security, see the following articles:
[cis-benchmarks]: /compliance/regulatory/offering-CIS-Benchmark [cis-benchmark-aks]: https://www.cisecurity.org/benchmark/kubernetes/ [cis-benchmark-ubuntu]: https://www.cisecurity.org/benchmark/ubuntu/
-[linux-security-baseline]: ../governance/policy/samples/guest-configuration-baseline-linux.md
+[linux-security-baseline]: ../governance/policy/samples/guest-configuration-baseline-linux.md
aks Configure Azure Cni Dynamic Ip Allocation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/configure-azure-cni-dynamic-ip-allocation.md
Title: Configure Azure CNI networking for dynamic allocation of IPs and enhanced subnet support in Azure Kubernetes Service (AKS)
+ Title: Configure Azure CNI networking for dynamic allocation of IPs and enhanced subnet support
+ description: Learn how to configure Azure CNI (advanced) networking for dynamic allocation of IPs and enhanced subnet support in Azure Kubernetes Service (AKS)++++ Previously updated : 01/09/2023 Last updated : 04/20/2023
It offers the following benefits:
* **Better IP utilization**: IPs are dynamically allocated to cluster Pods from the Pod subnet. This leads to better utilization of IPs in the cluster compared to the traditional CNI solution, which does static allocation of IPs for every node. * **Scalable and flexible**: Node and pod subnets can be scaled independently. A single pod subnet can be shared across multiple node pools of a cluster or across multiple AKS clusters deployed in the same VNet. You can also configure a separate pod subnet for a node pool.
-* **High performance**: Since pod are assigned VNet IPs, they have direct connectivity to other cluster pod and resources in the VNet. The solution supports very large clusters without any degradation in performance.
-* **Separate VNet policies for pods**: Since pods have a separate subnet, you can configure separate VNet policies for them that are different from node policies. This enables many useful scenarios such as allowing internet connectivity only for pods and not for nodes, fixing the source IP for pod in a node pool using a VNet Network NAT, and using NSGs to filter traffic between node pools.
+* **High performance**: Since pods are assigned virtual network IPs, they have direct connectivity to other cluster pods and resources in the VNet. The solution supports very large clusters without any degradation in performance.
+* **Separate VNet policies for pods**: Since pods have a separate subnet, you can configure separate VNet policies for them that are different from node policies. This enables many useful scenarios such as allowing internet connectivity only for pods and not for nodes, fixing the source IP for pods in a node pool using an Azure NAT Gateway, and using NSGs to filter traffic between node pools.
* **Kubernetes network policies**: Both the Azure Network Policies and Calico work with this new solution. This article shows you how to use Azure CNI networking for dynamic allocation of IPs and enhanced subnet support in AKS.
az aks nodepool add --cluster-name $clusterName -g $resourceGroup -n newnodepoo
--no-wait ```
+## Monitor IP subnet usage
+
+Azure CNI provides the capability to monitor IP subnet usage. To enable IP subnet usage monitoring, follow the steps below:
+
+### Get the YAML file
+
+1. Download the file named container-azm-ms-agentconfig.yaml from [GitHub][github].
+
+2. Find **`azure_subnet_ip_usage`** in integrations. Set `enabled` to `true`.
+
+3. Save the file.
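For orientation, the edit described in the steps above amounts to flipping one flag in the downloaded ConfigMap. The snippet below is a hedged sketch of the relevant portion only; the real container-azm-ms-agentconfig.yaml may structure its keys and surrounding settings differently, so edit the downloaded file rather than copying this fragment.

```yaml
# Hedged sketch: only the portion relevant to subnet IP usage monitoring is shown.
apiVersion: v1
kind: ConfigMap
metadata:
  name: container-azm-ms-agentconfig
  namespace: kube-system
data:
  integrations: |-
    [integrations.azure_subnet_ip_usage]
        enabled = true                # set to true to enable the metrics
```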
+
+### Get the AKS credentials
+
+Set the variables for subscription, resource group and cluster. Consider the following as examples:
+
+```azurecli
+
+ s="subscriptionId"
+
+ rg="resourceGroup"
+
+ c="ClusterName"
+
+ az account set -s $s
+
+ az aks get-credentials -n $c -g $rg
+
+```
+
+### Apply the config
+
+1. Open terminal in the folder the downloaded **container-azm-ms-agentconfig.yaml** file is saved.
+
+2. First, apply the config using the command: `kubectl apply -f container-azm-ms-agentconfig.yaml`
+
+3. This will restart the pod and after 5-10 minutes, the metrics will be visible.
+
+4. To view the metrics on the cluster, go to Workbooks on the cluster page in the Azure portal, and find the workbook named "Subnet IP Usage". Your view will look similar to the following:
+
+ :::image type="content" source="media/configure-azure-cni-dynamic-ip-allocation/ip-subnet-usage.png" alt-text="A diagram of the Azure portal's workbook blade is shown, and metrics for an AKS cluster's subnet IP usage are displayed.":::
+ ## Dynamic allocation of IP addresses and enhanced subnet support FAQs * **Can I assign multiple pod subnets to a cluster/node pool?**
Learn more about networking in AKS in the following articles:
* [Create an ingress controller with a dynamic public IP and configure Let's Encrypt to automatically generate TLS certificates][aks-ingress-tls] * [Create an ingress controller with a static public IP and configure Let's Encrypt to automatically generate TLS certificates][aks-ingress-static-tls]
+<!-- LINKS - External -->
+[github]: https://raw.githubusercontent.com/microsoft/Docker-Provider/ci_prod/kubernetes/container-azm-ms-agentconfig.yaml
+ <!-- LINKS - Internal --> [aks-ingress-basic]: ingress-basic.md [aks-ingress-tls]: ingress-tls.md
aks Configure Azure Cni https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/configure-azure-cni.md
description: Learn how to configure Azure CNI (advanced) networking in Azure Kubernetes Service (AKS), including deploying an AKS cluster into an existing virtual network and subnet. + Previously updated : 05/16/2022 Last updated : 04/20/2023
The following screenshot from the Azure portal shows an example of configuring t
:::image type="content" source="../aks/media/networking-overview/portal-01-networking-advanced.png" alt-text="Screenshot from the Azure portal showing an example of configuring these settings during AKS cluster creation.":::
-## Monitor IP subnet usage
-
-Azure CNI provides the capability to monitor IP subnet usage. To enable IP subnet usage monitoring, follow the steps below:
-
-### Get the YAML file
-
-1. Download or grep the file named container-azm-ms-agentconfig.yaml from [GitHub][github].
-2. Find azure_subnet_ip_usage in integrations. Set `enabled` to `true`.
-3. Save the file.
-
-### Get the AKS credentials
-
-Set the variables for subscription, resource group and cluster. Consider the following as examples:
-
-```azurepowershell
-
- $s="subscriptionId"
-
- $rg="resourceGroup"
-
- $c="ClusterName"
-
- az account set -s $s
-
- az aks get-credentials -n $c -g $rg
-
-```
-
-### Apply the config
-
-1. Open terminal in the folder the downloaded container-azm-ms-agentconfig.yaml file is saved.
-2. First, apply the config using the command: `kubectl apply -f container-azm-ms-agentconfig.yaml`
-3. This will restart the pod and after 5-10 minutes, the metrics will be visible.
-4. To view the metrics on the cluster, go to Workbooks on the cluster page in the Azure portal, and find the workbook named "Subnet IP Usage". Your view will look similar to the following:
-
- :::image type="content" source="media/Azure-cni/ip-subnet-usage.png" alt-text="A diagram of the Azure portal's workbook blade is shown, and metrics for an AKS cluster's subnet IP usage are displayed.":::
- ## Frequently asked questions * **Can I deploy VMs in my cluster subnet?**
aks Keda About https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/keda-about.md
Learn more about how KEDA works in the [official KEDA documentation][keda-archit
## Installation and version - KEDA can be added to your Azure Kubernetes Service (AKS) cluster by enabling the KEDA add-on using an [ARM template][keda-arm] or [Azure CLI][keda-cli]. The KEDA add-on provides a fully supported installation of KEDA that is integrated with AKS.
aks Keda Integrations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/keda-integrations.md
The Kubernetes Event-driven Autoscaling (KEDA) add-on integrates with features provided by Azure and open source projects. - [!INCLUDE [preview features callout](./includes/preview/preview-callout.md)] > [!IMPORTANT]
aks Manage Abort Operations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/manage-abort-operations.md
Last updated 3/23/2023
Sometimes deployment or other processes running within pods on nodes in a cluster can run for periods of time longer than expected due to various reasons. While it's important to allow those processes to gracefully terminate when they're no longer needed, there are circumstances where you need to release control of node pools and clusters with long running operations using an *abort* command.
-AKS now supports aborting a long running operation, which is now generally available. This feature allows you to take back control and run another operation seamlessly. This design is supported using the [Azure REST API](/rest/api/azure/) or the [Azure CLI](/cli/azure/).
+AKS support for aborting long running operations is now generally available. This feature allows you to take back control and run another operation seamlessly. This design is supported using the [Azure REST API](/rest/api/azure/) or the [Azure CLI](/cli/azure/).
The abort operation supports the following scenarios:
When you terminate an operation, it doesn't roll back to the previous state and
## Next steps
-Learn more about [Container insights](../azure-monitor/containers/container-insights-overview.md) to understand how it helps you monitor the performance and health of your Kubernetes cluster and container workloads.
+Learn more about [Container insights](../azure-monitor/containers/container-insights-overview.md) to understand how it helps you monitor the performance and health of your Kubernetes cluster and container workloads.
aks Node Updates Kured https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/node-updates-kured.md
Title: Handle Linux node reboots with kured
description: Learn how to update Linux nodes and automatically reboot them with kured in Azure Kubernetes Service (AKS) Previously updated : 02/28/2019+ Last updated : 04/19/2023 #Customer intent: As a cluster administrator, I want to know how to automatically apply Linux updates and reboot nodes in AKS for security and/or compliance
You need the Azure CLI version 2.0.59 or later installed and configured. Run `az
## Understand the AKS node update experience
-In an AKS cluster, your Kubernetes nodes run as Azure virtual machines (VMs). These Linux-based VMs use an Ubuntu or Mariner image, with the OS configured to automatically check for updates every day. If security or kernel updates are available, they are automatically downloaded and installed.
+In an AKS cluster, your Kubernetes nodes run as Azure virtual machines (VMs). These Linux-based VMs use an Ubuntu or Mariner image, with the OS configured to automatically check for updates every day. If security or kernel updates are available, they're automatically downloaded and installed.
![AKS node update and reboot process with kured](media/node-updates-kured/node-reboot-process.png)
You can use your own workflows and processes to handle node reboots, or use `kur
### Node image upgrades
-Unattended upgrades apply updates to the Linux node OS, but the image used to create nodes for your cluster remains unchanged. If a new Linux node is added to your cluster, the original image is used to create the node. This new node will receive all the security and kernel updates available during the automatic check every day but will remain unpatched until all checks and restarts are complete.
+Unattended upgrades apply updates to the Linux node OS, but the image used to create nodes for your cluster remains unchanged. If a new Linux node is added to your cluster, the original image is used to create the node. This new node receives all the security and kernel updates available during the automatic check every day but remains unpatched until all checks and restarts are complete.
-Alternatively, you can use node image upgrade to check for and update node images used by your cluster. For more details on node image upgrade, see [Azure Kubernetes Service (AKS) node image upgrade][node-image-upgrade].
+Alternatively, you can use node image upgrade to check for and update node images used by your cluster. For more information on node image upgrade, see [Azure Kubernetes Service (AKS) node image upgrade][node-image-upgrade].
### Node upgrades
-There is an additional process in AKS that lets you *upgrade* a cluster. An upgrade is typically to move to a newer version of Kubernetes, not just apply node security updates. An AKS upgrade performs the following actions:
+There's another process in AKS that lets you *upgrade* a cluster. An upgrade is typically to move to a newer version of Kubernetes, not just apply node security updates. An AKS upgrade performs the following actions:
* A new node is deployed with the latest security updates and Kubernetes version applied. * An old node is cordoned and drained.
kubectl create namespace kured
helm install my-release kubereboot/kured --namespace kured --set controller.nodeSelector."kubernetes\.io/os"=linux ```
-You can also configure additional parameters for `kured`, such as integration with Prometheus or Slack. For more information about additional configuration parameters, see the [kured Helm chart][kured-install].
+You can also configure extra parameters for `kured`, such as integration with Prometheus or Slack. For more information about configuration parameters, see the [kured Helm chart][kured-install].
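+
+As an illustration, a sketch of passing notification settings at install time. The value names (`configuration.prometheusUrl`, `configuration.slackHookUrl`) are assumptions based on common chart conventions; confirm the exact keys with `helm show values kubereboot/kured` before using them.
+
+```bash
+# Hypothetical configuration keys and placeholder URLs; check the chart's values first
+helm install my-release kubereboot/kured --namespace kured \
+  --set controller.nodeSelector."kubernetes\.io/os"=linux \
+  --set configuration.prometheusUrl=http://prometheus.monitoring.svc:9090 \
+  --set configuration.slackHookUrl=https://hooks.slack.com/services/<hook-id>
+```
+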
## Update cluster nodes
If updates were applied that require a node reboot, a file is written to */var/r
## Monitor and review reboot process
-When one of the replicas in the DaemonSet has detected that a node reboot is required, a lock is placed on the node through the Kubernetes API. This lock prevents additional pods being scheduled on the node. The lock also indicates that only one node should be rebooted at a time. With the node cordoned off, running pods are drained from the node, and the node is rebooted.
+When one of the replicas in the DaemonSet has detected that a node reboot is required, a lock is placed on the node through the Kubernetes API. This lock prevents more pods from being scheduled on the node. The lock also indicates that only one node should be rebooted at a time. With the node cordoned off, running pods are drained from the node, and the node is rebooted.
You can monitor the status of the nodes using the [kubectl get nodes][kubectl-get-nodes] command. The following example output shows a node with a status of *SchedulingDisabled* as the node prepares for the reboot process:
NAME STATUS ROLES AGE VERSIO
aks-nodepool1-28993262-0 Ready,SchedulingDisabled agent 1h v1.11.7 ```
-Once the update process is complete, you can view the status of the nodes using the [kubectl get nodes][kubectl-get-nodes] command with the `--output wide` parameter. This additional output lets you see a difference in *KERNEL-VERSION* of the underlying nodes, as shown in the following example output. The *aks-nodepool1-28993262-0* was updated in a previous step and shows kernel version *4.15.0-1039-azure*. The node *aks-nodepool1-28993262-1* that hasn't been updated shows kernel version *4.15.0-1037-azure*.
+Once the update process is complete, you can view the status of the nodes using the [kubectl get nodes][kubectl-get-nodes] command with the `--output wide` parameter. This output lets you see a difference in *KERNEL-VERSION* of the underlying nodes, as shown in the following example output. The *aks-nodepool1-28993262-0* was updated in a previous step and shows kernel version *4.15.0-1039-azure*. The node *aks-nodepool1-28993262-1* that hasn't been updated shows kernel version *4.15.0-1037-azure*.
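+
+To produce that comparison, run `kubectl get nodes` with the wide output flag, for example:
+
+```bash
+kubectl get nodes --output wide
+```
+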
```output NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
aks Use Mariner https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/use-mariner.md
Title: Use the Mariner container host on Azure Kubernetes Service (AKS)
description: Learn how to use the Mariner container host on Azure Kubernetes Service (AKS) Previously updated : 12/08/2022 Last updated : 04/19/2023 # Use the Mariner container host on Azure Kubernetes Service (AKS)
api-management Api Management Gateways Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-gateways-overview.md
The following table compares features available in the managed gateway versus th
| [TLS settings](api-management-howto-manage-protocols-ciphers.md) | ✔️ | ✔️ | ✔️ | | **HTTP/2** (Client-to-gateway) | ❌ | ❌ | ✔️ | | **HTTP/2** (Gateway-to-backend) | ❌ | ❌ | ✔️ |
+| API threat detection with [Defender for APIs](protect-with-defender-for-apis.md) | ✔️ | ❌ | ❌ |
<sup>1</sup> Depends on how the gateway is deployed, but is the responsibility of the customer.<br/> <sup>2</sup> Connectivity to the self-hosted gateway v2 [configuration endpoint](self-hosted-gateway-overview.md#fqdn-dependencies) requires DNS resolution of the default endpoint hostname; custom domain name is currently not supported.<br/>
The following table compares features available in the managed gateway versus th
| API | Managed (Dedicated) | Managed (Consumption) | Self-hosted | | | -- | -- | - | | [OpenAPI specification](import-api-from-oas.md) | ✔️ | ✔️ | ✔️ |
-| [WSDL specification)](import-soap-api.md) | ✔️ | ✔️ | ✔️ |
+| [WSDL specification](import-soap-api.md) | ✔️ | ✔️ | ✔️ |
| WADL specification | ✔️ | ✔️ | ✔️ | | [Logic App](import-logic-app-as-api.md) | ✔️ | ✔️ | ✔️ | | [App Service](import-app-service-as-api.md) | ✔️ | ✔️ | ✔️ |
The following table compares features available in the managed gateway versus th
| [Container App](import-container-app-with-oas.md) | ✔️ | ✔️ | ✔️ | | [Service Fabric](../service-fabric/service-fabric-api-management-overview.md) | Developer, Premium | ❌ | ❌ | | [Pass-through GraphQL](graphql-apis-overview.md) | ✔️ | ✔️ | ❌ |
-| [Synthetic GraphQL](graphql-apis-overview.md)| ✔️ | ✔️ | ❌ |
+| [Synthetic GraphQL](graphql-apis-overview.md)| ✔️ | ✔️<sup>1</sup> | ❌ |
| [Pass-through WebSocket](websocket-api.md) | ✔️ | ❌ | ✔️ |
+<sup>1</sup> Synthetic GraphQL subscriptions (preview) aren't supported in the Consumption tier.
+ ### Policies Managed and self-hosted gateways support all available [policies](api-management-policies.md) in policy definitions with the following exceptions.
api-management Api Management Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-policies.md
More information about policies:
- [Validate GraphQL request](validate-graphql-request-policy.md) - Validates and authorizes a request to a GraphQL API. - [Validate parameters](validate-parameters-policy.md) - Validates the request header, query, or path parameters against the API schema. - [Validate headers](validate-headers-policy.md) - Validates the response headers against the API schema.-- [Validate status code](validate-status-code-policy.md) - Validates the HTTP status codes in
+- [Validate status code](validate-status-code-policy.md) - Validates the HTTP status codes in responses against the API schema.
## Next steps For more information about working with policies, see:
api-management Api Management Subscriptions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-subscriptions.md
Previously updated : 12/16/2022 Last updated : 04/19/2023
In Azure API Management, *subscriptions* are the most common way for API consumers to access APIs published through an API Management instance. This article provides an overview of the concept.
+> [!NOTE]
+> An API Management subscription is used specifically to call APIs through API Management. It's not the same as an Azure subscription.
+ ## What are subscriptions? By publishing APIs through API Management, you can easily secure API access using subscription keys. Developers who need to consume the published APIs must include a valid subscription key in HTTP requests when calling those APIs. Without a valid subscription key, the calls are:
Each API Management instance comes with an immutable, all-APIs subscription (als
### Standalone subscriptions
-API Management also allows *standalone* subscriptions, which are not associated with a developer account. This feature proves useful in scenarios similar to several developers or teams sharing a subscription.
+API Management also allows *standalone* subscriptions, which aren't associated with a developer account. This feature is useful in scenarios such as several developers or teams sharing a subscription.
Creating a subscription without assigning an owner makes it a standalone subscription. To grant developers and the rest of your team access to the standalone subscription key, either: * Manually share the subscription key.
API publishers can [create subscriptions](api-management-howto-create-subscripti
When created in the portal, a subscription is in the **Active** state, meaning a subscriber can call an associated API using a valid subscription key. You can change the state of the subscription as needed - for example, you can suspend, cancel, or delete the subscription to prevent API access.
+## Use a subscription key
+
+A subscriber can use an API Management subscription key in one of two ways:
+
+* Add the **Ocp-Apim-Subscription-Key** HTTP header to the request, passing the value of a valid subscription key.
+
+* Include the **subscription-key** query parameter and a valid value in the URL. The query parameter is checked only if the header isn't present.
+
+> [!TIP]
+> **Ocp-Apim-Subscription-Key** is the default name of the subscription key header, and **subscription-key** is the default name of the query parameter. If desired, you may modify these names in the settings for each API. For example, in the portal, update these names on the **Settings** tab of an API.
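+
+For example, the following sketch calls an API both ways with `curl`; the instance name, API path, operation, and key value are placeholders:
+
+```bash
+# Pass the key in the default header
+curl -H "Ocp-Apim-Subscription-Key: <subscription-key-value>" \
+  "https://<apim-instance-name>.azure-api.net/<api-path>/<operation>"
+
+# Or pass the key in the default query parameter (checked only when the header is absent)
+curl "https://<apim-instance-name>.azure-api.net/<api-path>/<operation>?subscription-key=<subscription-key-value>"
+```
+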
+ ## Enable or disable subscription requirement for API or product access By default when you create an API, a subscription key is required for API access. Similarly, when you create a product, by default a subscription key is required to access any API that's added to the product. Under certain scenarios, an API publisher might want to publish a product or a particular API to the public without the requirement of subscriptions. While a publisher could choose to enable unsecured (anonymous) access to certain APIs, configuring another mechanism to secure client access is recommended.
api-management Metrics Retirement Aug 2023 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/breaking-changes/metrics-retirement-aug-2023.md
+
+ Title: Azure API Management - Metrics retirement (August 2023)
+description: Azure API Management is retiring five legacy metrics as of August 2023. If you monitor your API Management instance using these metrics, you must update your monitoring settings and alert rules to use the Requests metric.
+
+documentationcenter: ''
+++ Last updated : 04/20/2023+++
+# Metrics retirements (August 2023)
+
+Azure API Management integrates natively with Azure Monitor and emits metrics every minute, giving customers visibility into the state and health of their APIs. The following five legacy metrics have been deprecated since May 2019 and will no longer be available after 31 August 2023:
+
+* Total Gateway Requests
+* Successful Gateway Requests
+* Unauthorized Gateway Requests
+* Failed Gateway Requests
+* Other Gateway Requests
+
+To enable a more granular view of API traffic and better performance, API Management provides a replacement metric named **Requests**. The Requests metric has dimensions that can be filtered to replace the legacy metrics and to support more monitoring scenarios.
+
+From now through 31 August 2023, you can continue to use the five legacy metrics without impact. You can transition to the Requests metric at any point prior to 31 August 2023.
+
+## Is my service affected by this?
+
+Your service itself isn't affected by this change. However, any tool, script, or program that uses the five retired metrics for monitoring or alert rules is affected, and you won't be able to run those tools successfully unless you update them.
+
+## What is the deadline for the change?
+
+The five legacy metrics will no longer be available after 31 August 2023.
+
+## Required action
+
+Update any tools that use the five legacy metrics to use equivalent functionality that is provided through the Requests metric filtered on one or more dimensions. For example, filter Requests on the **GatewayResponseCode** or **GatewayResponseCodeCategory** dimension.
+
+> [!NOTE]
+> Configure filters on the Requests metric to meet your monitoring and alerting needs. For available dimensions, see [Azure Monitor metrics for API Management](../../azure-monitor/essentials/metrics-supported.md#microsoftapimanagementservice).
++
+|Legacy metric |Example replacement with Requests metric|
+|||
+|Total Gateway Requests | Requests |
+|Successful Gateway Requests | Requests<br/> Filter: GatewayResponseCode = 0-301,304,307 |
+|Unauthorized Gateway Requests | Requests<br/> Filter: GatewayResponseCode = 401,403,429 |
+|Failed Gateway Requests | Requests<br/> Filter: GatewayResponseCode = 400,500-599 |
+|Other Gateway Requests | Requests<br/> Filter: GatewayResponseCode = (all other values) |
+
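+As an illustration, a sketch of an Azure CLI alert rule on the Requests metric filtered on the **GatewayResponseCodeCategory** dimension. The resource ID, threshold, and names are placeholders, and the condition syntax should be verified with `az monitor metrics alert create --help`:
+
+```bash
+az monitor metrics alert create \
+  --name "apim-failed-requests" \
+  --resource-group <resource-group> \
+  --scopes "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.ApiManagement/service/<apim-name>" \
+  --condition "total Requests > 100 where GatewayResponseCodeCategory includes 5xx" \
+  --description "Replacement for the legacy Failed Gateway Requests metric"
+```
+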
+## More information
+
+* [Tutorial: Monitor published APIs](../api-management-howto-use-azure-monitor.md)
+* [Get API analytics in Azure API Management](../howto-use-analytics.md)
+* [Observability in API Management](../observability.md)
+
+## Next steps
+
+See all [upcoming breaking changes and feature retirements](overview.md).
api-management Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/breaking-changes/overview.md
Previously updated : 09/07/2022 Last updated : 03/15/2023
The following table lists all the upcoming breaking changes and feature retireme
| Change Title | Effective Date | |:-|:| | [Resource provider source IP address updates][bc1] | March 31, 2023 |
+| [Metrics retirements][metrics2023] | August 31, 2023 |
| [Resource provider source IP address updates][rp2023] | September 30, 2023 | | [API version retirements][api2023] | September 30, 2023 | | [Deprecated (legacy) portal retirement][devportal2023] | October 31, 2023 |
The following table lists all the upcoming breaking changes and feature retireme
[stv12024]: ./stv1-platform-retirement-august-2024.md [msal2025]: ./identity-provider-adal-retirement-sep-2025.md [captcha2025]: ./captcha-endpoint-change-sep-2025.md
+[metrics2023]: ./metrics-retirement-aug-2023.md
api-management Rp Source Ip Address Change Mar 2023 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/breaking-changes/rp-source-ip-address-change-mar-2023.md
On 31 March, 2023 as part of our continuing work to increase the resiliency of A
This change will have NO effect on the availability of your API Management service. However, you **may** have to take steps described below to configure your API Management service beyond 31 March, 2023.
+> These changes were completed between April 1, 2023 and April 20, 2023. You can remove the IP addresses noted in the _Old IP Address_ column from your NSG.
+ ## Is my service affected by this change? Your service is impacted by this change if:
api-management Diagnostic Logs Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/diagnostic-logs-reference.md
This reference describes settings for API diagnostics logging from an API Manage
| Always log errors | boolean | If this setting is enabled, all failures are logged, regardless of the **Sampling** setting. | Log client IP address | boolean | If this setting is enabled, the client IP address for API requests is logged. | | Verbosity | | Specifies the verbosity of the logs and whether custom traces that are configured in [trace](trace-policy.md) policies are logged. <br/><br/>* Error - failed requests, and custom traces of severity `error`<br/>* Information - failed and successful requests, and custom traces of severity `error` and `information`<br/> * Verbose - failed and successful requests, and custom traces of severity `error`, `information`, and `verbose`<br/><br/>Default: Information |
-| Correlation protocol | | Specifies the protocol used to correlate telemetry sent by multiple components to Application Insights. Default: Legacy <br/><br/>For information, see [Telemetry correlation in Application Insights](../azure-monitor/app/correlation.md). |
+| Correlation protocol | | Specifies the protocol used to correlate telemetry sent by multiple components to Application Insights. Default: Legacy <br/><br/>For information, see [Telemetry correlation in Application Insights](../azure-monitor/app/distributed-tracing-telemetry-correlation.md). |
| Headers to log | list | Specifies the headers that are logged for requests and responses. Default: no headers are logged. | | Number of payload bytes to log | integer | Specifies the number of initial bytes of the body that are logged for requests and responses. Default: 0 | | Frontend Request | | Specifies whether and how *frontend requests* (requests incoming to the API Management gateway) are logged.<br/><br/> If this setting is enabled, specify **Headers to log**, **Number of payload bytes to log**, or both. |
api-management Graphql Apis Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/graphql-apis-overview.md
API Management helps you import, manage, protect, test, publish, and monitor Gra
* GraphQL APIs are supported in all API Management service tiers * Pass-through and synthetic GraphQL APIs currently aren't supported in a self-hosted gateway
-* GraphQL subscription support in synthetic GraphQL APIs is currently in preview
+* Support for GraphQL subscriptions in synthetic GraphQL APIs is currently in preview and isn't available in the Consumption tier
## What is GraphQL?
api-management Mitigate Owasp Api Threats https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/mitigate-owasp-api-threats.md
description: Learn how to protect against common API-based vulnerabilities, as i
Previously updated : 05/31/2022 Last updated : 04/13/2023
The Open Web Application Security Project ([OWASP](https://owasp.org/about/)) Fo
The OWASP [API Security Project](https://owasp.org/www-project-api-security/) focuses on strategies and solutions to understand and mitigate the unique *vulnerabilities and security risks of APIs*. In this article, we'll discuss recommendations to use Azure API Management to mitigate the top 10 API threats identified by OWASP.
+> [!NOTE]
+> In addition to following the recommendations in this article, you can enable Defender for APIs (preview), a capability of [Microsoft Defender for Cloud](/azure/defender-for-cloud/defender-for-cloud-introduction), for API security insights, recommendations, and threat detection. [Learn more about using Defender for APIs with API Management](protect-with-defender-for-apis.md).
+ ## Broken object level authorization API objects that aren't protected with the appropriate level of authorization may be vulnerable to data leaks and unauthorized data manipulation through weak object access identifiers. For example, an attacker could exploit an integer object identifier, which can be iterated.
More information about this threat: [API10:2019 Insufficient logging and monito
## Next steps
+Learn more about:
+ * [Authentication and authorization in API Management](authentication-authorization-overview.md) * [Security baseline for API Management](/security/benchmark/azure/baselines/api-management-security-baseline) * [Security controls by Azure policy](security-controls-policy.md) * [Landing zone accelerator for API Management](/azure/cloud-adoption-framework/scenarios/app-platform/api-management/landing-zone-accelerator)
+* [Microsoft Defender for Cloud](/azure/defender-for-cloud/defender-for-cloud-introduction)
api-management Index https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/policies/index.md
| [Get X-CSRF token from SAP gateway using send request policy](./get-x-csrf-token-from-sap-gateway.md) | Shows how to implement X-CSRF pattern used by many APIs. This example is specific to SAP Gateway. | | [Route the request based on the size of its body](./route-requests-based-on-size.md) | Demonstrates how to route requests based on the size of their bodies. | | [Send request context information to the backend service](./send-request-context-info-to-backend-service.md) | Shows how to send some context information to the backend service for logging or processing. |
-| [Set response cache duration](./set-cache-duration.md) | Demonstrates how to set response cache duration using maxAge value in Cache-Control header sent by the backend. |
| **Outbound policies** | **Description** | | [Filter response content](./filter-response-content.md) | Demonstrates how to filter data elements from the response payload based on the product associated with the request. |
+| [Set response cache duration](./set-cache-duration.md) | Demonstrates how to set response cache duration using maxAge value in Cache-Control header sent by the backend. |
| **On-error policies** | **Description** | | [Log errors to Stackify](./log-errors-to-stackify.md) | Shows how to add an error logging policy to send errors to Stackify for logging. |
api-management Protect With Defender For Apis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/protect-with-defender-for-apis.md
+
+ Title: Protect APIs in API Management with Defender for APIs
+description: Learn how to enable advanced API security features in Azure API Management by using Microsoft Defender for Cloud.
+++++ Last updated : 04/20/2023++
+# Enable advanced API security features using Microsoft Defender for Cloud
+<!-- Update links to D4APIs docs when available -->
+
+Defender for APIs, a capability of [Microsoft Defender for Cloud](/azure/defender-for-cloud/defender-for-cloud-introduction), offers full lifecycle protection, detection, and response coverage for APIs that are managed in Azure API Management. The service empowers security practitioners to gain visibility into their business-critical APIs, understand their security posture, prioritize vulnerability fixes, and detect active runtime threats within minutes.
+
+Capabilities of Defender for APIs include:
+
+* Identify external, unused, or unauthenticated APIs
+* Classify APIs that receive or respond with sensitive data
+* Apply configuration recommendations to strengthen the security posture of APIs and API Management services
+* Detect anomalous and suspicious API traffic patterns and exploits of OWASP API top 10 vulnerabilities
+* Prioritize threat remediation
+* Integrate with SIEM systems and Defender Cloud Security Posture Management
+
+This article shows how to use the Azure portal to enable Defender for APIs from your API Management instance and view a summary of security recommendations and alerts for onboarded APIs.
++
+## Preview limitations
+
+* Currently, Defender for APIs discovers and analyzes REST APIs only.
+* Defender for APIs currently doesn't onboard APIs that are exposed using the API Management [self-hosted gateway](self-hosted-gateway-overview.md) or managed using API Management [workspaces](workspaces-overview.md).
+* Some ML-based detections and security insights (data classification, authentication check, unused and external APIs) aren't supported in secondary regions in [multi-region](api-management-howto-deploy-multi-region.md) deployments. Defender for APIs relies on local data pipelines to ensure regional data residency and improved performance in such deployments.
+
+
+## Prerequisites
+
+* At least one API Management instance in an Azure subscription. Defender for APIs is enabled at the level of a subscription.
+* One or more supported APIs must be imported to the API Management instance.
+* Role assignment to [enable the Defender for APIs plan](/azure/defender-for-cloud/permissions).
+* Contributor or Owner role assignment on relevant Azure subscriptions, resource groups, or API Management instances that you want to secure.
+
+## Onboard to Defender for APIs
+
+Onboarding APIs to Defender for APIs is a two-step process: enabling the Defender for APIs plan for the subscription, and onboarding unprotected APIs in your API Management instances.
+
+> [!TIP]
+> You can also onboard to Defender for APIs directly in the Defender for Cloud interface, where more API security insights and inventory experiences are available.
++
+### Enable the Defender for APIs plan for a subscription
+
+1. Sign in to the [portal](https://portal.azure.com), and go to your API Management instance.
+
+1. In the left menu, select **Microsoft Defender for Cloud (preview)**.
+
+1. Select **Enable Defender on the subscription**.
+
+ :::image type="content" source="media/protect-with-defender-for-apis/enable-defender-for-apis.png" alt-text="Screenshot showing how to enable Defender for APIs in the portal." lightbox="media/protect-with-defender-for-apis/enable-defender-for-apis.png":::
+
+1. On the **Defender plan** page, select **On** for the **APIs** plan.
+
+1. Select **Save**.
+
+### Onboard unprotected APIs to Defender for APIs
+
+> [!CAUTION]
+> Onboarding APIs to Defender for APIs may increase compute, memory, and network utilization of your API Management instance, which in extreme cases may cause an outage of the API Management instance. Do not onboard all APIs at one time if your API Management instance is running at high utilization. Use caution by gradually onboarding APIs, while monitoring the utilization of your instance (for example, using [the capacity metric](api-management-capacity.md)) and scaling out as needed.
+
+1. In the portal, go back to your API Management instance.
+1. In the left menu, select **Microsoft Defender for Cloud (preview)**.
+1. Under **Recommendations**, select **Azure API Management APIs should be onboarded to Defender for APIs**.
+ :::image type="content" source="media/protect-with-defender-for-apis/defender-for-apis-recommendations.png" alt-text="Screenshot of Defender for APIs recommendations in the portal." lightbox="media/protect-with-defender-for-apis/defender-for-apis-recommendations.png":::
+1. On the next screen, review details about the recommendation:
+    * Severity
+ * Refresh interval for security findings
+ * Description and remediation steps
+ * Affected resources, classified as **Healthy** (onboarded to Defender for APIs), **Unhealthy** (not onboarded), or **Not applicable**, along with associated metadata from API Management
+
+ > [!NOTE]
+ > Affected resources include API collections (APIs) from all API Management instances under the subscription.
+
+1. From the list of **Unhealthy** resources, select the API(s) that you wish to onboard to Defender for APIs.
+1. Select **Fix**, and then select **Fix resources**.
+ :::image type="content" source="media/protect-with-defender-for-apis/fix-unhealthy-resources.png" alt-text="Screenshot of onboarding unhealthy APIs in the portal." lightbox="media/protect-with-defender-for-apis/fix-unhealthy-resources.png":::
+1. Track the status of onboarded resources under **Notifications**.
+
+> [!NOTE]
+> Defender for APIs takes 30 minutes to generate its first security insights after onboarding an API. Thereafter, security insights are refreshed every 30 minutes.
+>
+
+## View security coverage
+
+After you onboard the APIs from API Management, Defender for APIs receives API traffic that will be used to build security insights and monitor for threats. Defender for APIs generates security recommendations for risky and vulnerable APIs.
+
+You can view a summary of all security recommendations and alerts for onboarded APIs by selecting **Microsoft Defender for Cloud (preview)** in the menu for your API Management instance:
+
+1. In the portal, go to your API Management instance and select **Microsoft Defender for Cloud (preview)** from the left menu.
+1. Review **Recommendations** and **Security insights and alerts**.
+
+ :::image type="content" source="media/protect-with-defender-for-apis/view-security-insights.png" alt-text="Screenshot of API security insights in the portal." lightbox="media/protect-with-defender-for-apis/view-security-insights.png":::
+
+For the security alerts received, Defender for APIs suggests necessary steps to perform the required analysis and validate the potential exploit or anomaly associated with the APIs. Follow the steps in the security alert to fix and return the APIs to healthy status.
+
+## Offboard protected APIs from Defender for APIs
+
+You can remove APIs from protection by Defender for APIs by using Defender for Cloud in the portal. For more information, see the Microsoft Defender for Cloud documentation.
+
+## Next steps
+
+* Learn more about [Defender for Cloud](/azure/defender-for-cloud/defender-for-cloud-introduction)
+* Learn how to [upgrade and scale](upgrade-and-scale.md) an API Management instance
app-service Nat Gateway Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/networking/nat-gateway-integration.md
Title: NAT gateway integration - Azure App Service | Microsoft Docs
+ Title: Azure NAT Gateway integration - Azure App Service | Microsoft Docs
description: Describes how NAT gateway integrates with Azure App Service.
ms.devlang: azurecli
-# Virtual Network NAT gateway integration
+# Azure NAT Gateway integration
-NAT gateway is a fully managed, highly resilient service, which can be associated with one or more subnets and ensures that all outbound Internet-facing traffic will be routed through the gateway. With App Service, there are two important scenarios that you can use NAT gateway for.
+Azure NAT Gateway is a fully managed, highly resilient service that can be associated with one or more subnets and ensures that all outbound Internet-facing traffic is routed through the gateway. With App Service, there are two important scenarios in which you can use a NAT gateway.
The NAT gateway gives you a static predictable public IP for outbound Internet-facing traffic. It also significantly increases the available [SNAT ports](../troubleshoot-intermittent-outbound-connection-errors.md) in scenarios where you have a high number of concurrent connections to the same public address/port combination.
-For more information and pricing. Go to the [NAT gateway overview](../../virtual-network/nat-gateway/nat-overview.md).
+For more information and pricing, see the [Azure NAT Gateway overview](../../virtual-network/nat-gateway/nat-overview.md).
:::image type="content" source="./media/nat-gateway-integration/nat-gateway-overview.png" alt-text="Diagram shows Internet traffic flowing to a NAT gateway in an Azure Virtual Network."::: > [!Note]
-> * Using NAT gateway with App Service is dependent on virtual network integration, and therefore a supported App Service plan pricing tier is required.
-> * When using NAT gateway together with App Service, all traffic to Azure Storage must be using private endpoint or service endpoint.
-> * NAT gateway cannot be used together with App Service Environment v1 or v2.
+> * Using a NAT gateway with App Service is dependent on virtual network integration, and therefore a supported App Service plan pricing tier is required.
+> * When using a NAT gateway together with App Service, all traffic to Azure Storage must be using private endpoint or service endpoint.
+> * A NAT gateway cannot be used together with App Service Environment v1 or v2.
## Configuring NAT gateway integration
To configure NAT gateway integration with App Service, you need to complete the
* Ensure [Route All](../overview-vnet-integration.md#routes) is enabled for your virtual network integration so the Internet bound traffic will be affected by routes in your virtual network. * Provision a NAT gateway with a public IP and associate it with the virtual network integration subnet.
-Set up NAT gateway through the portal:
+Set up Azure NAT Gateway through the portal:
1. Go to the **Networking** UI in the App Service portal and select virtual network integration in the Outbound Traffic section. Ensure that your app is integrated with a subnet and **Route All** has been enabled. :::image type="content" source="./media/nat-gateway-integration/nat-gateway-route-all-enabled.png" alt-text="Screenshot of Route All enabled for virtual network integration.":::
Associate the NAT gateway with the virtual network integration subnet:
az network vnet subnet update --resource-group [myResourceGroup] --vnet-name [myVnet] --name [myIntegrationSubnet] --nat-gateway myNATgateway ```
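+
+If you don't already have a NAT gateway, a minimal provisioning sketch for the step before the association command above; the resource names are placeholders:
+
+```bash
+# Create a standard public IP for the NAT gateway
+az network public-ip create --resource-group [myResourceGroup] --name myPublicIP --sku Standard
+
+# Create the NAT gateway and attach the public IP
+az network nat gateway create --resource-group [myResourceGroup] --name myNATgateway \
+  --public-ip-addresses myPublicIP --idle-timeout 10
+```
+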
-## Scaling NAT gateway
+## Scaling a NAT gateway
-The same NAT gateway can be used across multiple subnets in the same Virtual Network allowing a NAT gateway to be used across multiple apps and App Service plans.
+The same NAT gateway can be used across multiple subnets in the same virtual network allowing a NAT gateway to be used across multiple apps and App Service plans.
-NAT gateway supports both public IP addresses and public IP prefixes. A NAT gateway can support up to 16 IP addresses across individual IP addresses and prefixes. Each IP address allocates 64,512 ports (SNAT ports) allowing up to 1M available ports. Learn more in the [Scaling section](../../virtual-network/nat-gateway/nat-gateway-resource.md#scalability) of NAT gateway.
+Azure NAT Gateway supports both public IP addresses and public IP prefixes. A NAT gateway can support up to 16 IP addresses across individual IP addresses and prefixes. Each IP address allocates 64,512 ports (SNAT ports) allowing up to 1M available ports. Learn more in the [Scaling section](../../virtual-network/nat-gateway/nat-gateway-resource.md#scalability) of Azure NAT Gateway.
## Next steps
-For more information on the NAT gateway, see [NAT gateway documentation](../../virtual-network/nat-gateway/nat-overview.md).
+For more information on Azure NAT Gateway, see [Azure NAT Gateway documentation](../../virtual-network/nat-gateway/nat-overview.md).
For more information on virtual network integration, see [Virtual network integration documentation](../overview-vnet-integration.md).
app-service Quickstart Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/quickstart-java.md
adobe-target-content: ./quickstart-java-uiex
## Next steps > [!div class="nextstepaction"]
-> [Connect to Azure DB for PostgreSQL with Java](../postgresql/connect-java.md)
+> [Connect to Azure Database for PostgreSQL with Java](../postgresql/connect-java.md)
> [!div class="nextstepaction"] > [Set up CI/CD](deploy-continuous-deployment.md)
automation Automation Hybrid Runbook Worker https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-hybrid-runbook-worker.md
Title: Azure Automation Hybrid Runbook Worker overview
description: Know about Hybrid Runbook Worker. How to install and run the runbooks on machines in your local datacenter or cloud provider. Previously updated : 03/21/2023 Last updated : 04/20/2023
For machines hosting the system Hybrid Runbook worker managed by Update Manageme
:::image type="content" source="./media/automation-hybrid-runbook-worker/system-hybrid-runbook-worker.png" alt-text="System Hybrid Runbook Worker technical diagram":::
-When you start a runbook on a user Hybrid Runbook Worker, you specify the group that it runs on. Each worker in the group polls Azure Automation to see if any jobs are available. If a job is available, the first worker to get the job takes it. The processing time of the jobs queue depends on the hybrid worker hardware profile and load. You can't specify a particular worker. Hybrid worker works on a polling mechanism (every 30 secs) and follows an order of first-come, first-serve. Depending on when a job was pushed, whichever hybrid worker pings the Automation service picks up the job. A single hybrid worker can generally pick up four jobs per ping (that is, every 30 seconds). If your rate of pushing jobs is higher than four per 30 seconds, then there's a high possibility another hybrid worker in the Hybrid Runbook Worker group picked up the job.
+A Hybrid Worker group with Hybrid Runbook Workers is designed for high availability and load balancing by allocating jobs across multiple workers. For runbooks to run successfully, Hybrid Workers must be healthy and send a heartbeat. Hybrid Workers use a polling mechanism to pick up jobs. If no worker in the Hybrid Worker group has pinged the Automation service in the last 30 minutes, the group is considered to have no active workers, and jobs are suspended after three retry attempts.
+
+When you start a runbook on a user Hybrid Runbook Worker, you specify the group it runs on; you can't specify a particular worker. Each active Hybrid Worker in the group polls for jobs every 30 seconds to see if any jobs are available. Workers pick up jobs on a first-come, first-served basis. Depending on when a job was pushed, whichever Hybrid Worker within the group pings the Automation service first picks up the job. The processing time of the jobs queue also depends on the Hybrid Worker hardware profile and load.
+
+A single hybrid worker can generally pick up four jobs per ping (that is, every 30 seconds). If your rate of pushing jobs is higher than four per 30 seconds and no other worker picks up the job, the job might get suspended with an error.
A Hybrid Runbook Worker doesn't have many of the [Azure sandbox](automation-runbook-execution.md#runbook-execution-environment) resource [limits](../azure-resource-manager/management/azure-subscription-service-limits.md#automation-limits) on disk space, memory, or network sockets. The limits on a hybrid worker are only related to the worker's own resources, and they aren't constrained by the [fair share](automation-runbook-execution.md#fair-share) time limit that Azure sandboxes have.
automation Automation Secure Asset Encryption https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-secure-asset-encryption.md
Previously updated : 07/27/2021 Last updated : 04/20/2023 # Encryption of secure assets in Azure Automation
-Secure assets in Azure Automation include credentials, certificates, connections, and encrypted variables. These assets are protected in Azure Automation using multiple levels of encryption. Based on the top-level key used for the encryption, there are two models for encryption:
+Azure Automation secures assets such as credentials, certificates, connections, and encrypted variables by using multiple levels of encryption, which enhances the security of these assets. Additionally, to ensure greater security and privacy for customer code, runbooks and DSC scripts are also encrypted. Encryption in Azure Automation follows two models, depending on the top-level key used for encryption:
- Using Microsoft-managed keys - Using keys that you manage + ## Microsoft-managed Keys By default, your Azure Automation account uses Microsoft-managed keys.
azure-arc Automated Integration Testing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/automated-integration-testing.md
At a high-level, the launcher performs the following sequence of steps:
12. Attempt to use the SAS token `LOGS_STORAGE_ACCOUNT_SAS` provided to create a new Storage Account container named based on `LOGS_STORAGE_CONTAINER`, in the **pre-existing** Storage Account `LOGS_STORAGE_ACCOUNT`. If Storage Account container already exists, use it. Upload all local test results and logs to this storage container as a tarball (see below). 13. Exit.
+## Tests performed per test suite
+
+There are approximately **375** unique integration tests available, across **27** test suites - each testing a separate functionality.
+
+| Suite # | Test suite name | Description of test |
+| - | | |
+| 1 | `ad-connector` | Tests the deployment and update of an Active Directory Connector (AD Connector). |
+| 2 | `billing` | Testing various Business Critical license types are reflected in resource table in controller, used for Billing upload. |
+| 3 | `ci-billing` | Similar as `billing`, but with more CPU/Memory permutations. |
+| 4 | `ci-sqlinstance` | Long running tests for multi-replica creation, updates, GP -> BC Update, Backup validation and SQL Server Agent. |
+| 5 | `controldb` | Tests Control database - SA secret check, system login verification, audit creation, and sanity checks for SQL build version. |
+| 6 | `dc-export` | Indirect Mode billing and usage upload. |
+| 7 | `direct-crud` | Creates a SQL instance using ARM calls, validates in both Kubernetes and ARM. |
+| 8 | `direct-fog` | Creates multiple SQL instances and creates a Failover Group between them using ARM calls. |
+| 9 | `direct-hydration` | Creates SQL Instance with Kubernetes API, validates presence in ARM. |
+| 10 | `direct-upload` | Validates billing upload in Direct Mode |
+| 11 | `kube-rbac` | Ensures Kubernetes Service Account permissions for Arc Data Services matches least-privilege expectations. |
+| 12 | `nonroot` | Ensures containers run as non-root user |
+| 13 | `postgres` | Completes various Postgres creation, scaling, backup/restore tests. |
+| 14 | `release-sanitychecks` | Sanity checks for month-to-month releases, such as SQL Server Build versions. |
+| 15 | `sqlinstance` | Shorter version of `ci-sqlinstance`, for fast validations. |
+| 16 | `sqlinstance-ad` | Tests creation of SQL Instances with Active Directory Connector. |
+| 17 | `sqlinstance-credentialrotation` | Tests automated Credential Rotation for both General Purpose and Business Critical. |
+| 18 | `sqlinstance-ha` | Various High Availability Stress tests, including pod reboots, forced failovers and suspensions. |
+| 19 | `sqlinstance-tde` | Various Transparent Data Encryption tests. |
+| 20 | `telemetry-elasticsearch` | Validates Log ingestion into Elasticsearch. |
+| 21 | `telemetry-grafana` | Validates Grafana is reachable. |
+| 22 | `telemetry-influxdb` | Validates Metric ingestion into InfluxDB. |
+| 23 | `telemetry-kafka` | Various tests for Kafka using SSL, single/multi-broker setup. |
+| 24 | `telemetry-monitorstack` | Tests Monitoring components, such as `Fluentbit` and `Collectd` are functional. |
+| 25 | `telemetry-telemetryrouter` | Tests Open Telemetry. |
+| 26 | `telemetry-webhook` | Tests Data Services Webhooks with valid and invalid calls. |
+| 27 | `upgrade-arcdata` | Upgrades a full suite of SQL Instances (GP, BC 2 replica, BC 3 replica, with Active Directory) and upgrades from last month's release to latest build. |
+
+As an example, for `sqlinstance-ha`, the following tests are performed:
+
+- `test_critical_configmaps_present`: Ensures the ConfigMaps and relevant fields are present for a SQL Instance.
+- `test_suspended_system_dbs_auto_heal_by_orchestrator`: Ensures that if `master` and `msdb` are suspended by any means (in this case, by the user), Orchestrator maintenance reconcile auto-heals them.
+- `test_suspended_user_db_does_not_auto_heal_by_orchestrator`: Ensures that if a user database is deliberately suspended by the user, Orchestrator maintenance reconcile doesn't auto-heal it.
+- `test_delete_active_orchestrator_twice_and_delete_primary_pod`: Deletes orchestrator pod multiple times, followed by the primary replica, and verifies all replicas are synchronized. Failover time expectations for 2 replica are relaxed.
+- `test_delete_primary_pod`: Deletes primary replica and verifies all replicas are synchronized. Failover time expectations for 2 replica are relaxed.
+- `test_delete_primary_and_orchestrator_pod`: Deletes primary replica and orchestrator pod and verifies all replicas are synchronized.
+- `test_delete_primary_and_controller`: Deletes primary replica and data controller pod and verifies primary endpoint is accessible and the new primary replica is synchronized. Failover time expectations for 2 replica are relaxed.
+- `test_delete_one_secondary_pod`: Deletes secondary replica and data controller pod and verifies all replicas are synchronized.
+- `test_delete_two_secondaries_pods`: Deletes secondary replicas and data controller pod and verifies all replicas are synchronized.
+- `test_delete_controller_orchestrator_secondary_replica_pods`:
+- `test_failaway`: Forces AG failover away from current primary, ensures the new primary is not the same as the old primary. Verifies all replicas are synchronized.
+- `test_update_while_rebooting_all_non_primary_replicas`: Tests Controller-driven updates are resilient with retries despite various turbulent circumstances.
+
+> [!NOTE]
+> Certain tests may require specific hardware, such as privileged Access to Domain Controllers for `ad` tests for Account and DNS entry creation - which may not be available in all environments looking to use the `arc-ci-launcher`.
+ ## Examining Test Results A sample storage container and file uploaded by the launcher:
azure-arc Tutorial Akv Secrets Provider https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/tutorial-akv-secrets-provider.md
Title: Use Azure Key Vault Secrets Provider extension to fetch secrets into Azure Arc-enabled Kubernetes clusters description: Learn how to set up the Azure Key Vault Provider for Secrets Store CSI Driver interface as an extension on Azure Arc enabled Kubernetes cluster Previously updated : 03/06/2023--- Last updated : 04/21/2023+ # Use the Azure Key Vault Secrets Provider extension to fetch secrets into Azure Arc-enabled Kubernetes clusters
Capabilities of the Azure Key Vault Secrets Provider extension include:
## Install the Azure Key Vault Secrets Provider extension on an Arc-enabled Kubernetes cluster
-You can install the Azure Key Vault Secrets Provider extension on your connected cluster in the Azure portal, by using Azure CLI, or by deploying ARM template.
+You can install the Azure Key Vault Secrets Provider extension on your connected cluster in the Azure portal, by using Azure CLI, or by deploying an ARM template.
> [!TIP] > If the cluster is behind an outbound proxy server, ensure that you connect it to Azure Arc using the [proxy configuration](quickstart-connect-cluster.md#connect-using-an-outbound-proxy-server) option before installing the extension.
You can install the Azure Key Vault Secrets Provider extension on your connected
az k8s-extension create --cluster-name $CLUSTER_NAME --resource-group $RESOURCE_GROUP --cluster-type connectedClusters --extension-type Microsoft.AzureKeyVaultSecretsProvider --name akvsecretsprovider ```
-You should see output similar to this example. Note that it may take several minutes before the secrets provider Helm chart is deployed to the cluster.
+You should see output similar to this example. It may take several minutes before the secrets provider Helm chart is deployed to the cluster.
```json {
You should see output similar to this example.
## Create or select an Azure Key Vault
-Next, specify the Azure Key Vault to use with your connected cluster. If you don't already have one, create a new Key Vault by using the following commands. Keep in mind that the name of your Key Vault must be globally unique.
+Next, specify the Azure Key Vault to use with your connected cluster. If you don't already have one, create a new Key Vault by using the following commands. Keep in mind that the name of your key vault must be globally unique.
Set the following environment variables:
export AZUREKEYVAULT_NAME=<AKV-name>
export AZUREKEYVAULT_LOCATION=<AKV-location> ```
-Next, run the following command
+Next, run the following command:
```azurecli az keyvault create -n $AZUREKEYVAULT_NAME -g $AKV_RESOURCE_GROUP -l $AZUREKEYVAULT_LOCATION
Currently, the Secrets Store CSI Driver on Arc-enabled clusters can be accessed
After the pod starts, the mounted content at the volume path specified in your deployment YAML is available.
-```Bash
+```bash
## show secrets held in secrets-store kubectl exec busybox-secrets-store-inline -- ls /mnt/secrets-store/
The following configuration settings are frequently used with the Azure Key Vaul
| Configuration Setting | Default | Description | | | -- | -- | | enableSecretRotation | false | Boolean type. If `true`, periodically updates the pod mount and Kubernetes Secret with the latest content from external secrets store |
-| rotationPollInterval | 2m | If `enableSecretRotation` is `true`, specifies the secret rotation poll interval duration. This duration can be adjusted based on how frequently the mounted contents for all pods and Kubernetes secrets need to be resynced to the latest. |
+| rotationPollInterval | 2m | If `enableSecretRotation` is `true`, specifies the secret rotation poll interval duration. This duration can be adjusted based on how frequently the mounted contents for all pods and Kubernetes secrets need to be resynced to the latest. |
| syncSecret.enabled | false | Boolean input. In some cases, you may want to create a Kubernetes Secret to mirror the mounted content. If `true`, `SecretProviderClass` allows the `secretObjects` field to define the desired state of the synced Kubernetes Secret objects. | These settings can be specified when the extension is installed by using the `az k8s-extension create` command:
You can use other configuration settings as needed for your deployment. For exam
az k8s-extension create --cluster-name $CLUSTER_NAME --resource-group $RESOURCE_GROUP --cluster-type connectedClusters --extension-type Microsoft.AzureKeyVaultSecretsProvider --name akvsecretsprovider --configuration-settings linux.kubeletRootDir=/path/to/kubelet secrets-store-csi-driver.linux.kubeletRootDir=/path/to/kubelet ``` - ## Uninstall the Azure Key Vault Secrets Provider extension To uninstall the extension, run the following command:
az k8s-extension list --cluster-type connectedClusters --cluster-name $CLUSTER_N
If the extension was successfully removed, you won't see the Azure Key Vault Secrets Provider extension listed in the output. If you don't have any other extensions installed on your cluster, you'll see an empty array.
+If you no longer need it, be sure to delete the Kubernetes secret associated with the service principal by running the following command:
+
+```bash
+kubectl delete secret secrets-store-creds
+```
+ ## Reconciliation and troubleshooting The Azure Key Vault Secrets Provider extension is self-healing. If somebody tries to change or delete an extension component that was deployed when the extension was installed, that component will be reconciled to its original state. The only exceptions are for Custom Resource Definitions (CRDs). If CRDs are deleted, they won't be reconciled. To restore deleted CRDs, use the `az k8s-extension create` command again with the existing extension instance name.
azure-functions Functions Bindings Azure Sql Trigger https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-azure-sql-trigger.md
def main(changes):
## Attributes
-The [C# library](functions-dotnet-class-library.md) uses the [SqlTrigger](https://github.com/Azure/azure-functions-sql-extension/blob/main/src/TriggerBinding/SqlTriggerAttribute.cs) attribute to declare the SQL trigger on the function, which has the following properties:
+The [C# library](functions-dotnet-class-library.md) uses the [SqlTrigger](https://github.com/Azure/azure-functions-sql-extension/blob/release/trigger/src/TriggerBinding/SqlTriggerAttribute.cs) attribute to declare the SQL trigger on the function, which has the following properties:
| Attribute property | Description |
| --- | --- |
azure-monitor Data Collection Text Log https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/data-collection-text-log.md
Invoke-AzRestMethod -Path "/subscriptions/{subscription}/resourcegroups/{resourc
```

Press return to execute the code. You should see a 200 response, and details about the table you just created will show up. To validate that the table was created, go to your workspace and select Tables on the left blade. You should see your table in the list.
+> [!NOTE]
+> The column names are case-sensitive. For example, `Rawdata` won't correctly collect the event data; it must be `RawData`.
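+
+As an illustrative sketch only, the schema fragment inside the table-creation request body takes the following shape; the table name is a placeholder, and the point is the exact casing of the column names:
+
+```bash
+# Illustrative request-body fragment: column names must match exactly ("TimeGenerated", "RawData")
+tableSchema='{
+  "properties": {
+    "schema": {
+      "name": "MyTable_CL",
+      "columns": [
+        { "name": "TimeGenerated", "type": "datetime" },
+        { "name": "RawData", "type": "string" }
+      ]
+    }
+  }
+}'
+```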
## Create data collection rule to collect text logs
azure-monitor Java Get Started Supplemental https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-get-started-supplemental.md
+
+ Title: Application Insights with containers
+description: This article shows you how to set up Application Insights
+ Last updated : 04/06/2023
+ms.devlang: java
++++
+# Get Started (Supplemental)
+
+The following sections show how to enable Java auto-instrumentation in specific technical environments.
+
+## Azure App Service
+
+For more information, see [Application monitoring for Azure App Service and Java](./azure-web-apps-java.md).
+
+## Azure Functions
+
+For more information, see [Monitoring Azure Functions with Azure Monitor Application Insights](./monitor-functions.md#distributed-tracing-for-java-applications-preview).
+
+## Containers
+
+### Docker entry point
+
+If you're using the *exec* form, add the parameter `-javaagent:"path/to/applicationinsights-agent-3.4.11.jar"` to the parameter list somewhere before the `"-jar"` parameter, for example:
+
+```
+ENTRYPOINT ["java", "-javaagent:path/to/applicationinsights-agent-3.4.11.jar", "-jar", "<myapp.jar>"]
+```
+
+If you're using the *shell* form, add the JVM arg `-javaagent:"path/to/applicationinsights-agent-3.4.11.jar"` somewhere before `-jar`, for example:
+
+```
+ENTRYPOINT java -javaagent:"path/to/applicationinsights-agent-3.4.11.jar" -jar <myapp.jar>
+```
++
+### Docker file
+
+A Dockerfile example:
+
+```
+FROM ...
+
+COPY target/*.jar app.jar
+
+COPY agent/applicationinsights-agent-3.4.11.jar applicationinsights-agent-3.4.11.jar
+
+COPY agent/applicationinsights.json applicationinsights.json
+
+ENV APPLICATIONINSIGHTS_CONNECTION_STRING="CONNECTION-STRING"
+
+ENTRYPOINT ["java", "-javaagent:applicationinsights-agent-3.4.11.jar", "-jar", "app.jar"]
+```
+
+### Third-party container images
+
+If you're using a third-party container image that you can't modify, mount the Application Insights Java agent jar into the container from outside. Set the environment variable for the container
+`JAVA_TOOL_OPTIONS=-javaagent:/path/to/applicationinsights-agent.jar`.
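+
+A minimal sketch with plain `docker run`; the host path, image name, and connection string below are placeholders:
+
+```bash
+# Mount the agent jar from the host (read-only) and enable it through JAVA_TOOL_OPTIONS
+docker run -d \
+  -v /opt/appinsights/applicationinsights-agent.jar:/agents/applicationinsights-agent.jar:ro \
+  -e JAVA_TOOL_OPTIONS="-javaagent:/agents/applicationinsights-agent.jar" \
+  -e APPLICATIONINSIGHTS_CONNECTION_STRING="<your-connection-string>" \
+  example.registry.io/third-party-app:latest
+```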
+
+## Spring Boot
+
+For more information, see [Using Azure Monitor Application Insights with Spring Boot](./java-spring-boot.md).
+
+## Java Application servers
+
+### Tomcat 8 (Linux)
+
+#### Tomcat installed via apt-get or yum
+
+If you installed Tomcat via `apt-get` or `yum`, you should have a file `/etc/tomcat8/tomcat8.conf`. Add this line to the end of that file:
+
+```
+JAVA_OPTS="$JAVA_OPTS -javaagent:path/to/applicationinsights-agent-3.4.11.jar"
+```
+
+#### Tomcat installed via download and unzip
+
+If you installed Tomcat via download and unzip from [https://tomcat.apache.org](https://tomcat.apache.org), you should have a file `<tomcat>/bin/catalina.sh`. Create a new file in the same directory named `<tomcat>/bin/setenv.sh` with the following content:
+
+```
+CATALINA_OPTS="$CATALINA_OPTS -javaagent:path/to/applicationinsights-agent-3.4.11.jar"
+```
+
+If the file `<tomcat>/bin/setenv.sh` already exists, modify that file and add `-javaagent:path/to/applicationinsights-agent-3.4.11.jar` to `CATALINA_OPTS`.
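+
+For example, the modified file might end up looking like this sketch; the memory flags shown are placeholders for whatever options the file already sets:
+
+```bash
+# Existing options are kept; the -javaagent flag is appended (memory settings are placeholders)
+CATALINA_OPTS="$CATALINA_OPTS -Xms512m -Xmx1024m -javaagent:path/to/applicationinsights-agent-3.4.11.jar"
+```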
+
+### Tomcat 8 (Windows)
+
+#### Run Tomcat from the command line
+
+Locate the file `<tomcat>/bin/catalina.bat`. Create a new file in the same directory named `<tomcat>/bin/setenv.bat` with the following content:
+
+```
+set CATALINA_OPTS=%CATALINA_OPTS% -javaagent:path/to/applicationinsights-agent-3.4.11.jar
+```
+
+Quotes aren't necessary, but if you want to include them, the proper placement is:
+
+```
+set "CATALINA_OPTS=%CATALINA_OPTS% -javaagent:path/to/applicationinsights-agent-3.4.11.jar"
+```
+
+If the file `<tomcat>/bin/setenv.bat` already exists, modify that file and add `-javaagent:path/to/applicationinsights-agent-3.4.11.jar` to `CATALINA_OPTS`.
+
+#### Run Tomcat as a Windows service
+
+Locate the file `<tomcat>/bin/tomcat8w.exe`. Run that executable and add `-javaagent:path/to/applicationinsights-agent-3.4.11.jar` to the `Java Options` under the `Java` tab.
+
+### JBoss EAP 7
+
+#### Standalone server
+
+Add `-javaagent:path/to/applicationinsights-agent-3.4.11.jar` to the existing `JAVA_OPTS` environment variable in the file `JBOSS_HOME/bin/standalone.conf` (Linux) or `JBOSS_HOME/bin/standalone.conf.bat` (Windows):
+
+```
+...
+JAVA_OPTS="-javaagent:path/to/applicationinsights-agent-3.4.11.jar -Xms1303m -Xmx1303m ..."
+...
+```
+
+#### Domain server
+
+Add `-javaagent:path/to/applicationinsights-agent-3.4.11.jar` to the existing `jvm-options` in `JBOSS_HOME/domain/configuration/host.xml`:
+
+```xml
+...
+<jvms>
+ <jvm name="default">
+ <heap size="64m" max-size="256m"/>
+ <jvm-options>
+ <option value="-server"/>
+ <!--Add Java agent jar file here-->
+ <option value="-javaagent:path/to/applicationinsights-agent-3.4.11.jar"/>
+ <option value="-XX:MetaspaceSize=96m"/>
+ <option value="-XX:MaxMetaspaceSize=256m"/>
+ </jvm-options>
+ </jvm>
+</jvms>
+...
+```
+
+If you're running multiple managed servers on a single host, you'll need to add `applicationinsights.agent.id` to the `system-properties` for each `server`:
+
+```xml
+...
+<servers>
+ <server name="server-one" group="main-server-group">
+ <!--Edit system properties for server-one-->
+ <system-properties>
+ <property name="applicationinsights.agent.id" value="..."/>
+ </system-properties>
+ </server>
+ <server name="server-two" group="main-server-group">
+ <socket-bindings port-offset="150"/>
+ <!--Edit system properties for server-two-->
+ <system-properties>
+ <property name="applicationinsights.agent.id" value="..."/>
+ </system-properties>
+ </server>
+</servers>
+...
+```
+
+The specified `applicationinsights.agent.id` value must be unique. It's used to create a subdirectory under the Application Insights directory. Each JVM process needs its own local Application Insights config and local Application Insights log file. Also, if reporting to the central collector, the `applicationinsights.properties` file is shared by the multiple managed servers, so the specified `applicationinsights.agent.id` is needed to override the `agent.id` setting in that shared file. The `applicationinsights.agent.rollup.id` can be similarly specified in the server's `system-properties` if you need to override the `agent.rollup.id` setting per managed server.
+
+### Jetty 9
+
+Add these lines to `start.ini`:
+
+```
+--exec
+-javaagent:path/to/applicationinsights-agent-3.4.11.jar
+```
+
+### Payara 5
+
+Add `-javaagent:path/to/applicationinsights-agent-3.4.11.jar` to the existing `jvm-options` in `glassfish/domains/domain1/config/domain.xml`:
+
+```xml
+...
+<java-config ...>
+ <!--Edit the JVM options here-->
+ <jvm-options>
+    -javaagent:path/to/applicationinsights-agent-3.4.11.jar
+ </jvm-options>
+ ...
+</java-config>
+...
+```
+
+### WebSphere 8
+
+1. Open Management Console.
+1. Go to **Servers** > **WebSphere application servers** > **Application servers**. Choose the appropriate application servers and select:
+
+ ```
+ Java and Process Management > Process definition > Java Virtual Machine
+ ```
+
+1. In `Generic JVM arguments`, add the following JVM argument:
+
+ ```
+ -javaagent:path/to/applicationinsights-agent-3.4.11.jar
+ ```
+
+1. Save and restart the application server.
+
+### OpenLiberty 18
+
+Create a new file `jvm.options` in the server directory (for example, `<openliberty>/usr/servers/defaultServer`), and add this line:
+
+```
+-javaagent:path/to/applicationinsights-agent-3.4.11.jar
+```
+
+### Others
+
+See your application server documentation on how to add JVM args.
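+
+If the server doesn't expose a dedicated JVM options file, a common fallback (a sketch; honored by most JVMs) is the `JAVA_TOOL_OPTIONS` environment variable:
+
+```bash
+# Most JVMs pick this variable up automatically at startup
+export JAVA_TOOL_OPTIONS="-javaagent:path/to/applicationinsights-agent-3.4.11.jar"
+```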
+
azure-monitor Javascript Feature Extensions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/javascript-feature-extensions.md
In this article, we cover the Click Analytics plug-in that automatically tracks
## Get started
-Users can set up the Click Analytics Autocollection plug-in via npm.
+Users can set up the Click Analytics Auto-Collection plug-in via snippet or npm.
-### npm setup
-
-Install the npm package:
-
-```bash
-npm install --save @microsoft/applicationinsights-clickanalytics-js @microsoft/applicationinsights-web
-```
-
-```js
-
-import { ApplicationInsights } from '@microsoft/applicationinsights-web';
-import { ClickAnalyticsPlugin } from '@microsoft/applicationinsights-clickanalytics-js';
-
-const clickPluginInstance = new ClickAnalyticsPlugin();
-// Click Analytics configuration
-const clickPluginConfig = {
- autoCapture: true
-};
-// Application Insights Configuration
-const configObj = {
- connectionString: "YOUR CONNECTION STRING",
- extensions: [clickPluginInstance],
- extensionConfig: {
- [clickPluginInstance.identifier]: clickPluginConfig
- },
-};
-
-const appInsights = new ApplicationInsights({ config: configObj });
-appInsights.loadAppInsights();
-```
-
-## Snippet setup
+### Snippet setup
Ignore this setup if you use the npm setup.
</script> ```
+### npm setup
+
+Install the npm package:
+
+```bash
+npm install --save @microsoft/applicationinsights-clickanalytics-js @microsoft/applicationinsights-web
+```
+
+```js
+
+import { ApplicationInsights } from '@microsoft/applicationinsights-web';
+import { ClickAnalyticsPlugin } from '@microsoft/applicationinsights-clickanalytics-js';
+
+const clickPluginInstance = new ClickAnalyticsPlugin();
+// Click Analytics configuration
+const clickPluginConfig = {
+ autoCapture: true
+};
+// Application Insights Configuration
+const configObj = {
+ connectionString: "YOUR CONNECTION STRING",
+ extensions: [clickPluginInstance],
+ extensionConfig: {
+ [clickPluginInstance.identifier]: clickPluginConfig
+ },
+};
+
+const appInsights = new ApplicationInsights({ config: configObj });
+appInsights.loadAppInsights();
+```
+ ## Use the plug-in 1. Telemetry data generated from the click events are stored as `customEvents` in the Application Insights section of the Azure portal.
The following key properties are captured by default when the plug-in is enabled
| | |--| | timeToAction | Time taken in milliseconds for the user to click the element since the initial page load. | 87407 |
-## Configuration
+## Advanced configuration
| Name | Type | Default | Description |
| --- | --- | --- | --- |
| autoCapture | Boolean | True | Automatic capture configuration. |
| callback | [IValueCallback](#ivaluecallback) | Null | Callbacks configuration. |
-| pageTags | String | Null | Page tags. |
+| pageTags | Object | Null | Page tags. |
| dataTags | [ICustomDataTags](#icustomdatatags) | Null | Custom Data Tags provided to override default tags used to capture click data. |
| urlCollectHash | Boolean | False | Enables the logging of values after a "#" character of the URL. |
| urlCollectQuery | Boolean | False | Enables the logging of the query string of the URL. |
var appInsights = new Microsoft.ApplicationInsights.ApplicationInsights({
appInsights.loadAppInsights(); ```
-## Enable correlation
-
-Correlation generates and sends data that enables distributed tracing and powers the [application map](../app/app-map.md), [end-to-end transaction view](../app/app-map.md#go-to-details), and other diagnostic tools.
-
-JavaScript correlation is turned off by default to minimize the telemetry we send by default. To enable correlation, see the [JavaScript client-side correlation documentation](./javascript.md#enable-distributed-tracing).
- ## Sample app [Simple web app with the Click Analytics Autocollection Plug-in enabled](https://go.microsoft.com/fwlink/?linkid=2152871)
azure-monitor Opentelemetry Enable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/opentelemetry-enable.md
Java auto-instrumentation is enabled through configuration changes; no code chan
Point the JVM to the jar file by adding `-javaagent:"path/to/applicationinsights-agent-3.4.11.jar"` to your application's JVM args. > [!TIP]
-> For help with configuring your application's JVM args, see [Tips for updating your JVM args](./java-standalone-arguments.md).
-
-If you develop a Spring Boot application, you can optionally replace the JVM argument by a programmatic configuration. For more information, see [Using Azure Monitor Application Insights with Spring Boot](./java-spring-boot.md).
+> For scenario-specific guidance, see [Get Started (Supplemental)](./java-get-started-supplemental.md).
+
+> [!TIP]
+> If you develop a Spring Boot application, you can optionally replace the JVM argument by a programmatic configuration. For more information, see [Using Azure Monitor Application Insights with Spring Boot](./java-spring-boot.md).
##### [Node.js](#tab/nodejs)
azure-monitor Container Insights Reports https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-reports.md
To create a custom workbook based on any of these workbooks, select the **View W
- IPs assigned to a pod. >[!NOTE]
-> By default 16 IP's are allocated from subnet to each node. This cannot be modified to be less than 16. For instructions on how to enable subnet IP usage metrics, see [Monitor IP Subnet Usage](../../aks/configure-azure-cni.md#monitor-ip-subnet-usage).
+> By default, 16 IPs are allocated from the subnet to each node. This can't be modified to be less than 16. For instructions on how to enable subnet IP usage metrics, see [Monitor IP Subnet Usage](../../aks/configure-azure-cni-dynamic-ip-allocation.md#monitor-ip-subnet-usage).
## Resource Monitoring workbooks
azure-monitor App Expression https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/app-expression.md
description: The app expression is used in an Azure Monitor log query to retriev
Previously updated : 08/06/2022 Last updated : 04/20/2023
The `app` expression is used in an Azure Monitor query to retrieve data from a s
| Identifier | Description | Example |
|:---|:---|:---|
-| Resource Name | Human readable name of the app (Also known as "component name") | app("fabrikamapp") |
-| Qualified Name | Full name of the app in the form: "subscriptionName/resourceGroup/componentName" | app('AI-Prototype/Fabrikam/fabrikamapp') |
-| ID | GUID of the app | app("988ba129-363e-4415-8fe7-8cbab5447518") |
-| Azure Resource ID | Identifier for the Azure resource |app("/subscriptions/7293b69-db12-44fc-9a66-9c2005c3051d/resourcegroups/Fabrikam/providers/microsoft.insights/components/fabrikamapp") |
+| ID | GUID of the app | app("00000000-0000-0000-0000-000000000000") |
+| Azure Resource ID | Identifier for the Azure resource |app("/subscriptions/00000000-0000-0000-0000-000000000000/resourcegroups/Fabrikam/providers/microsoft.insights/components/fabrikamapp") |
## Notes * You must have read access to the application.
-* Identifying an application by its name assumes that it is unique across all accessible subscriptions. If you have multiple applications with the specified name, the query will fail because of the ambiguity. In this case you must use one of the other identifiers.
+* Identifying an application by its ID or Azure Resource ID is strongly recommended because these identifiers are unique, remove ambiguity, and make queries more performant.
* Use the related expression [workspace](../logs/workspace-expression.md) to query across Log Analytics workspaces. ## Examples ```Kusto
-app("fabrikamapp").requests | count
+app("00000000-0000-0000-0000-000000000000").requests | count
``` ```Kusto
-app("AI-Prototype/Fabrikam/fabrikamapp").requests | count
-```
-```Kusto
-app("b438b4f6-912a-46d5-9cb1-b44069212ab4").requests | count
-```
-```Kusto
-app("/subscriptions/7293b69-db12-44fc-9a66-9c2005c3051d/resourcegroups/Fabrikam/providers/microsoft.insights/components/fabrikamapp").requests | count
+app("/subscriptions/00000000-0000-0000-0000-000000000000/resourcegroups/Fabrikam/providers/microsoft.insights/components/fabrikamapp").requests | count
``` ```Kusto union
-(workspace("myworkspace").Heartbeat | where Computer contains "Con"),
-(app("myapplication").requests | where cloud_RoleInstance contains "Con")
+(workspace("00000000-0000-0000-0000-000000000000").Heartbeat | where Computer == "myComputer"),
+(app("00000000-0000-0000-0000-000000000000").requests | where cloud_RoleInstance == "myRoleInstance")
| count ``` ```Kusto union
-(workspace("myworkspace").Heartbeat), (app("myapplication").requests)
-| where TimeGenerated between(todatetime("2018-02-08 15:00:00") .. todatetime("2018-12-08 15:05:00"))
+(workspace("00000000-0000-0000-0000-000000000000").Heartbeat), (app("00000000-0000-0000-0000-000000000000").requests)
+| where TimeGenerated between(todatetime("2023-03-08 15:00:00") .. todatetime("2023-04-08 15:05:00"))
``` ## Next steps
azure-monitor Basic Logs Configure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/basic-logs-configure.md
Previously updated : 11/09/2022 Last updated : 04/17/2023 # Set a table's log data plan to Basic or Analytics
Configure a table for Basic logs if:
| Container Insights | [ContainerLogV2](/azure/azure-monitor/reference/tables/containerlogv2) |
| Communication Services | [ACSCallAutomationIncomingOperations](/azure/azure-monitor/reference/tables/ACSCallAutomationIncomingOperations)<br>[ACSCallRecordingSummary](/azure/azure-monitor/reference/tables/acscallrecordingsummary)<br>[ACSRoomsIncomingOperations](/azure/azure-monitor/reference/tables/acsroomsincomingoperations) |
| Confidential Ledgers | [CCFApplicationLogs](/azure/azure-monitor/reference/tables/CCFApplicationLogs) |
+ | Dedicated SQL Pool | [SynapseSqlPoolSqlRequests](/azure/azure-monitor/reference/tables/synapsesqlpoolsqlrequests)<br>[SynapseSqlPoolRequestSteps](/azure/azure-monitor/reference/tables/synapsesqlpoolrequeststeps)<br>[SynapseSqlPoolExecRequests](/azure/azure-monitor/reference/tables/synapsesqlpoolexecrequests)<br>[SynapseSqlPoolDmsWorkers](/azure/azure-monitor/reference/tables/synapsesqlpooldmsworkers)<br>[SynapseSqlPoolWaits](/azure/azure-monitor/reference/tables/synapsesqlpoolwaits) |
| Dev Center | [DevCenterDiagnosticLogs](/azure/azure-monitor/reference/tables/DevCenterDiagnosticLogs) |
| Firewalls | [AZFWFlowTrace](/azure/azure-monitor/reference/tables/AZFWFlowTrace) |
| Health Data | [AHDSMedTechDiagnosticLogs](/azure/azure-monitor/reference/tables/AHDSMedTechDiagnosticLogs) |
azure-monitor Cross Workspace Query https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/cross-workspace-query.md
You can identify a workspace in one of several ways:
### Identify an application The following examples return a summarized count of requests made against an app named *fabrikamapp* in Application Insights.
-You can identify an application in Application Insights with the `app(Identifier)` expression. The `Identifier` argument specifies the app by using one of the following names or IDs:
+You can identify an application in Application Insights with the `app(Identifier)` expression. The `Identifier` argument specifies the app by using one of the following IDs:
* **ID**: This ID is the app GUID of the application.
azure-monitor Query Optimization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/query-optimization.md
Optimized queries will:
- Run faster and reduce overall duration of the query execution. - Have smaller chance of being throttled or rejected.
-Pay particular attention to queries that are used for recurrent and bursty usage, such as dashboards, alerts, Azure Logic Apps, and Power BI. The impact of an ineffective query in these cases is substantial.
+Pay particular attention to queries that are used for recurrent and simultaneous usage, such as dashboards, alerts, Azure Logic Apps, and Power BI. The impact of an ineffective query in these cases is substantial.
Here's a detailed video walkthrough on optimizing queries.
A query that spans more than five workspaces is considered a query that consumes
> [!IMPORTANT] > - In some multi-workspace scenarios, the CPU and data measurements won't be accurate and will represent the measurement of only a few of the workspaces.
-> - Cross workspace queries having an explicit identifier: workspace ID, or workspace Resource Manager resource ID, consume less resources and are more performant. See [Create a log query across multiple workspaces](./cross-workspace-query.md#identify-workspace-resources)
+> - Cross workspace queries having an explicit identifier: workspace ID, or workspace Azure Resource ID, consume less resources and are more performant. See [Create a log query across multiple workspaces](./cross-workspace-query.md#identify-workspace-resources)
## Parallelism Azure Monitor Logs uses large clusters of Azure Data Explorer to run queries. These clusters vary in scale and potentially get up to dozens of compute nodes. The system automatically scales the clusters according to workspace placement logic and capacity.
azure-monitor Save Query https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/save-query.md
Title: Save a query in Azure Monitor Log Analytics (preview)
+ Title: Save a query in Azure Monitor Log Analytics
description: This article describes how to save a query in Log Analytics.
Last updated 06/22/2022
-# Save a query in Azure Monitor Log Analytics (preview)
+# Save a query in Azure Monitor Log Analytics
[Log queries](log-query-overview.md) are requests in Azure Monitor that you can use to process and retrieve data in a Log Analytics workspace. Saving a log query allows you to: - Use the query in all Log Analytics contexts, including workspace and resource centric.
azure-monitor Workspace Expression https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/workspace-expression.md
description: The workspace expression is used in an Azure Monitor log query to r
Previously updated : 08/06/2022 Last updated : 04/20/2023
The `workspace` expression is used in an Azure Monitor query to retrieve data fr
| Identifier | Description | Example |
|:---|:---|:---|
-| Resource Name | Human readable name of the workspace (also known as "component name") | workspace("contosoretail") |
-| Qualified Name | Full name of the workspace in the form: "subscriptionName/resourceGroup/componentName" | workspace('Contoso/ContosoResource/ContosoWorkspace') |
-| ID | GUID of the workspace | workspace("b438b3f6-912a-46d5-9db1-b42069242ab4") |
-| Azure Resource ID | Identifier for the Azure resource | workspace("/subscriptions/e4227-645-44e-9c67-3b84b5982/resourcegroups/ContosoAzureHQ/providers/Microsoft.OperationalInsights/workspaces/contosoretail") |
+| ID | GUID of the workspace | workspace("00000000-0000-0000-0000-000000000000") |
+| Azure Resource ID | Identifier for the Azure resource | workspace("/subscriptions/00000000-0000-0000-0000-000000000000/resourcegroups/Contoso/providers/Microsoft.OperationalInsights/workspaces/contosoretail") |
## Notes * You must have read access to the workspace.
+* Identifying a workspace by its ID or Azure Resource ID is strongly recommended because these identifiers are unique, remove ambiguity, and make queries more performant.
* A related expression is `app` that allows you to query across Application Insights applications. ## Examples ```Kusto
-workspace("contosoretail").Update | count
+workspace("00000000-0000-0000-0000-000000000000").Update | count
``` ```Kusto
-workspace("b438b4f6-912a-46d5-9cb1-b44069212ab4").Update | count
-```
-```Kusto
-workspace("/subscriptions/e427267-5645-4c4e-9c67-3b84b59a6982/resourcegroups/ContosoAzureHQ/providers/Microsoft.OperationalInsights/workspaces/contosoretail").Event | count
+workspace("/subscriptions/00000000-0000-0000-0000-000000000000/resourcegroups/Contoso/providers/Microsoft.OperationalInsights/workspaces/contosoretail").Event | count
``` ```Kusto union
-(workspace("myworkspace").Heartbeat | where Computer contains "Con"),
-(app("myapplication").requests | where cloud_RoleInstance contains "Con")
+(workspace("00000000-0000-0000-0000-000000000000").Heartbeat | where Computer == "myComputer"),
+(app("00000000-0000-0000-0000-000000000000").requests | where cloud_RoleInstance == "myRoleInstance")
| count ``` ```Kusto union
-(workspace("myworkspace").Heartbeat), (app("myapplication").requests)
-| where TimeGenerated between(todatetime("2018-02-08 15:00:00") .. todatetime("2018-12-08 15:05:00"))
+(workspace("00000000-0000-0000-0000-000000000000").Heartbeat), (app("00000000-0000-0000-0000-000000000000").requests) | where TimeGenerated between(todatetime("2023-03-08 15:00:00") .. todatetime("2023-04-08 15:05:00"))
``` ## Next steps
azure-monitor Profiler Bring Your Own Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/profiler/profiler-bring-your-own-storage.md
To configure BYOS for code-level diagnostics (Profiler/Debugger), there are thre
For general Profiler troubleshooting, refer to the [Profiler Troubleshoot documentation](profiler-troubleshooting.md).
-For general Snapshot Debugger troubleshooting, refer to the [Snapshot Debugger Troubleshoot documentation](/troubleshoot/azure/azure-monitor/app-insights/snapshot-debugger-troubleshoot.md).
+For general Snapshot Debugger troubleshooting, refer to the [Snapshot Debugger Troubleshoot documentation](/troubleshoot/azure/azure-monitor/app-insights/snapshot-debugger-troubleshoot).
## Frequently asked questions
azure-netapp-files Create Active Directory Connections https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/create-active-directory-connections.md
Several features of Azure NetApp Files require that you have an Active Directory
For more information, refer to [Network security: Configure encryption types allowed for Kerberos](/windows/security/threat-protection/security-policy-settings/network-security-configure-encryption-types-allowed-for-kerberos) or [Windows Configurations for Kerberos Supported Encryption Types](/archive/blogs/openspecification/windows-configurations-for-kerberos-supported-encryption-type)
+* LDAP queries take effect only in the domain specified in the Active Directory connections (the **AD DNS Domain Name** field). This behavior applies to NFS, SMB, and dual-protocol volumes.
+ ## Create an Active Directory connection 1. From your NetApp account, select **Active Directory connections**, then select **Join**.
azure-netapp-files Faq Data Migration Protection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/faq-data-migration-protection.md
You can also use a wide array of free tools to copy data. For NFS, you can use w
The requirements for replicating an Azure NetApp Files volume to another Azure region are as follows: - Ensure Azure NetApp Files is available in the target Azure region.-- Validate network connectivity between VNets in each region. Currently, global peering between VNets is not supported. You can establish connectivity between VNets by linking with an ExpressRoute circuit or using a S2S VPN connection.
+- Validate network connectivity between the source and the Azure NetApp Files target volume IP address. Data transfer between on premises and Azure NetApp Files volumes, or across Azure regions, is supported via [site-to-site VPN and ExpressRoute](azure-netapp-files-network-topologies.md#hybrid-environments), [Global VNet peering](azure-netapp-files-network-topologies.md#global-or-cross-region-vnet-peering), or [Azure Virtual WAN connections](configure-virtual-wan.md).
- Create the target Azure NetApp Files volume. - Transfer the source data to the target volume by using your preferred file copy tool.
azure-resource-manager Bicep Functions Scope https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/bicep-functions-scope.md
A common use of the resourceGroup function is to create resources in the same lo
param location string = resourceGroup().location ```
-You can also use the resourceGroup function to apply tags from the resource group to a resource. For more information, see [Apply tags from resource group](../management/tag-resources.md#apply-tags-from-resource-group).
+You can also use the resourceGroup function to apply tags from the resource group to a resource. For more information, see [Apply tags from resource group](../management/tag-resources-bicep.md#apply-tags-from-resource-group).
## subscription
azure-resource-manager Resource Declaration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/resource-declaration.md
az provider show \
## Tags
-You can apply tags to a resource during deployment. Tags help you logically organize your deployed resources. For examples of the different ways you can specify the tags, see [ARM template tags](../management/tag-resources.md#arm-templates).
+You can apply tags to a resource during deployment. Tags help you logically organize your deployed resources. For examples of the different ways you can specify the tags, see [ARM template tags](../management/tag-resources-bicep.md).
## Managed identities for Azure resources
azure-resource-manager Deploy Service Catalog Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/managed-applications/deploy-service-catalog-quickstart.md
mrgpath="/subscriptions/$subid/resourceGroups/$mrgname"
The `mrgprefix` and `mrgtimestamp` variables are concatenated to create a managed resource group name like _mrg-sampleManagedApplication-20230310100148_ that's stored in the `mrgname` variable. The name's format, `mrg-{definitionName}-{dateTime}`, is the same as the portal's default value. The `mrgname` and `subid` variables are concatenated to build the `mrgpath` value that creates the managed resource group during the deployment.
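A sketch of how those variables fit together in Bash; the prefix value and timestamp format are assumed for illustration:

```bash
# Assumed construction of the managed resource group name and path described above
mrgprefix="mrg-sampleManagedApplication"
mrgtimestamp=$(date +%Y%m%d%H%M%S)
mrgname="${mrgprefix}-${mrgtimestamp}"   # for example, mrg-sampleManagedApplication-20230310100148
mrgpath="/subscriptions/$subid/resourceGroups/$mrgname"
```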
-You need to provide several parameters to the deployment command for the managed application. You can use a JSON formatted string or create a JSON file. In this example, we use a JSON formatted string. The PowerShell escape character for the quote marks is the backslash (`\`) character. The backslash is also used for line continuation so that commands can use multiple lines.
+You need to provide several parameters to the deployment command for the managed application. You can use a JSON formatted string or create a JSON file. In this example, we use a JSON formatted string. In Bash, the escape character for the quote marks is the backslash (`\`) character. The backslash is also used for line continuation so that commands can use multiple lines.
The JSON formatted string's syntax is as follows:
The JSON formatted string's syntax is as follows:
"{ \"parameterName\": {\"value\":\"parameterValue\"}, \"parameterName\": {\"value\":\"parameterValue\"} }" ```
-For readability, the completed JSON string uses the backtick for line continuation. The values are stored in the `params` variable that's used in the deployment command. The parameters in the JSON string are required to deploy the managed resources.
+For readability, the completed JSON string uses the backslash for line continuation. The values are stored in the `params` variable that's used in the deployment command. The parameters in the JSON string are required to deploy the managed resources.
```azurecli params="{ \"appServicePlanName\": {\"value\":\"demoAppServicePlan\"}, \
azure-resource-manager Delete Resource Group https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/delete-resource-group.md
Title: Delete resource group and resources
description: Describes how to delete resource groups and resources. It describes how Azure Resource Manager orders the deletion of resources when deleting a resource group. It describes the response codes and how Resource Manager handles them to determine if the deletion succeeded. Last updated 04/10/2023-+ # Azure Resource Manager resource group and resource deletion
azure-resource-manager Lock Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/lock-resources.md
Title: Protect your Azure resources with a lock
description: You can safeguard Azure resources from updates or deletions by locking all users and roles. Last updated 04/06/2023-+ # Lock your resources to protect your infrastructure
azure-resource-manager Manage Resource Groups Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/manage-resource-groups-cli.md
For more information, see [Lock resources with Azure Resource Manager](lock-reso
## Tag resource groups
-You can apply tags to resource groups and resources to logically organize your assets. For information, see [Using tags to organize your Azure resources](tag-resources.md#azure-cli).
+You can apply tags to resource groups and resources to logically organize your assets. For information, see [Using tags to organize your Azure resources](tag-resources-cli.md).
## Export resource groups to templates
azure-resource-manager Manage Resource Groups Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/manage-resource-groups-portal.md
For more information, see [Lock resources to prevent unexpected changes](lock-re
## Tag resource groups
-You can apply tags to resource groups and resources to logically organize your assets. For information, see [Using tags to organize your Azure resources](tag-resources.md#portal).
+You can apply tags to resource groups and resources to logically organize your assets. For information, see [Using tags to organize your Azure resources](tag-resources-portal.md).
## Export resource groups to templates
azure-resource-manager Manage Resource Groups Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/manage-resource-groups-powershell.md
For more information, see [Lock resources with Azure Resource Manager](lock-reso
## Tag resource groups
-You can apply tags to resource groups and resources to logically organize your assets. For information, see [Using tags to organize your Azure resources](tag-resources.md#powershell).
+You can apply tags to resource groups and resources to logically organize your assets. For information, see [Using tags to organize your Azure resources](tag-resources-powershell.md).
## Export resource groups to templates
azure-resource-manager Manage Resource Groups Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/manage-resource-groups-python.md
Title: Manage resource groups - Python
description: Use Python to manage your resource groups through Azure Resource Manager. Shows how to create, list, and delete resource groups. -+ Last updated 02/27/2023
azure-resource-manager Manage Resources Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/manage-resources-cli.md
For more information, see [Lock resources with Azure Resource Manager](lock-reso
## Tag resources
-Tagging helps organizing your resource group and resources logically. For information, see [Using tags to organize your Azure resources](tag-resources.md#azure-cli).
+Tagging helps you organize your resource group and resources logically. For information, see [Using tags to organize your Azure resources](tag-resources-cli.md).
## Manage access to resources
azure-resource-manager Manage Resources Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/manage-resources-portal.md
Tagging helps organizing your resource group and resources logically.
![tag azure resource](./media/manage-resources-portal/manage-azure-resources-portal-tag-resource.png) 3. Specify the tag properties, and then select **Save**.
-For information, see [Using tags to organize your Azure resources](tag-resources.md#portal).
+For information, see [Using tags to organize your Azure resources](tag-resources-portal.md).
## Monitor resources
azure-resource-manager Manage Resources Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/manage-resources-powershell.md
For more information, see [Lock resources with Azure Resource Manager](lock-reso
## Tag resources
-Tagging helps organizing your resource group and resources logically. For information, see [Using tags to organize your Azure resources](tag-resources.md#powershell).
+Tagging helps you organize your resource group and resources logically. For information, see [Using tags to organize your Azure resources](tag-resources-powershell.md).
## Manage access to resources
azure-resource-manager Tag Resources Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/tag-resources-bicep.md
+
+ Title: Tag resources, resource groups, and subscriptions with Bicep
+description: Shows how to use Bicep to apply tags to Azure resources.
+ Last updated : 04/19/2023++
+# Apply tags with Bicep
+
+This article describes how to use Bicep to tag resources, resource groups, and subscriptions during deployment. For tag recommendations and limitations, see [Use tags to organize your Azure resources and management hierarchy](tag-resources.md).
+
+> [!NOTE]
+> The tags you apply through a Bicep file overwrite any existing tags.
+
+## Apply values
+
+The following example deploys a storage account with three tags. Two of the tags (`Dept` and `Environment`) are set to literal values. One tag (`LastDeployed`) is set to a parameter that defaults to the current date.
+
+```Bicep
+param location string = resourceGroup().location
+param utcShort string = utcNow('d')
+
+resource stgAccount 'Microsoft.Storage/storageAccounts@2021-04-01' = {
+ name: 'storage${uniqueString(resourceGroup().id)}'
+ location: location
+ sku: {
+ name: 'Standard_LRS'
+ }
+ kind: 'Storage'
+ tags: {
+ Dept: 'Finance'
+ Environment: 'Production'
+ LastDeployed: utcShort
+ }
+}
+```
+
+## Apply an object
+
+You can define an object parameter that stores several tags and apply that object to the tag element. This approach provides more flexibility than the previous example because the object can have different properties. Each property in the object becomes a separate tag for the resource. The following example has a parameter named `tagValues` that's applied to the tag element.
+
+```Bicep
+param location string = resourceGroup().location
+param tagValues object = {
+ Dept: 'Finance'
+ Environment: 'Production'
+}
+
+resource stgAccount 'Microsoft.Storage/storageAccounts@2021-04-01' = {
+ name: 'storage${uniqueString(resourceGroup().id)}'
+ location: location
+ sku: {
+ name: 'Standard_LRS'
+ }
+ kind: 'Storage'
+ tags: tagValues
+}
+```
+
+## Apply a JSON string
+
+To store many values in a single tag, apply a JSON string that represents the values. The entire JSON string is stored as one tag that can't exceed 256 characters. The following example has a single tag named `CostCenter` that contains several values from a JSON string:
+
+```Bicep
+param location string = resourceGroup().location
+
+resource stgAccount 'Microsoft.Storage/storageAccounts@2021-04-01' = {
+ name: 'storage${uniqueString(resourceGroup().id)}'
+ location: location
+ sku: {
+ name: 'Standard_LRS'
+ }
+ kind: 'Storage'
+ tags: {
+ CostCenter: '{"Dept":"Finance","Environment":"Production"}'
+ }
+}
+```
+
+## Apply tags from resource group
+
+To apply tags from a resource group to a resource, use the [resourceGroup()](../templates/template-functions-resource.md#resourcegroup) function. When you get the tag value, use the `tags[tag-name]` syntax instead of the `tags.tag-name` syntax, because some characters aren't parsed correctly in the dot notation.
+
+```Bicep
+param location string = resourceGroup().location
+
+resource stgAccount 'Microsoft.Storage/storageAccounts@2021-04-01' = {
+ name: 'storage${uniqueString(resourceGroup().id)}'
+ location: location
+ sku: {
+ name: 'Standard_LRS'
+ }
+ kind: 'Storage'
+ tags: {
+ Dept: resourceGroup().tags['Dept']
+ Environment: resourceGroup().tags['Environment']
+ }
+}
+```
+
+## Apply tags to resource groups or subscriptions
+
+You can add tags to a resource group or subscription by deploying the `Microsoft.Resources/tags` resource type. The tags are applied to the target resource group or subscription for the deployment. Each time you deploy the template, you replace any previous tags.
+
+```Bicep
+param tagName string = 'TeamName'
+param tagValue string = 'AppTeam1'
+
+resource applyTags 'Microsoft.Resources/tags@2021-04-01' = {
+ name: 'default'
+ properties: {
+ tags: {
+ '${tagName}': tagValue
+ }
+ }
+}
+```
+
+The following Bicep adds the tags from an object to the subscription it's deployed to. For more information about subscription deployments, see [Create resource groups and resources at the subscription level](../bicep/deploy-to-subscription.md).
+
+```Bicep
+targetScope = 'subscription'
+
+param tagObject object = {
+ TeamName: 'AppTeam1'
+ Dept: 'Finance'
+ Environment: 'Production'
+}
+
+resource applyTags 'Microsoft.Resources/tags@2021-04-01' = {
+ name: 'default'
+ properties: {
+ tags: tagObject
+ }
+}
+```
+
+## Next steps
+
+* Not all resource types support tags. To determine if you can apply a tag to a resource type, see [Tag support for Azure resources](tag-support.md).
+* For recommendations on how to implement a tagging strategy, see [Resource naming and tagging decision guide](/azure/cloud-adoption-framework/decision-guides/resource-tagging/?toc=/azure/azure-resource-manager/management/toc.json).
+* For tag recommendations and limitations, see [Use tags to organize your Azure resources and management hierarchy](tag-resources.md).
azure-resource-manager Tag Resources Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/tag-resources-cli.md
+
+ Title: Tag resources, resource groups, and subscriptions with Azure CLI
+description: Shows how to use Azure CLI to apply tags to Azure resources.
+ Last updated : 04/19/2023++
+# Apply tags with Azure CLI
+
+This article describes how to use Azure CLI to tag resources, resource groups, and subscriptions. For tag recommendations and limitations, see [Use tags to organize your Azure resources and management hierarchy](tag-resources.md).
+
+## Apply tags
+
+Azure CLI offers two commands to apply tags: [az tag create](/cli/azure/tag#az-tag-create) and [az tag update](/cli/azure/tag#az-tag-update). You need to have the Azure CLI 2.10.0 version or later. You can check your version with `az version`. To update or install it, see [Install the Azure CLI](/cli/azure/install-azure-cli).
+
+The `az tag create` command replaces all tags on the resource, resource group, or subscription. When you call the command, pass the resource ID of the entity you want to tag.
+
+The following example applies a set of tags to a storage account:
+
+```azurecli-interactive
+resource=$(az resource show -g demoGroup -n demostorage --resource-type Microsoft.Storage/storageAccounts --query "id" --output tsv)
+az tag create --resource-id $resource --tags Dept=Finance Status=Normal
+```
+
+When the command completes, notice that the resource has two tags.
+
+```output
+"properties": {
+ "tags": {
+ "Dept": "Finance",
+ "Status": "Normal"
+ }
+},
+```
+
+If you run the command again, but this time with different tags, notice that the earlier tags disappear.
+
+```azurecli-interactive
+az tag create --resource-id $resource --tags Team=Compliance Environment=Production
+```
+
+```output
+"properties": {
+ "tags": {
+ "Environment": "Production",
+ "Team": "Compliance"
+ }
+},
+```
+
+To add tags to a resource that already has tags, use `az tag update`. Set the `--operation` parameter to `Merge`.
+
+```azurecli-interactive
+az tag update --resource-id $resource --operation Merge --tags Dept=Finance Status=Normal
+```
+
+Notice that the existing tags grow with the addition of the two new tags.
+
+```output
+"properties": {
+ "tags": {
+ "Dept": "Finance",
+ "Environment": "Production",
+ "Status": "Normal",
+ "Team": "Compliance"
+ }
+},
+```
+
+Each tag name can have only one value. If you provide a new value for a tag, the new tag replaces the old value, even if you use the merge operation. The following example changes the `Status` tag from _Normal_ to _Green_.
+
+```azurecli-interactive
+az tag update --resource-id $resource --operation Merge --tags Status=Green
+```
+
+```output
+"properties": {
+ "tags": {
+ "Dept": "Finance",
+ "Environment": "Production",
+ "Status": "Green",
+ "Team": "Compliance"
+ }
+},
+```
+
+When you set the `--operation` parameter to `Replace`, the new set of tags replaces the existing tags.
+
+```azurecli-interactive
+az tag update --resource-id $resource --operation Replace --tags Project=ECommerce CostCenter=00123 Team=Web
+```
+
+Only the new tags remain on the resource.
+
+```output
+"properties": {
+ "tags": {
+ "CostCenter": "00123",
+ "Project": "ECommerce",
+ "Team": "Web"
+ }
+},
+```
+
+The same commands also work with resource groups or subscriptions. Pass in the identifier of the resource group or subscription you want to tag.
+
+To add a new set of tags to a resource group, use:
+
+```azurecli-interactive
+group=$(az group show -n demoGroup --query id --output tsv)
+az tag create --resource-id $group --tags Dept=Finance Status=Normal
+```
+
+To update the tags for a resource group, use:
+
+```azurecli-interactive
+az tag update --resource-id $group --operation Merge --tags CostCenter=00123 Environment=Production
+```
+
+To add a new set of tags to a subscription, use:
+
+```azurecli-interactive
+sub=$(az account show --subscription "Demo Subscription" --query id --output tsv)
+az tag create --resource-id /subscriptions/$sub --tags CostCenter=00123 Environment=Dev
+```
+
+To update the tags for a subscription, use:
+
+```azurecli-interactive
+az tag update --resource-id /subscriptions/$sub --operation Merge --tags Team="Web Apps"
+```
+
+## List tags
+
+To get the tags for a resource, resource group, or subscription, use the [az tag list](/cli/azure/tag#az-tag-list) command and pass the resource ID of the entity.
+
+To see the tags for a resource, use:
+
+```azurecli-interactive
+resource=$(az resource show -g demoGroup -n demostorage --resource-type Microsoft.Storage/storageAccounts --query "id" --output tsv)
+az tag list --resource-id $resource
+```
+
+To see the tags for a resource group, use:
+
+```azurecli-interactive
+group=$(az group show -n demoGroup --query id --output tsv)
+az tag list --resource-id $group
+```
+
+To see the tags for a subscription, use:
+
+```azurecli-interactive
+sub=$(az account show --subscription "Demo Subscription" --query id --output tsv)
+az tag list --resource-id /subscriptions/$sub
+```
+
+## List by tag
+
+To get resources that have a specific tag name and value, use:
+
+```azurecli-interactive
+az resource list --tag CostCenter=00123 --query [].name
+```
+
+To get resources that have a specific tag name with any tag value, use:
+
+```azurecli-interactive
+az resource list --tag Team --query [].name
+```
+
+To get resource groups that have a specific tag name and value, use:
+
+```azurecli-interactive
+az group list --tag Dept=Finance
+```
+
+## Remove tags
+
+To remove specific tags, use `az tag update` and set `--operation` to `Delete`. Pass the resource ID of the entity, and specify the tags you want to delete.
+
+```azurecli-interactive
+az tag update --resource-id $resource --operation Delete --tags Project=ECommerce Team=Web
+```
+
+You've removed the specified tags.
+
+```output
+"properties": {
+ "tags": {
+ "CostCenter": "00123"
+ }
+},
+```
+
+To remove all tags, use the [az tag delete](/cli/azure/tag#az-tag-delete) command.
+
+```azurecli-interactive
+az tag delete --resource-id $resource
+```
+
+## Handling spaces
+
+If your tag names or values include spaces, enclose them in quotation marks.
+
+```azurecli-interactive
+az tag update --resource-id $group --operation Merge --tags "Cost Center"=Finance-1222 Location="West US"
+```
+
+## Next steps
+
+* Not all resource types support tags. To determine if you can apply a tag to a resource type, see [Tag support for Azure resources](tag-support.md).
+* For recommendations on how to implement a tagging strategy, see [Resource naming and tagging decision guide](/azure/cloud-adoption-framework/decision-guides/resource-tagging/?toc=/azure/azure-resource-manager/management/toc.json).
+* For tag recommendations and limitations, see [Use tags to organize your Azure resources and management hierarchy](tag-resources.md).
azure-resource-manager Tag Resources Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/tag-resources-portal.md
+
+ Title: Tag resources, resource groups, and subscriptions with Azure portal
+description: Shows how to use Azure portal to apply tags to Azure resources.
+ Last updated : 04/19/2023++
+# Apply tags with Azure portal
+
+This article describes how to use the Azure portal to tag resources. For tag recommendations and limitations, see [Use tags to organize your Azure resources and management hierarchy](tag-resources.md).
+
+## Add tags
+
+If a user doesn't have the required access for adding tags, you can assign the **Tag Contributor** role to the user. For more information, see [Tutorial: Grant a user access to Azure resources using RBAC and the Azure portal](../../role-based-access-control/quickstart-assign-role-user-portal.md).
+
+1. To view the tags for a resource or a resource group, look for existing tags in the overview. If you have not previously applied tags, the list is empty.
+
+ ![View tags for resource or resource group](./media/tag-resources-portal/view-tags.png)
+
+1. To add a tag, select **Click here to add tags**.
+
+1. Provide a name and value.
+
+ ![Add tag](./media/tag-resources-portal/add-tag.png)
+
+1. Continue adding tags as needed. When done, select **Save**.
+
+ ![Save tags](./media/tag-resources-portal/save-tags.png)
+
+1. The tags are now displayed in the overview.
+
+ ![Show tags](./media/tag-resources-portal/view-new-tags.png)
+
+## Edit tags
+
+1. To add or delete a tag, select **change**.
+
+1. To delete a tag, select the trash icon. Then, select **Save**.
+
+ ![Delete tag](./media/tag-resources-portal/delete-tag.png)
+
+## Add tags to multiple resources
+
+To bulk assign tags to multiple resources:
+
+1. From any list of resources, select the checkbox for the resources you want to assign the tag to. Then, select **Assign tags**.
+
+ ![Select multiple resources](./media/tag-resources-portal/select-multiple-resources.png)
+
+1. Add names and values. When done, select **Save**.
+
+ ![Select assign](./media/tag-resources-portal/select-assign.png)
+
+## View resources by tag
+
+To view all resources with a tag:
+
+1. On the Azure portal menu, search for **tags**. Select it from the available options.
+
+ ![Find by tag](./media/tag-resources-portal/find-tags-general.png)
+
+1. Select the tag for viewing resources.
+
+ ![Select tag](./media/tag-resources-portal/select-tag.png)
+
+1. All resources with that tag are displayed.
+
+ ![View resources by tag](./media/tag-resources-portal/view-resources-by-tag.png)
+
+## Next steps
+
+* Not all resource types support tags. To determine if you can apply a tag to a resource type, see [Tag support for Azure resources](tag-support.md).
+* For recommendations on how to implement a tagging strategy, see [Resource naming and tagging decision guide](/azure/cloud-adoption-framework/decision-guides/resource-tagging/?toc=/azure/azure-resource-manager/management/toc.json).
+* For tag recommendations and limitations, see [Use tags to organize your Azure resources and management hierarchy](tag-resources.md).
azure-resource-manager Tag Resources Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/tag-resources-powershell.md
+
+ Title: Tag resources, resource groups, and subscriptions with Azure PowerShell
+description: Shows how to use Azure PowerShell to apply tags to Azure resources.
+ Last updated : 04/19/2023++
+# Apply tags with Azure PowerShell
+
+This article describes how to use Azure PowerShell to tag resources, resource groups, and subscriptions. For tag recommendations and limitations, see [Use tags to organize your Azure resources and management hierarchy](tag-resources.md).
+
+## Apply tags
+
+Azure PowerShell offers two commands to apply tags: [New-AzTag](/powershell/module/az.resources/new-aztag) and [Update-AzTag](/powershell/module/az.resources/update-aztag). You need to have the `Az.Resources` module 1.12.0 version or later. You can check your version with `Get-InstalledModule -Name Az.Resources`. You can install that module or [install Azure PowerShell](/powershell/azure/install-az-ps) version 3.6.1 or later.
+
+The `New-AzTag` command replaces all tags on the resource, resource group, or subscription. When you call the command, pass the resource ID of the entity you want to tag.
+
+The following example applies a set of tags to a storage account:
+
+```azurepowershell-interactive
+$tags = @{"Dept"="Finance"; "Status"="Normal"}
+$resource = Get-AzResource -Name demostorage -ResourceGroup demoGroup
+New-AzTag -ResourceId $resource.id -Tag $tags
+```
+
+When the command completes, notice that the resource has two tags.
+
+```output
+Properties :
+ Name Value
+ ====== =======
+ Dept Finance
+ Status Normal
+```
+
+If you run the command again, but this time with different tags, notice that the earlier tags disappear.
+
+```azurepowershell-interactive
+$tags = @{"Team"="Compliance"; "Environment"="Production"}
+$resource = Get-AzResource -Name demostorage -ResourceGroup demoGroup
+New-AzTag -ResourceId $resource.id -Tag $tags
+```
+
+```output
+Properties :
+ Name Value
+ =========== ==========
+ Environment Production
+ Team Compliance
+```
+
+To add tags to a resource that already has tags, use `Update-AzTag`. Set the `-Operation` parameter to `Merge`.
+
+```azurepowershell-interactive
+$tags = @{"Dept"="Finance"; "Status"="Normal"}
+$resource = Get-AzResource -Name demostorage -ResourceGroup demoGroup
+Update-AzTag -ResourceId $resource.id -Tag $tags -Operation Merge
+```
+
+Notice that the existing tags grow with the addition of the two new tags.
+
+```output
+Properties :
+ Name Value
+ =========== ==========
+ Status Normal
+ Dept Finance
+ Team Compliance
+ Environment Production
+```
+
+Each tag name can have only one value. If you provide a new value for a tag, it replaces the old value even if you use the merge operation. The following example changes the `Status` tag from _Normal_ to _Green_.
+
+```azurepowershell-interactive
+$tags = @{"Status"="Green"}
+$resource = Get-AzResource -Name demostorage -ResourceGroup demoGroup
+Update-AzTag -ResourceId $resource.id -Tag $tags -Operation Merge
+```
+
+```output
+Properties :
+ Name Value
+ =========== ==========
+ Status Green
+ Dept Finance
+ Team Compliance
+ Environment Production
+```
+
+When you set the `-Operation` parameter to `Replace`, the new set of tags replaces the existing tags.
+
+```azurepowershell-interactive
+$tags = @{"Project"="ECommerce"; "CostCenter"="00123"; "Team"="Web"}
+$resource = Get-AzResource -Name demostorage -ResourceGroup demoGroup
+Update-AzTag -ResourceId $resource.id -Tag $tags -Operation Replace
+```
+
+Only the new tags remain on the resource.
+
+```output
+Properties :
+ Name Value
+ ========== =========
+ CostCenter 00123
+ Team Web
+ Project ECommerce
+```
+
+The same commands also work with resource groups or subscriptions. Pass in the identifier of the resource group or subscription you want to tag.
+
+To add a new set of tags to a resource group, use:
+
+```azurepowershell-interactive
+$tags = @{"Dept"="Finance"; "Status"="Normal"}
+$resourceGroup = Get-AzResourceGroup -Name demoGroup
+New-AzTag -ResourceId $resourceGroup.ResourceId -tag $tags
+```
+
+To update the tags for a resource group, use:
+
+```azurepowershell-interactive
+$tags = @{"CostCenter"="00123"; "Environment"="Production"}
+$resourceGroup = Get-AzResourceGroup -Name demoGroup
+Update-AzTag -ResourceId $resourceGroup.ResourceId -Tag $tags -Operation Merge
+```
+
+To add a new set of tags to a subscription, use:
+
+```azurepowershell-interactive
+$tags = @{"CostCenter"="00123"; "Environment"="Dev"}
+$subscription = (Get-AzSubscription -SubscriptionName "Example Subscription").Id
+New-AzTag -ResourceId "/subscriptions/$subscription" -Tag $tags
+```
+
+To update the tags for a subscription, use:
+
+```azurepowershell-interactive
+$tags = @{"Team"="Web Apps"}
+$subscription = (Get-AzSubscription -SubscriptionName "Example Subscription").Id
+Update-AzTag -ResourceId "/subscriptions/$subscription" -Tag $tags -Operation Merge
+```
+
+You might have more than one resource with the same name in a resource group. In that case, you can tag each of those resources with the following commands:
+
+```azurepowershell-interactive
+$resource = Get-AzResource -ResourceName sqlDatabase1 -ResourceGroupName examplegroup
+$resource | ForEach-Object { Update-AzTag -Tag @{ "Dept"="IT"; "Environment"="Test" } -ResourceId $_.ResourceId -Operation Merge }
+```
+
+## List tags
+
+To get the tags for a resource, resource group, or subscription, use the [Get-AzTag](/powershell/module/az.resources/get-aztag) command and pass the resource ID of the entity.
+
+To see the tags for a resource, use:
+
+```azurepowershell-interactive
+$resource = Get-AzResource -Name demostorage -ResourceGroup demoGroup
+Get-AzTag -ResourceId $resource.id
+```
+
+To see the tags for a resource group, use:
+
+```azurepowershell-interactive
+$resourceGroup = Get-AzResourceGroup -Name demoGroup
+Get-AzTag -ResourceId $resourceGroup.ResourceId
+```
+
+To see the tags for a subscription, use:
+
+```azurepowershell-interactive
+$subscription = (Get-AzSubscription -SubscriptionName "Example Subscription").Id
+Get-AzTag -ResourceId "/subscriptions/$subscription"
+```
+
+## List by tag
+
+To get resources that have a specific tag name and value, use:
+
+```azurepowershell-interactive
+(Get-AzResource -Tag @{ "CostCenter"="00123"}).Name
+```
+
+To get resources that have a specific tag name with any tag value, use:
+
+```azurepowershell-interactive
+(Get-AzResource -TagName "Dept").Name
+```
+
+To get resource groups that have a specific tag name and value, use:
+
+```azurepowershell-interactive
+(Get-AzResourceGroup -Tag @{ "CostCenter"="00123" }).ResourceGroupName
+```
+
+## Remove tags
+
+To remove specific tags, use `Update-AzTag` and set `-Operation` to `Delete`. Pass the resource ID of the entity and the tags you want to delete.
+
+```azurepowershell-interactive
+$removeTags = @{"Project"="ECommerce"; "Team"="Web"}
+$resource = Get-AzResource -Name demostorage -ResourceGroup demoGroup
+Update-AzTag -ResourceId $resource.id -Tag $removeTags -Operation Delete
+```
+
+The specified tags are removed.
+
+```output
+Properties :
+ Name Value
+ ========== =====
+ CostCenter 00123
+```
+
+To remove all tags, use the [Remove-AzTag](/powershell/module/az.resources/remove-aztag) command.
+
+```azurepowershell-interactive
+$subscription = (Get-AzSubscription -SubscriptionName "Example Subscription").Id
+Remove-AzTag -ResourceId "/subscriptions/$subscription"
+```
+
+## Next steps
+
+* Not all resource types support tags. To determine if you can apply a tag to a resource type, see [Tag support for Azure resources](tag-support.md).
+* For recommendations on how to implement a tagging strategy, see [Resource naming and tagging decision guide](/azure/cloud-adoption-framework/decision-guides/resource-tagging/?toc=/azure/azure-resource-manager/management/toc.json).
+* For tag recommendations and limitations, see [Use tags to organize your Azure resources and management hierarchy](tag-resources.md).
azure-resource-manager Tag Resources Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/tag-resources-python.md
+
+ Title: Tag resources, resource groups, and subscriptions with Python
+description: Shows how to use Python to apply tags to Azure resources.
+ Last updated : 04/19/2023
+# Apply tags with Python
+
+This article describes how to use Python to tag resources, resource groups, and subscriptions. For tag recommendations and limitations, see [Use tags to organize your Azure resources and management hierarchy](tag-resources.md).
+
+## Prerequisites
+
+* Python 3.7 or later installed. To install the latest version, see [Python.org](https://www.python.org/downloads/).
+
+* The following Azure library packages for Python installed in your virtual environment. To install any of the packages, use `pip install {package-name}`.
+ * azure-identity
+ * azure-mgmt-resource
+
+ If you have older versions of these packages already installed in your virtual environment, you may need to update them with `pip install --upgrade {package-name}`.
+
+* The examples in this article use CLI-based authentication (`AzureCliCredential`). Depending on your environment, you may need to run `az login` first to authenticate.
+
+* An environment variable with your Azure subscription ID. To get your Azure subscription ID, use:
+
+ ```azurecli-interactive
+ az account show --name 'your subscription name' --query id -o tsv
+ ```
+
+ To set the value, use the option for your environment.
+
+ #### [Windows](#tab/windows)
+
+ ```console
+ setx AZURE_SUBSCRIPTION_ID your-subscription-id
+ ```
+
+ > [!NOTE]
+ > If you only need to access the environment variable in the current running console, you can set the environment variable with `set` instead of `setx`.
+
+ After you add the environment variables, you may need to restart any running programs that will need to read the environment variable, including the console window. For example, if you're using Visual Studio as your editor, restart Visual Studio before running the example.
+
+ #### [Linux](#tab/linux)
+
+ ```bash
+ export AZURE_SUBSCRIPTION_ID=your-subscription-id
+ ```
+
+ After you add the environment variables, run `source ~/.bashrc` from your console window to make the changes effective.
+
+ #### [macOS](#tab/macos)
+
+ ##### Bash
+
+ Edit your .bash_profile, and add the environment variables:
+
+ ```bash
+ export AZURE_SUBSCRIPTION_ID=your-subscription-id
+ ```
+
+ After you add the environment variables, run `source ~/.bash_profile` from your console window to make the changes effective.
+
+## Apply tags
+
+The Azure SDK for Python offers the [ResourceManagementClient.tags.begin_create_or_update_at_scope](/python/api/azure-mgmt-resource/azure.mgmt.resource.resources.v2022_09_01.operations.tagsoperations#azure-mgmt-resource-resources-v2022-09-01-operations-tagsoperations-begin-create-or-update-at-scope) method to apply tags. It replaces all tags on the resource, resource group, or subscription. When you call the method, pass the resource ID of the entity you want to tag.
+
+The following example applies a set of tags to a storage account:
+
+```python
+import os
+from azure.identity import AzureCliCredential
+from azure.mgmt.resource import ResourceManagementClient
+from azure.mgmt.resource.resources.models import TagsResource
+
+credential = AzureCliCredential()
+subscription_id = os.environ["AZURE_SUBSCRIPTION_ID"]
+
+resource_client = ResourceManagementClient(credential, subscription_id)
+
+resource_group_name = "demoGroup"
+storage_account_name = "demostore"
+
+tags = {
+ "Dept": "Finance",
+ "Status": "Normal"
+}
+
+tag_resource = TagsResource(
+ properties={'tags': tags}
+)
+
+resource = resource_client.resources.get_by_id(
+ f"/subscriptions/{subscription_id}/resourceGroups/{resource_group_name}/providers/Microsoft.Storage/storageAccounts/{storage_account_name}",
+ "2022-09-01"
+)
+
+resource_client.tags.begin_create_or_update_at_scope(resource.id, tag_resource)
+
+print(f"Tags {tag_resource.properties.tags} were added to resource with ID: {resource.id}")
+```
+
+If you run the code again, but this time with different tags, notice that the earlier tags disappear.
+
+```python
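+# Pass this new tag set to the same begin_create_or_update_at_scope call shown in the previous example;
+# the earlier Dept and Status tags are replaced.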
+tags = {
+ "Team": "Compliance",
+ "Environment": "Production"
+}
+```
+
+To add tags to a resource that already has tags, use [ResourceManagementClient.tags.begin_update_at_scope](/python/api/azure-mgmt-resource/azure.mgmt.resource.resources.v2022_09_01.operations.tagsoperations#azure-mgmt-resource-resources-v2022-09-01-operations-tagsoperations-begin-update-at-scope). On the [TagsPatchResource](/python/api/azure-mgmt-resource/azure.mgmt.resource.resources.v2022_09_01.models.tagspatchresource) object, set the `operation` parameter to `Merge`.
+
+```python
+import os
+from azure.identity import AzureCliCredential
+from azure.mgmt.resource import ResourceManagementClient
+from azure.mgmt.resource.resources.models import TagsPatchResource
+
+credential = AzureCliCredential()
+subscription_id = os.environ["AZURE_SUBSCRIPTION_ID"]
+
+resource_client = ResourceManagementClient(credential, subscription_id)
+
+resource_group_name = "demoGroup"
+storage_account_name = "demostore"
+
+tags = {
+ "Dept": "Finance",
+ "Status": "Normal"
+}
+
+tag_patch_resource = TagsPatchResource(
+ operation="Merge",
+ properties={'tags': tags}
+)
+
+resource = resource_client.resources.get_by_id(
+ f"/subscriptions/{subscription_id}/resourceGroups/{resource_group_name}/providers/Microsoft.Storage/storageAccounts/{storage_account_name}",
+ "2022-09-01"
+)
+
+resource_client.tags.begin_update_at_scope(resource.id, tag_patch_resource)
+
+print(f"Tags {tag_patch_resource.properties.tags} were added to existing tags on resource with ID: {resource.id}")
+```
+
+Notice that the existing tags grow with the addition of the two new tags.
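+
+If you want to confirm the result, you can read the tags back at the same scope. This is a minimal sketch that reuses the `resource_client` and `resource` objects from the preceding example and the `get_at_scope` method shown later in the **List tags** section.
+
+```python
+# Read the merged tags back to verify the update (reuses objects from the previous example).
+updated_tags = resource_client.tags.get_at_scope(resource.id)
+print(updated_tags.properties.tags)
+```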
+
+Each tag name can have only one value. If you provide a new value for a tag, it replaces the old value even if you use the merge operation. The following example changes the `Status` tag from _Normal_ to _Green_.
+
+```python
+import os
+from azure.identity import AzureCliCredential
+from azure.mgmt.resource import ResourceManagementClient
+from azure.mgmt.resource.resources.models import TagsPatchResource
+
+credential = AzureCliCredential()
+subscription_id = os.environ["AZURE_SUBSCRIPTION_ID"]
+
+resource_client = ResourceManagementClient(credential, subscription_id)
+
+resource_group_name = "demoGroup"
+storage_account_name = "demostore"
+
+tags = {
+ "Status": "Green"
+}
+
+tag_patch_resource = TagsPatchResource(
+ operation="Merge",
+ properties={'tags': tags}
+)
+
+resource = resource_client.resources.get_by_id(
+ f"/subscriptions/{subscription_id}/resourceGroups/{resource_group_name}/providers/Microsoft.Storage/storageAccounts/{storage_account_name}",
+ "2022-09-01"
+)
+
+resource_client.tags.begin_update_at_scope(resource.id, tag_patch_resource)
+
+print(f"Tags {tag_patch_resource.properties.tags} were added to existing tags on resource with ID: {resource.id}")
+```
+
+When you set the `operation` parameter to `Replace`, the new set of tags replaces the existing tags.
+
+```python
+import os
+from azure.identity import AzureCliCredential
+from azure.mgmt.resource import ResourceManagementClient
+from azure.mgmt.resource.resources.models import TagsPatchResource
+
+credential = AzureCliCredential()
+subscription_id = os.environ["AZURE_SUBSCRIPTION_ID"]
+
+resource_client = ResourceManagementClient(credential, subscription_id)
+
+resource_group_name = "demoGroup"
+storage_account_name = "demostore"
+
+tags = {
+ "Project": "ECommerce",
+ "CostCenter": "00123",
+ "Team": "Web"
+}
+
+tag_patch_resource = TagsPatchResource(
+ operation="Replace",
+ properties={'tags': tags}
+)
+
+resource = resource_client.resources.get_by_id(
+ f"/subscriptions/{subscription_id}/resourceGroups/{resource_group_name}/providers/Microsoft.Storage/storageAccounts/{storage_account_name}",
+ "2022-09-01"
+)
+
+resource_client.tags.begin_update_at_scope(resource.id, tag_patch_resource)
+
+print(f"Tags {tag_patch_resource.properties.tags} replaced tags on resource with ID: {resource.id}")
+```
+
+Only the new tags remain on the resource.
+
+The same commands also work with resource groups or subscriptions. Pass in the identifier of the resource group or subscription you want to tag. To add a new set of tags to a resource group, use:
+
+```python
+import os
+from azure.identity import AzureCliCredential
+from azure.mgmt.resource import ResourceManagementClient
+from azure.mgmt.resource.resources.models import TagsResource
+
+credential = AzureCliCredential()
+subscription_id = os.environ["AZURE_SUBSCRIPTION_ID"]
+
+resource_client = ResourceManagementClient(credential, subscription_id)
+
+resource_group_name = "demoGroup"
+
+tags = {
+ "Dept": "Finance",
+ "Status": "Normal"
+}
+
+tag_resource = TagsResource(
+ properties={'tags': tags}
+)
+
+resource_group = resource_client.resource_groups.get(resource_group_name)
+
+resource_client.tags.begin_create_or_update_at_scope(resource_group.id, tag_resource)
+
+print(f"Tags {tag_resource.properties.tags} were added to resource group: {resource_group.id}")
+```
+
+To update the tags for a resource group, use:
+
+```python
+import os
+from azure.identity import AzureCliCredential
+from azure.mgmt.resource import ResourceManagementClient
+from azure.mgmt.resource.resources.models import TagsPatchResource
+
+credential = AzureCliCredential()
+subscription_id = os.environ["AZURE_SUBSCRIPTION_ID"]
+
+resource_client = ResourceManagementClient(credential, subscription_id)
+
+resource_group_name = "demoGroup"
+
+tags = {
+ "CostCenter": "00123",
+ "Environment": "Production"
+}
+
+tag_patch_resource = TagsPatchResource(
+ operation="Merge",
+ properties={'tags': tags}
+)
+
+resource_group = resource_client.resource_groups.get(resource_group_name)
+
+resource_client.tags.begin_update_at_scope(resource_group.id, tag_patch_resource)
+
+print(f"Tags {tag_patch_resource.properties.tags} were added to existing tags on resource group: {resource_group.id}")
+```
+
+To update the tags for a subscription, use:
+
+```python
+import os
+from azure.identity import AzureCliCredential
+from azure.mgmt.resource import ResourceManagementClient
+from azure.mgmt.resource.resources.models import TagsPatchResource
+
+credential = AzureCliCredential()
+subscription_id = os.environ["AZURE_SUBSCRIPTION_ID"]
+
+resource_client = ResourceManagementClient(credential, subscription_id)
+
+tags = {
+ "Team": "Web Apps"
+}
+
+tag_patch_resource = TagsPatchResource(
+ operation="Merge",
+ properties={'tags': tags}
+)
+
+resource_client.tags.begin_update_at_scope(f"/subscriptions/{subscription_id}", tag_patch_resource)
+
+print(f"Tags {tag_patch_resource.properties.tags} were added to subscription: {subscription_id}")
+
+```
+
+You might have more than one resource with the same name in a resource group. In that case, you can tag each of those resources with the following code:
+
+```python
+import os
+from azure.identity import AzureCliCredential
+from azure.mgmt.resource import ResourceManagementClient
+from azure.mgmt.resource.resources.models import TagsPatchResource
+
+credential = AzureCliCredential()
+subscription_id = os.environ["AZURE_SUBSCRIPTION_ID"]
+
+resource_client = ResourceManagementClient(credential, subscription_id)
+
+resource_group_name = "demoGroup"
+
+tags = {
+ "Dept": "IT",
+ "Environment": "Test"
+}
+
+tag_patch_resource = TagsPatchResource(
+ operation="Merge",
+ properties={'tags': tags}
+)
+
+resources = resource_client.resources.list_by_resource_group(resource_group_name, filter="name eq 'sqlDatabase1'")
+
+for resource in resources:
+ resource_client.tags.begin_update_at_scope(resource.id, tag_patch_resource)
+ print(f"Tags {tag_patch_resource.properties.tags} were added to resource: {resource.id}")
+```
+
+## List tags
+
+To get the tags for a resource, resource group, or subscription, use the [ResourceManagementClient.tags.get_at_scope](/python/api/azure-mgmt-resource/azure.mgmt.resource.resources.v2022_09_01.operations.tagsoperations#azure-mgmt-resource-resources-v2022-09-01-operations-tagsoperations-get-at-scope) method and pass the resource ID of the entity.
+
+To see the tags for a resource, use:
+
+```python
+import os
+from azure.identity import AzureCliCredential
+from azure.mgmt.resource import ResourceManagementClient
+
+credential = AzureCliCredential()
+subscription_id = os.environ["AZURE_SUBSCRIPTION_ID"]
+
+resource_group_name = "demoGroup"
+storage_account_name = "demostorage"
+
+resource_client = ResourceManagementClient(credential, subscription_id)
+
+resource = resource_client.resources.get_by_id(
+ f"/subscriptions/{subscription_id}/resourceGroups/{resource_group_name}/providers/Microsoft.Storage/storageAccounts/{storage_account_name}",
+ "2022-09-01"
+)
+
+resource_tags = resource_client.tags.get_at_scope(resource.id)
+print(resource_tags.properties.tags)
+```
+
+To see the tags for a resource group, use:
+
+```python
+import os
+from azure.identity import AzureCliCredential
+from azure.mgmt.resource import ResourceManagementClient
+
+credential = AzureCliCredential()
+subscription_id = os.environ["AZURE_SUBSCRIPTION_ID"]
+
+resource_client = ResourceManagementClient(credential, subscription_id)
+
+resource_group = resource_client.resource_groups.get("demoGroup")
+
+resource_group_tags = resource_client.tags.get_at_scope(resource_group.id)
+print(resource_group_tags.properties.tags)
+```
+
+To see the tags for a subscription, use:
+
+```python
+import os
+from azure.identity import AzureCliCredential
+from azure.mgmt.resource import ResourceManagementClient
+
+credential = AzureCliCredential()
+subscription_id = os.environ["AZURE_SUBSCRIPTION_ID"]
+
+resource_client = ResourceManagementClient(credential, subscription_id)
+
+subscription_tags = resource_client.tags.get_at_scope(f"/subscriptions/{subscription_id}")
+print(subscription_tags.properties.tags)
+```
+
+## List by tag
+
+To get resources that have a specific tag name and value, use:
+
+```python
+import os
+from azure.identity import AzureCliCredential
+from azure.mgmt.resource import ResourceManagementClient
+
+credential = AzureCliCredential()
+subscription_id = os.environ["AZURE_SUBSCRIPTION_ID"]
+
+resource_client = ResourceManagementClient(credential, subscription_id)
+
+resources = resource_client.resources.list(filter="tagName eq 'CostCenter' and tagValue eq '00123'")
+
+for resource in resources:
+ print(resource.name)
+```
+
+To get resources that have a specific tag name with any tag value, use:
+
+```python
+import os
+from azure.identity import AzureCliCredential
+from azure.mgmt.resource import ResourceManagementClient
+
+credential = AzureCliCredential()
+subscription_id = os.environ["AZURE_SUBSCRIPTION_ID"]
+
+resource_client = ResourceManagementClient(credential, subscription_id)
+
+resources = resource_client.resources.list(filter="tagName eq 'Dept'")
+
+for resource in resources:
+ print(resource.name)
+```
+
+To get resource groups that have a specific tag name and value, use:
+
+```python
+import os
+from azure.identity import AzureCliCredential
+from azure.mgmt.resource import ResourceManagementClient
+
+credential = AzureCliCredential()
+subscription_id = os.environ["AZURE_SUBSCRIPTION_ID"]
+
+resource_client = ResourceManagementClient(credential, subscription_id)
+
+resource_groups = resource_client.resource_groups.list(filter="tagName eq 'CostCenter' and tagValue eq '00123'")
+
+for resource_group in resource_groups:
+ print(resource_group.name)
+```
+
+## Remove tags
+
+To remove specific tags, set `operation` to `Delete`. Pass the resource ID of the entity and the tags you want to delete.
+
+```python
+import os
+from azure.identity import AzureCliCredential
+from azure.mgmt.resource import ResourceManagementClient
+from azure.mgmt.resource.resources.models import TagsPatchResource
+
+credential = AzureCliCredential()
+subscription_id = os.environ["AZURE_SUBSCRIPTION_ID"]
+
+resource_client = ResourceManagementClient(credential, subscription_id)
+
+resource_group_name = "demoGroup"
+storage_account_name = "demostore"
+
+tags = {
+ "Dept": "IT",
+ "Environment": "Test"
+}
+
+tag_patch_resource = TagsPatchResource(
+ operation="Delete",
+ properties={'tags': tags}
+)
+
+resource = resource_client.resources.get_by_id(
+ f"/subscriptions/{subscription_id}/resourceGroups/{resource_group_name}/providers/Microsoft.Storage/storageAccounts/{storage_account_name}",
+ "2022-09-01"
+)
+
+resource_client.tags.begin_update_at_scope(resource.id, tag_patch_resource)
+
+print(f"Tags {tag_patch_resource.properties.tags} were removed from resource: {resource.id}")
+```
+
+The specified tags are removed.
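+
+As with the earlier examples, you can confirm the result by reading the remaining tags back at the same scope. This sketch reuses the `resource_client` and `resource` objects from the preceding example.
+
+```python
+# Verify which tags remain after the delete operation (reuses objects from the previous example).
+remaining_tags = resource_client.tags.get_at_scope(resource.id)
+print(remaining_tags.properties.tags)
+```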
+
+To remove all tags, use the [ResourceManagementClient.tags.begin_delete_at_scope](/python/api/azure-mgmt-resource/azure.mgmt.resource.resources.v2022_09_01.operations.tagsoperations#azure-mgmt-resource-resources-v2022-09-01-operations-tagsoperations-begin-delete-at-scope) method.
+
+```python
+import os
+from azure.identity import AzureCliCredential
+from azure.mgmt.resource import ResourceManagementClient
+
+credential = AzureCliCredential()
+subscription_id = os.environ["AZURE_SUBSCRIPTION_ID"]
+
+resource_client = ResourceManagementClient(credential, subscription_id)
+
+# ResourceManagementClient doesn't expose a subscriptions operation group, so build the scope directly.
+resource_client.tags.begin_delete_at_scope(f"/subscriptions/{subscription_id}")
+```
+
+## Next steps
+
+* Not all resource types support tags. To determine if you can apply a tag to a resource type, see [Tag support for Azure resources](tag-support.md).
+* For recommendations on how to implement a tagging strategy, see [Resource naming and tagging decision guide](/azure/cloud-adoption-framework/decision-guides/resource-tagging/?toc=/azure/azure-resource-manager/management/toc.json).
+* For tag recommendations and limitations, see [Use tags to organize your Azure resources and management hierarchy](tag-resources.md).
azure-resource-manager Tag Resources Templates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/tag-resources-templates.md
+
+ Title: Tag resources, resource groups, and subscriptions with ARM templates
+description: Shows how to use ARM templates to apply tags to Azure resources.
+ Last updated : 04/19/2023
+# Apply tags with ARM templates
+
+This article describes how to use Azure Resource Manager templates (ARM templates) to tag resources, resource groups, and subscriptions during deployment. For tag recommendations and limitations, see [Use tags to organize your Azure resources and management hierarchy](tag-resources.md).
+
+> [!NOTE]
+> The tags you apply through an ARM template or Bicep file overwrite any existing tags.
+
+## Apply values
+
+The following example deploys a storage account with three tags. Two of the tags (`Dept` and `Environment`) are set to literal values. One tag (`LastDeployed`) is set to a parameter that defaults to the current date.
+
+```json
+{
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
+ "contentVersion": "1.0.0.0",
+ "parameters": {
+ "utcShort": {
+ "type": "string",
+ "defaultValue": "[utcNow('d')]"
+ },
+ "location": {
+ "type": "string",
+ "defaultValue": "[resourceGroup().location]"
+ }
+ },
+ "resources": [
+ {
+ "type": "Microsoft.Storage/storageAccounts",
+ "apiVersion": "2021-04-01",
+ "name": "[concat('storage', uniqueString(resourceGroup().id))]",
+ "location": "[parameters('location')]",
+ "sku": {
+ "name": "Standard_LRS"
+ },
+ "kind": "Storage",
+ "tags": {
+ "Dept": "Finance",
+ "Environment": "Production",
+ "LastDeployed": "[parameters('utcShort')]"
+ },
+ "properties": {}
+ }
+ ]
+}
+```
+
+## Apply an object
+
+You can define an object parameter that stores several tags and apply that object to the tag element. This approach provides more flexibility than the previous example because the object can have different properties. Each property in the object becomes a separate tag for the resource. The following example has a parameter named `tagValues` that's applied to the tag element.
+
+```json
+{
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
+ "contentVersion": "1.0.0.0",
+ "parameters": {
+ "location": {
+ "type": "string",
+ "defaultValue": "[resourceGroup().location]"
+ },
+ "tagValues": {
+ "type": "object",
+ "defaultValue": {
+ "Dept": "Finance",
+ "Environment": "Production"
+ }
+ }
+ },
+ "resources": [
+ {
+ "type": "Microsoft.Storage/storageAccounts",
+ "apiVersion": "2021-04-01",
+ "name": "[concat('storage', uniqueString(resourceGroup().id))]",
+ "location": "[parameters('location')]",
+ "sku": {
+ "name": "Standard_LRS"
+ },
+ "kind": "Storage",
+ "tags": "[parameters('tagValues')]",
+ "properties": {}
+ }
+ ]
+}
+```
+
+## Apply a JSON string
+
+To store many values in a single tag, apply a JSON string that represents the values. The entire JSON string is stored as one tag that can't exceed 256 characters. The following example has a single tag named `CostCenter` that contains several values from a JSON string:
+
+```json
+{
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
+ "contentVersion": "1.0.0.0",
+ "parameters": {
+ "location": {
+ "type": "string",
+ "defaultValue": "[resourceGroup().location]"
+ }
+ },
+ "resources": [
+ {
+ "type": "Microsoft.Storage/storageAccounts",
+ "apiVersion": "2021-04-01",
+ "name": "[concat('storage', uniqueString(resourceGroup().id))]",
+ "location": "[parameters('location')]",
+ "sku": {
+ "name": "Standard_LRS"
+ },
+ "kind": "Storage",
+ "tags": {
+ "CostCenter": "{\"Dept\":\"Finance\",\"Environment\":\"Production\"}"
+ },
+ "properties": {}
+ }
+ ]
+}
+```
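+
+After deployment, a client can parse the JSON string back into individual values. The following is a hedged sketch using Azure PowerShell; the resource and resource group names (`demostorage`, `demoGroup`) are placeholders, not values created by this template.
+
+```azurepowershell-interactive
+# Read the CostCenter tag and parse the JSON string it stores (names are placeholders).
+$resource = Get-AzResource -Name demostorage -ResourceGroupName demoGroup
+$costCenter = $resource.Tags['CostCenter'] | ConvertFrom-Json
+$costCenter.Dept         # Finance
+$costCenter.Environment  # Production
+```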
+
+## Apply tags from resource group
+
+To apply tags from a resource group to a resource, use the [resourceGroup()](../templates/template-functions-resource.md#resourcegroup) function. When you get the tag value, use the `tags[tag-name]` syntax instead of the `tags.tag-name` syntax, because some characters aren't parsed correctly in the dot notation.
+
+```json
+{
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
+ "contentVersion": "1.0.0.0",
+ "parameters": {
+ "location": {
+ "type": "string",
+ "defaultValue": "[resourceGroup().location]"
+ }
+ },
+ "resources": [
+ {
+ "type": "Microsoft.Storage/storageAccounts",
+ "apiVersion": "2021-04-01",
+ "name": "[concat('storage', uniqueString(resourceGroup().id))]",
+ "location": "[parameters('location')]",
+ "sku": {
+ "name": "Standard_LRS"
+ },
+ "kind": "Storage",
+ "tags": {
+ "Dept": "[resourceGroup().tags['Dept']]",
+ "Environment": "[resourceGroup().tags['Environment']]"
+ },
+ "properties": {}
+ }
+ ]
+}
+```
+
+## Apply tags to resource groups or subscriptions
+
+You can add tags to a resource group or subscription by deploying the `Microsoft.Resources/tags` resource type. The tags are applied to the target resource group or subscription for the deployment. Each time you deploy the template, you replace any previous tags.
+
+```json
+{
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
+ "contentVersion": "1.0.0.0",
+ "parameters": {
+ "tagName": {
+ "type": "string",
+ "defaultValue": "TeamName"
+ },
+ "tagValue": {
+ "type": "string",
+ "defaultValue": "AppTeam1"
+ }
+ },
+ "resources": [
+ {
+ "type": "Microsoft.Resources/tags",
+ "name": "default",
+ "apiVersion": "2021-04-01",
+ "properties": {
+ "tags": {
+ "[parameters('tagName')]": "[parameters('tagValue')]"
+ }
+ }
+ }
+ ]
+}
+```
+
+To apply the tags to a resource group, use either Azure PowerShell or Azure CLI. Deploy to the resource group that you want to tag.
+
+```azurepowershell-interactive
+New-AzResourceGroupDeployment -ResourceGroupName exampleGroup -TemplateFile https://raw.githubusercontent.com/Azure/azure-docs-json-samples/master/azure-resource-manager/tags.json
+```
+
+```azurecli-interactive
+az deployment group create --resource-group exampleGroup --template-uri https://raw.githubusercontent.com/Azure/azure-docs-json-samples/master/azure-resource-manager/tags.json
+```
+
+To apply the tags to a subscription, use either PowerShell or Azure CLI. Deploy to the subscription that you want to tag.
+
+```azurepowershell-interactive
+New-AzSubscriptionDeployment -name tagresourcegroup -Location westus2 -TemplateUri https://raw.githubusercontent.com/Azure/azure-docs-json-samples/master/azure-resource-manager/tags.json
+```
+
+```azurecli-interactive
+az deployment sub create --name tagresourcegroup --location westus2 --template-uri https://raw.githubusercontent.com/Azure/azure-docs-json-samples/master/azure-resource-manager/tags.json
+```
+
+For more information about subscription deployments, see [Create resource groups and resources at the subscription level](../templates/deploy-to-subscription.md).
+
+The following template adds the tags from an object to either a resource group or subscription.
+
+```json
+{
+ "$schema": "https://schema.management.azure.com/schemas/2018-05-01/subscriptionDeploymentTemplate.json#",
+ "contentVersion": "1.0.0.0",
+ "parameters": {
+ "tags": {
+ "type": "object",
+ "defaultValue": {
+ "TeamName": "AppTeam1",
+ "Dept": "Finance",
+ "Environment": "Production"
+ }
+ }
+ },
+ "resources": [
+ {
+ "type": "Microsoft.Resources/tags",
+ "apiVersion": "2021-04-01",
+ "name": "default",
+ "properties": {
+ "tags": "[parameters('tags')]"
+ }
+ }
+ ]
+}
+```
+
+## Next steps
+
+* Not all resource types support tags. To determine if you can apply a tag to a resource type, see [Tag support for Azure resources](tag-support.md).
+* For recommendations on how to implement a tagging strategy, see [Resource naming and tagging decision guide](/azure/cloud-adoption-framework/decision-guides/resource-tagging/?toc=/azure/azure-resource-manager/management/toc.json).
+* For tag recommendations and limitations, see [Use tags to organize your Azure resources and management hierarchy](tag-resources.md).
azure-resource-manager Tag Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/tag-resources.md
Title: Tag resources, resource groups, and subscriptions for logical organization
-description: Shows how to apply tags to organize Azure resources for billing and managing.
+description: Describes the conditions and limitations for using tags with Azure resources.
Previously updated : 05/25/2022
 Last updated : 04/19/2023

# Use tags to organize your Azure resources and management hierarchy
-Tags are metadata elements that you apply to your Azure resources. They're key-value pairs that help you identify resources based on settings that are relevant to your organization. If you want to track the deployment environment for your resources, add a key named Environment. To identify the resources deployed to production, give them a value of Production. Fully formed, the key-value pair becomes, Environment = Production.
+Tags are metadata elements that you apply to your Azure resources. They're key-value pairs that help you identify resources based on settings that are relevant to your organization. If you want to track the deployment environment for your resources, add a key named `Environment`. To identify the resources deployed to production, give them a value of `Production`. The fully formed key-value pair is `Environment = Production`.
+
+This article describes the conditions and limitations for using tags. For steps on how to work with tags, see:
+
+* [Portal](tag-resources-portal.md)
+* [Azure CLI](tag-resources-cli.md)
+* [Azure PowerShell](tag-resources-powershell.md)
+* [Python](tag-resources-python.md)
+* [ARM templates](tag-resources-templates.md)
+* [Bicep](tag-resources-bicep.md)
+
+## Tag usage and recommendations
You can apply tags to your Azure resources, resource groups, and subscriptions.
There are two ways to get the required access to tag resources.
- You can have write access to the resource itself. The [Contributor](../../role-based-access-control/built-in-roles.md#contributor) role grants the required access to apply tags to any entity. To apply tags to only one resource type, use the contributor role for that resource. To apply tags to virtual machines, for example, use the [Virtual Machine Contributor](../../role-based-access-control/built-in-roles.md#virtual-machine-contributor).
-## PowerShell
-
-### Apply tags
-
-Azure PowerShell offers two commands to apply tags: [New-AzTag](/powershell/module/az.resources/new-aztag) and [Update-AzTag](/powershell/module/az.resources/update-aztag). You need to have the `Az.Resources` module 1.12.0 version or later. You can check your version with `Get-InstalledModule -Name Az.Resources`. You can install that module or [install Azure PowerShell](/powershell/azure/install-az-ps) version 3.6.1 or later.
-
-The `New-AzTag` replaces all tags on the resource, resource group, or subscription. When you call the command, pass the resource ID of the entity you want to tag.
-
-The following example applies a set of tags to a storage account:
-
-```azurepowershell-interactive
-$tags = @{"Dept"="Finance"; "Status"="Normal"}
-$resource = Get-AzResource -Name demoStorage -ResourceGroup demoGroup
-New-AzTag -ResourceId $resource.id -Tag $tags
-```
-
-When the command completes, notice that the resource has two tags.
-
-```output
-Properties :
- Name Value
- ====== =======
- Dept Finance
- Status Normal
-```
-
-If you run the command again, but this time with different tags, notice that the earlier tags disappear.
-
-```azurepowershell-interactive
-$tags = @{"Team"="Compliance"; "Environment"="Production"}
-New-AzTag -ResourceId $resource.id -Tag $tags
-```
-
-```output
-Properties :
- Name Value
- =========== ==========
- Environment Production
- Team Compliance
-```
-
-To add tags to a resource that already has tags, use `Update-AzTag`. Set the `-Operation` parameter to `Merge`.
-
-```azurepowershell-interactive
-$tags = @{"Dept"="Finance"; "Status"="Normal"}
-Update-AzTag -ResourceId $resource.id -Tag $tags -Operation Merge
-```
-
-Notice that the existing tags grow with the addition of the two new tags.
-
-```output
-Properties :
- Name Value
- =========== ==========
- Status Normal
- Dept Finance
- Team Compliance
- Environment Production
-```
-
-Each tag name can have only one value. If you provide a new value for a tag, it replaces the old value even if you use the merge operation. The following example changes the `Status` tag from _Normal_ to _Green_.
-
-```azurepowershell-interactive
-$tags = @{"Status"="Green"}
-Update-AzTag -ResourceId $resource.id -Tag $tags -Operation Merge
-```
-
-```output
-Properties :
- Name Value
- =========== ==========
- Status Green
- Dept Finance
- Team Compliance
- Environment Production
-```
-
-When you set the `-Operation` parameter to `Replace`, the new set of tags replaces the existing tags.
-
-```azurepowershell-interactive
-$tags = @{"Project"="ECommerce"; "CostCenter"="00123"; "Team"="Web"}
-Update-AzTag -ResourceId $resource.id -Tag $tags -Operation Replace
-```
-
-Only the new tags remain on the resource.
-
-```output
-Properties :
- Name Value
- ========== =========
- CostCenter 00123
- Team Web
- Project ECommerce
-```
-
-The same commands also work with resource groups or subscriptions. Pass them in the identifier of the resource group or subscription you want to tag.
-
-To add a new set of tags to a resource group, use:
-
-```azurepowershell-interactive
-$tags = @{"Dept"="Finance"; "Status"="Normal"}
-$resourceGroup = Get-AzResourceGroup -Name demoGroup
-New-AzTag -ResourceId $resourceGroup.ResourceId -tag $tags
-```
-
-To update the tags for a resource group, use:
-
-```azurepowershell-interactive
-$tags = @{"CostCenter"="00123"; "Environment"="Production"}
-$resourceGroup = Get-AzResourceGroup -Name demoGroup
-Update-AzTag -ResourceId $resourceGroup.ResourceId -Tag $tags -Operation Merge
-```
-
-To add a new set of tags to a subscription, use:
-
-```azurepowershell-interactive
-$tags = @{"CostCenter"="00123"; "Environment"="Dev"}
-$subscription = (Get-AzSubscription -SubscriptionName "Example Subscription").Id
-New-AzTag -ResourceId "/subscriptions/$subscription" -Tag $tags
-```
-
-To update the tags for a subscription, use:
-
-```azurepowershell-interactive
-$tags = @{"Team"="Web Apps"}
-$subscription = (Get-AzSubscription -SubscriptionName "Example Subscription").Id
-Update-AzTag -ResourceId "/subscriptions/$subscription" -Tag $tags -Operation Merge
-```
-
-You may have more than one resource with the same name in a resource group. In that case, you can set each resource with the following commands:
-
-```azurepowershell-interactive
-$resource = Get-AzResource -ResourceName sqlDatabase1 -ResourceGroupName examplegroup
-$resource | ForEach-Object { Update-AzTag -Tag @{ "Dept"="IT"; "Environment"="Test" } -ResourceId $_.ResourceId -Operation Merge }
-```
-
-### List tags
-
-To get the tags for a resource, resource group, or subscription, use the [Get-AzTag](/powershell/module/az.resources/get-aztag) command and pass the resource ID of the entity.
-
-To see the tags for a resource, use:
-
-```azurepowershell-interactive
-$resource = Get-AzResource -Name demoStorage -ResourceGroup demoGroup
-Get-AzTag -ResourceId $resource.id
-```
-
-To see the tags for a resource group, use:
-
-```azurepowershell-interactive
-$resourceGroup = Get-AzResourceGroup -Name demoGroup
-Get-AzTag -ResourceId $resourceGroup.ResourceId
-```
-
-To see the tags for a subscription, use:
-
-```azurepowershell-interactive
-$subscription = (Get-AzSubscription -SubscriptionName "Example Subscription").Id
-Get-AzTag -ResourceId "/subscriptions/$subscription"
-```
-
-### List by tag
-
-To get resources that have a specific tag name and value, use:
-
-```azurepowershell-interactive
-(Get-AzResource -Tag @{ "CostCenter"="00123"}).Name
-```
-
-To get resources that have a specific tag name with any tag value, use:
-
-```azurepowershell-interactive
-(Get-AzResource -TagName "Dept").Name
-```
-
-To get resource groups that have a specific tag name and value, use:
-
-```azurepowershell-interactive
-(Get-AzResourceGroup -Tag @{ "CostCenter"="00123" }).ResourceGroupName
-```
-
-### Remove tags
-
-To remove specific tags, use `Update-AzTag` and set `-Operation` to `Delete`. Pass the resource IDs of the tags you want to delete.
-
-```azurepowershell-interactive
-$removeTags = @{"Project"="ECommerce"; "Team"="Web"}
-Update-AzTag -ResourceId $resource.id -Tag $removeTags -Operation Delete
-```
-
-The specified tags are removed.
-
-```output
-Properties :
- Name Value
- ========== =====
- CostCenter 00123
-```
-
-To remove all tags, use the [Remove-AzTag](/powershell/module/az.resources/remove-aztag) command.
-
-```azurepowershell-interactive
-$subscription = (Get-AzSubscription -SubscriptionName "Example Subscription").Id
-Remove-AzTag -ResourceId "/subscriptions/$subscription"
-```
-
-## Azure CLI
-
-### Apply tags
-
-Azure CLI offers two commands to apply tags: [az tag create](/cli/azure/tag#az-tag-create) and [az tag update](/cli/azure/tag#az-tag-update). You need to have the Azure CLI 2.10.0 version or later. You can check your version with `az version`. To update or install it, see [Install the Azure CLI](/cli/azure/install-azure-cli).
-
-The `az tag create` replaces all tags on the resource, resource group, or subscription. When you call the command, pass the resource ID of the entity you want to tag.
-
-The following example applies a set of tags to a storage account:
-
-```azurecli-interactive
-resource=$(az resource show -g demoGroup -n demoStorage --resource-type Microsoft.Storage/storageAccounts --query "id" --output tsv)
-az tag create --resource-id $resource --tags Dept=Finance Status=Normal
-```
-
-When the command completes, notice that the resource has two tags.
-
-```output
-"properties": {
- "tags": {
- "Dept": "Finance",
- "Status": "Normal"
- }
-},
-```
-
-If you run the command again, but this time with different tags, notice that the earlier tags disappear.
-
-```azurecli-interactive
-az tag create --resource-id $resource --tags Team=Compliance Environment=Production
-```
-
-```output
-"properties": {
- "tags": {
- "Environment": "Production",
- "Team": "Compliance"
- }
-},
-```
-
-To add tags to a resource that already has tags, use `az tag update`. Set the `--operation` parameter to `Merge`.
-
-```azurecli-interactive
-az tag update --resource-id $resource --operation Merge --tags Dept=Finance Status=Normal
-```
-
-Notice that the existing tags grow with the addition of the two new tags.
-
-```output
-"properties": {
- "tags": {
- "Dept": "Finance",
- "Environment": "Production",
- "Status": "Normal",
- "Team": "Compliance"
- }
-},
-```
-
-Each tag name can have only one value. If you provide a new value for a tag, the new tag replaces the old value, even if you use the merge operation. The following example changes the `Status` tag from _Normal_ to _Green_.
-
-```azurecli-interactive
-az tag update --resource-id $resource --operation Merge --tags Status=Green
-```
-
-```output
-"properties": {
- "tags": {
- "Dept": "Finance",
- "Environment": "Production",
- "Status": "Green",
- "Team": "Compliance"
- }
-},
-```
-
-When you set the `--operation` parameter to `Replace`, the new set of tags replaces the existing tags.
-
-```azurecli-interactive
-az tag update --resource-id $resource --operation Replace --tags Project=ECommerce CostCenter=00123 Team=Web
-```
-
-Only the new tags remain on the resource.
-
-```output
-"properties": {
- "tags": {
- "CostCenter": "00123",
- "Project": "ECommerce",
- "Team": "Web"
- }
-},
-```
-
-The same commands also work with resource groups or subscriptions. Pass them in the identifier of the resource group or subscription you want to tag.
-
-To add a new set of tags to a resource group, use:
-
-```azurecli-interactive
-group=$(az group show -n demoGroup --query id --output tsv)
-az tag create --resource-id $group --tags Dept=Finance Status=Normal
-```
-
-To update the tags for a resource group, use:
-
-```azurecli-interactive
-az tag update --resource-id $group --operation Merge --tags CostCenter=00123 Environment=Production
-```
-
-To add a new set of tags to a subscription, use:
-
-```azurecli-interactive
-sub=$(az account show --subscription "Demo Subscription" --query id --output tsv)
-az tag create --resource-id /subscriptions/$sub --tags CostCenter=00123 Environment=Dev
-```
-
-To update the tags for a subscription, use:
-
-```azurecli-interactive
-az tag update --resource-id /subscriptions/$sub --operation Merge --tags Team="Web Apps"
-```
-
-### List tags
-
-To get the tags for a resource, resource group, or subscription, use the [az tag list](/cli/azure/tag#az-tag-list) command and pass the resource ID of the entity.
-
-To see the tags for a resource, use:
-
-```azurecli-interactive
-resource=$(az resource show -g demoGroup -n demoStorage --resource-type Microsoft.Storage/storageAccounts --query "id" --output tsv)
-az tag list --resource-id $resource
-```
-
-To see the tags for a resource group, use:
-
-```azurecli-interactive
-group=$(az group show -n demoGroup --query id --output tsv)
-az tag list --resource-id $group
-```
-
-To see the tags for a subscription, use:
-
-```azurecli-interactive
-sub=$(az account show --subscription "Demo Subscription" --query id --output tsv)
-az tag list --resource-id /subscriptions/$sub
-```
-
-### List by tag
-
-To get resources that have a specific tag name and value, use:
-
-```azurecli-interactive
-az resource list --tag CostCenter=00123 --query [].name
-```
-
-To get resources that have a specific tag name with any tag value, use:
-
-```azurecli-interactive
-az resource list --tag Team --query [].name
-```
-
-To get resource groups that have a specific tag name and value, use:
-
-```azurecli-interactive
-az group list --tag Dept=Finance
-```
-
-### Remove tags
-
-To remove specific tags, use `az tag update` and set `--operation` to `Delete`. Pass the resource ID of the tags you want to delete.
-
-```azurecli-interactive
-az tag update --resource-id $resource --operation Delete --tags Project=ECommerce Team=Web
-```
-
-You've removed the specified tags.
-
-```output
-"properties": {
- "tags": {
- "CostCenter": "00123"
- }
-},
-```
-
-To remove all tags, use the [az tag delete](/cli/azure/tag#az-tag-delete) command.
-
-```azurecli-interactive
-az tag delete --resource-id $resource
-```
-
-### Handling spaces
-
-If your tag names or values include spaces, enclose them in quotation marks.
-
-```azurecli-interactive
-az tag update --resource-id $group --operation Merge --tags "Cost Center"=Finance-1222 Location="West US"
-```
-
-## ARM templates
-
-You can tag resources, resource groups, and subscriptions during deployment with an ARM template.
-
-> [!NOTE]
-> The tags you apply through an ARM template or Bicep file overwrite any existing tags.
-
-### Apply values
-
-The following example deploys a storage account with three tags. Two of the tags (`Dept` and `Environment`) are set to literal values. One tag (`LastDeployed`) is set to a parameter that defaults to the current date.
-
-# [JSON](#tab/json)
-
-```json
-{
- "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
- "contentVersion": "1.0.0.0",
- "parameters": {
- "utcShort": {
- "type": "string",
- "defaultValue": "[utcNow('d')]"
- },
- "location": {
- "type": "string",
- "defaultValue": "[resourceGroup().location]"
- }
- },
- "resources": [
- {
- "type": "Microsoft.Storage/storageAccounts",
- "apiVersion": "2021-04-01",
- "name": "[concat('storage', uniqueString(resourceGroup().id))]",
- "location": "[parameters('location')]",
- "sku": {
- "name": "Standard_LRS"
- },
- "kind": "Storage",
- "tags": {
- "Dept": "Finance",
- "Environment": "Production",
- "LastDeployed": "[parameters('utcShort')]"
- },
- "properties": {}
- }
- ]
-}
-```
-
-# [Bicep](#tab/bicep)
-
-```Bicep
-param location string = resourceGroup().location
-param utcShort string = utcNow('d')
-
-resource stgAccount 'Microsoft.Storage/storageAccounts@2021-04-01' = {
- name: 'storage${uniqueString(resourceGroup().id)}'
- location: location
- sku: {
- name: 'Standard_LRS'
- }
- kind: 'Storage'
- tags: {
- Dept: 'Finance'
- Environment: 'Production'
- LastDeployed: utcShort
- }
-}
-```
---
-### Apply an object
-
-You can define an object parameter that stores several tags and apply that object to the tag element. This approach provides more flexibility than the previous example because the object can have different properties. Each property in the object becomes a separate tag for the resource. The following example has a parameter named `tagValues` that's applied to the tag element.
-
-# [JSON](#tab/json)
-
-```json
-{
- "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
- "contentVersion": "1.0.0.0",
- "parameters": {
- "location": {
- "type": "string",
- "defaultValue": "[resourceGroup().location]"
- },
- "tagValues": {
- "type": "object",
- "defaultValue": {
- "Dept": "Finance",
- "Environment": "Production"
- }
- }
- },
- "resources": [
- {
- "type": "Microsoft.Storage/storageAccounts",
- "apiVersion": "2021-04-01",
- "name": "[concat('storage', uniqueString(resourceGroup().id))]",
- "location": "[parameters('location')]",
- "sku": {
- "name": "Standard_LRS"
- },
- "kind": "Storage",
- "tags": "[parameters('tagValues')]",
- "properties": {}
- }
- ]
-}
-```
-
-# [Bicep](#tab/bicep)
-
-```Bicep
-param location string = resourceGroup().location
-param tagValues object = {
- Dept: 'Finance'
- Environment: 'Production'
-}
-
-resource stgAccount 'Microsoft.Storage/storageAccounts@2021-04-01' = {
- name: 'storage${uniqueString(resourceGroup().id)}'
- location: location
- sku: {
- name: 'Standard_LRS'
- }
- kind: 'Storage'
- tags: tagValues
-}
-```
---
-### Apply a JSON string
-
-To store many values in a single tag, apply a JSON string that represents the values. The entire JSON string is stored as one tag that can't exceed 256 characters. The following example has a single tag named `CostCenter` that contains several values from a JSON string:
-
-# [JSON](#tab/json)
-
-```json
-{
- "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
- "contentVersion": "1.0.0.0",
- "parameters": {
- "location": {
- "type": "string",
- "defaultValue": "[resourceGroup().location]"
- }
- },
- "resources": [
- {
- "type": "Microsoft.Storage/storageAccounts",
- "apiVersion": "2021-04-01",
- "name": "[concat('storage', uniqueString(resourceGroup().id))]",
- "location": "[parameters('location')]",
- "sku": {
- "name": "Standard_LRS"
- },
- "kind": "Storage",
- "tags": {
- "CostCenter": "{\"Dept\":\"Finance\",\"Environment\":\"Production\"}"
- },
- "properties": {}
- }
- ]
-}
-```
-
-# [Bicep](#tab/bicep)
-
-```Bicep
-param location string = resourceGroup().location
-
-resource stgAccount 'Microsoft.Storage/storageAccounts@2021-04-01' = {
- name: 'storage${uniqueString(resourceGroup().id)}'
- location: location
- sku: {
- name: 'Standard_LRS'
- }
- kind: 'Storage'
- tags: {
- CostCenter: '{"Dept":"Finance","Environment":"Production"}'
- }
-}
-```
---
-### Apply tags from resource group
-
-To apply tags from a resource group to a resource, use the [resourceGroup()](../templates/template-functions-resource.md#resourcegroup) function. When you get the tag value, use the `tags[tag-name]` syntax instead of the `tags.tag-name` syntax, because some characters aren't parsed correctly in the dot notation.
-
-# [JSON](#tab/json)
-
-```json
-{
- "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
- "contentVersion": "1.0.0.0",
- "parameters": {
- "location": {
- "type": "string",
- "defaultValue": "[resourceGroup().location]"
- }
- },
- "resources": [
- {
- "type": "Microsoft.Storage/storageAccounts",
- "apiVersion": "2021-04-01",
- "name": "[concat('storage', uniqueString(resourceGroup().id))]",
- "location": "[parameters('location')]",
- "sku": {
- "name": "Standard_LRS"
- },
- "kind": "Storage",
- "tags": {
- "Dept": "[resourceGroup().tags['Dept']]",
- "Environment": "[resourceGroup().tags['Environment']]"
- },
- "properties": {}
- }
- ]
-}
-```
-
-# [Bicep](#tab/bicep)
-
-```Bicep
-param location string = resourceGroup().location
-
-resource stgAccount 'Microsoft.Storage/storageAccounts@2021-04-01' = {
- name: 'storage${uniqueString(resourceGroup().id)}'
- location: location
- sku: {
- name: 'Standard_LRS'
- }
- kind: 'Storage'
- tags: {
- Dept: resourceGroup().tags['Dept']
- Environment: resourceGroup().tags['Environment']
- }
-}
-```
---
-### Apply tags to resource groups or subscriptions
-
-You can add tags to a resource group or subscription by deploying the `Microsoft.Resources/tags` resource type. You can apply the tags to the target resource group or subscription you want to deploy. Each time you deploy the template you replace any previous tags.
-
-# [JSON](#tab/json)
-
-```json
-{
- "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
- "contentVersion": "1.0.0.0",
- "parameters": {
- "tagName": {
- "type": "string",
- "defaultValue": "TeamName"
- },
- "tagValue": {
- "type": "string",
- "defaultValue": "AppTeam1"
- }
- },
- "resources": [
- {
- "type": "Microsoft.Resources/tags",
- "name": "default",
- "apiVersion": "2021-04-01",
- "properties": {
- "tags": {
- "[parameters('tagName')]": "[parameters('tagValue')]"
- }
- }
- }
- ]
-}
-```
-
-# [Bicep](#tab/bicep)
-
-```Bicep
-param tagName string = 'TeamName'
-param tagValue string = 'AppTeam1'
-
-resource applyTags 'Microsoft.Resources/tags@2021-04-01' = {
- name: 'default'
- properties: {
- tags: {
- '${tagName}': tagValue
- }
- }
-}
-```
---
-To apply the tags to a resource group, use either Azure PowerShell or Azure CLI. Deploy to the resource group that you want to tag.
-
-```azurepowershell-interactive
-New-AzResourceGroupDeployment -ResourceGroupName exampleGroup -TemplateFile https://raw.githubusercontent.com/Azure/azure-docs-json-samples/master/azure-resource-manager/tags.json
-```
-
-```azurecli-interactive
-az deployment group create --resource-group exampleGroup --template-uri https://raw.githubusercontent.com/Azure/azure-docs-json-samples/master/azure-resource-manager/tags.json
-```
-
-To apply the tags to a subscription, use either PowerShell or Azure CLI. Deploy to the subscription that you want to tag.
-
-```azurepowershell-interactive
-New-AzSubscriptionDeployment -name tagresourcegroup -Location westus2 -TemplateUri https://raw.githubusercontent.com/Azure/azure-docs-json-samples/master/azure-resource-manager/tags.json
-```
-
-```azurecli-interactive
-az deployment sub create --name tagresourcegroup --location westus2 --template-uri https://raw.githubusercontent.com/Azure/azure-docs-json-samples/master/azure-resource-manager/tags.json
-```
-
-For more information about subscription deployments, see [Create resource groups and resources at the subscription level](../templates/deploy-to-subscription.md).
-
-The following template adds the tags from an object to either a resource group or subscription.
-
-# [JSON](#tab/json)
-
-```json
-{
- "$schema": "https://schema.management.azure.com/schemas/2018-05-01/subscriptionDeploymentTemplate.json#",
- "contentVersion": "1.0.0.0",
- "parameters": {
- "tags": {
- "type": "object",
- "defaultValue": {
- "TeamName": "AppTeam1",
- "Dept": "Finance",
- "Environment": "Production"
- }
- }
- },
- "resources": [
- {
- "type": "Microsoft.Resources/tags",
- "apiVersion": "2021-04-01",
- "name": "default",
- "properties": {
- "tags": "[parameters('tags')]"
- }
- }
- ]
-}
-```
-
-# [Bicep](#tab/bicep)
-
-```Bicep
-targetScope = 'subscription'
-
-param tagObject object = {
- TeamName: 'AppTeam1'
- Dept: 'Finance'
- Environment: 'Production'
-}
-
-resource applyTags 'Microsoft.Resources/tags@2021-04-01' = {
- name: 'default'
- properties: {
- tags: tagObject
- }
-}
-```
---
-## Portal
--
-## REST API
-
-To work with tags through the Azure REST API, use:
-
-* [Tags - Create Or Update At Scope](/rest/api/resources/tags/createorupdateatscope) (PUT operation)
-* [Tags - Update At Scope](/rest/api/resources/tags/updateatscope) (PATCH operation)
-* [Tags - Get At Scope](/rest/api/resources/tags/getatscope) (GET operation)
-* [Tags - Delete At Scope](/rest/api/resources/tags/deleteatscope) (DELETE operation)
-
-## SDKs
-
-For examples of applying tags with SDKs, see:
-
-* [.NET](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/resourcemanager/Azure.ResourceManager/samples/Sample2_ManagingResourceGroups.md)
-* [Java](https://github.com/Azure-Samples/resources-java-manage-resource-group/blob/master/src/main/java/com/azure/resourcemanager/resources/samples/ManageResourceGroup.java)
-* [JavaScript](https://github.com/Azure-Samples/azure-sdk-for-js-samples/blob/main/samples/resources/resources_example.ts)
-* [Python](https://github.com/MicrosoftDocs/samples/tree/main/Azure-Samples/azure-samples-python-management/resources)
 ## Inherit tags

 Resources don't inherit the tags you apply to a resource group or a subscription. To apply tags from a subscription or resource group to the resources, see [Azure Policies - tags](tag-policies.md).
The following limitations apply to tags:
* Not all resource types support tags. To determine if you can apply a tag to a resource type, see [Tag support for Azure resources](tag-support.md).
* For recommendations on how to implement a tagging strategy, see [Resource naming and tagging decision guide](/azure/cloud-adoption-framework/decision-guides/resource-tagging/?toc=/azure/azure-resource-manager/management/toc.json).
+* For steps on how to work with tags, see:
+
+ * [Portal](tag-resources-portal.md)
+ * [Azure CLI](tag-resources-cli.md)
+ * [Azure PowerShell](tag-resources-powershell.md)
+ * [Python](tag-resources-python.md)
+ * [ARM templates](tag-resources-templates.md)
+ * [Bicep](tag-resources-bicep.md)
azure-resource-manager Resource Declaration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/resource-declaration.md
For more information, see [Set resource location in ARM template](resource-locat
## Set tags
-You can apply tags to a resource during deployment. Tags help you logically organize your deployed resources. For examples of the different ways you can specify the tags, see [ARM template tags](../management/tag-resources.md#arm-templates).
+You can apply tags to a resource during deployment. Tags help you logically organize your deployed resources. For examples of the different ways you can specify the tags, see [ARM template tags](../management/tag-resources-templates.md).
## Set resource-specific properties
azure-resource-manager Template Functions Scope https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/template-functions-scope.md
A common use of the resourceGroup function is to create resources in the same lo
} ```
-You can also use the `resourceGroup` function to apply tags from the resource group to a resource. For more information, see [Apply tags from resource group](../management/tag-resources.md#apply-tags-from-resource-group).
+You can also use the `resourceGroup` function to apply tags from the resource group to a resource. For more information, see [Apply tags from resource group](../management/tag-resources-templates.md#apply-tags-from-resource-group).
When using nested templates to deploy to multiple resource groups, you can specify the scope for evaluating the `resourceGroup` function. For more information, see [Deploy Azure resources to more than one subscription or resource group](./deploy-to-resource-group.md).
azure-sql-edge Data Retention Cleanup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql-edge/data-retention-cleanup.md
Data Retention can be enabled on the database and any of the underlying tables individually, allowing users to create flexible aging policies for their tables and databases. Applying data retention is simple: it requires only one parameter to be set during table creation or as part of an alter table operation.
-After data retention policy is defiend for a database and the underlying table, a background time timer task runs to remove any obsolete records from the table enabled for data retention. Identification of matching rows and their removal from the table occur transparently, in the background task that is scheduled and run by the system. Age condition for the table rows is checked based on the column used as the `filter_column` in the table definition. If retention period, for example, is set to one week, table rows eligible for cleanup satisfy either of the following condition:
+After a data retention policy is defined for a database and the underlying table, a background timer task runs to remove any obsolete records from the table enabled for data retention. Identification of matching rows and their removal from the table occur transparently, in the background task that is scheduled and run by the system. The age condition for the table rows is checked based on the column used as the `filter_column` in the table definition. For example, if the retention period is set to one week, table rows eligible for cleanup satisfy one of the following conditions:
- If the filter column uses the DATETIMEOFFSET data type, the condition is `filter_column < DATEADD(WEEK, -1, SYSUTCDATETIME())` - Otherwise, the condition is `filter_column < DATEADD(WEEK, -1, SYSDATETIME())`
The data retention cleanup operation comprises two phases.
- Discovery Phase - In this phase, the cleanup operation identifies all the tables within the user databases to build a list for cleanup. Discovery runs once a day. - Cleanup Phase - In this phase, cleanup is run against all tables with finite data retention, identified in the discovery phase. If the cleanup operation cannot be performed on a table, then that table is skipped in the current run and will be retried in the next iteration. The following principles are used during cleanup: - If an obsolete row is locked by another transaction, that row is skipped.
- - Clean up runs with a default 5 seconds lock timeout setting. If the locks cannot be acquired on the tables within the timeout window, the table is skipped in the current run and will be retried in the next iteration.
+ - Cleanup runs with a default 5-second lock timeout setting. If the locks cannot be acquired on the tables within the timeout window, the table is skipped in the current run and will be retried in the next iteration.
- If there is an error during cleanup of a table, that table is skipped and will be picked up in the next iteration. ## Manual cleanup
Additionally, a new ring buffer type named `RING_BUFFER_DATA_RETENTION_CLEANUP`
## Next Steps - [Data Retention Policy](data-retention-overview.md)-- [Enable and Disable Data Retention Policies](data-retention-enable-disable.md)
+- [Enable and Disable Data Retention Policies](data-retention-enable-disable.md)
azure-vmware Azure Vmware Solution Known Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/azure-vmware-solution-known-issues.md
description: This article provides details about the known issues of Azure VMwar
Previously updated : 4/6/2023 Last updated : 4/20/2023 # Known issues: Azure VMware Solution
Refer to the table below to find details about resolution dates or possible work
| :- | : | :- | :- | | [VMSA-2021-002 ESXiArgs](https://www.vmware.com/security/advisories/VMSA-2021-0002.html) OpenSLP vulnerability publicized in February 2023 | 2021 | [Disable OpenSLP service](https://kb.vmware.com/s/article/76372) | February 2021 - Resolved in [ESXi 7.0 U3c](concepts-private-clouds-clusters.md#vmware-software-versions) | | After my private cloud NSX-T Data Center upgrade to version [3.2.2](https://docs.vmware.com/en/VMware-NSX/3.2.2/rn/vmware-nsxt-data-center-322-release-notes/https://docsupdatetracker.net/index.html), the NSX-T Manager **DNS - Forwarder Upstream Server Timeout** alarm is raised | February 2023 | [Enable private cloud internet Access](concepts-design-public-internet-access.md), alarm is raised because NSX-T Manager cannot access the configured CloudFlare DNS server. Otherwise, [change the default DNS zone to point to a valid and reachable DNS server.](configure-dns-azure-vmware-solution.md) | February 2023 |
-| When first logging into the vSphere Client, the **Cluster-n: vSAN health alarms are suppressed** alert is active | 2021 | This should be considered an informational message, since Microsoft manages the service. Select the **Reset to Green** link to clear it. | 2021 |
+| When first logging into the vSphere Client, the **Cluster-n: vSAN health alarms are suppressed** alert is active in the vSphere Client | 2021 | This should be considered an informational message, since Microsoft manages the service. Select the **Reset to Green** link to clear it. | 2021 |
| When adding a cluster to my private cloud, the **Cluster-n: vSAN physical disk alarm 'Operation'** and **Cluster-n: vSAN cluster alarm 'vSAN Cluster Configuration Consistency'** alerts are active in the vSphere Client | 2021 | This should be considered an informational message, since Microsoft manages the service. Select the **Reset to Green** link to clear it. | 2021 | In this article, you learned about the current known issues with the Azure VMware Solution.
azure-vmware Azure Vmware Solution Platform Updates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/azure-vmware-solution-platform-updates.md
description: Learn about the platform updates to Azure VMware Solution.
Previously updated : 3/16/2023 Last updated : 4/20/2023 # What's new in Azure VMware Solution
Microsoft will regularly apply important updates to the Azure VMware Solution fo
## April 2023
-**HCX Run commands**
+**VMware HCX Run Commands**
-Introducing run commands for HCX on Azure VMware solutions. You can use these run commands to restart HCX cloud manager in your Azure VMware solution private cloud. Additionally, you can also scale HCX cloud manager using run commands. To learn how to use run commands for HCX, see [Use HCX Run commands](use-hcx-run-commands.md).
+Introducing Run Commands for VMware HCX on Azure VMware Solution. You can use these Run Commands to restart VMware HCX Cloud Manager in your Azure VMware Solution private cloud. You can also scale VMware HCX Cloud Manager by using Run Commands. To learn how to use Run Commands for VMware HCX, see [Use VMware HCX Run commands](use-hcx-run-commands.md).
## February 2023
The data in Azure Log Analytics offer insights into issues by searching using Ku
**New SKU availability - AV36P and AV52 nodes**
-The AV36P is now available in the West US Region.ΓÇ» This node size is used for memory and storage workloads by offering increased Memory and NVME based SSDs.ΓÇ»
+The AV36P is now available in the West US Region. This node size is used for memory and storage workloads, offering increased memory and NVMe-based SSDs.
AV52 is now available in the East US 2 Region. This node size is used for intensive workloads with a higher physical core count, additional memory, and larger capacity NVMe-based SSDs. **Customer-managed keys using Azure Key Vault**
-You can use customer-managed keys to bring and manage your master encryption keys to encrypt van. Azure Key Vault allows you to store your privately managed keys securely to access your Azure VMware Solution data.
+You can use customer-managed keys to bring and manage your master encryption keys to encrypt vSAN. Azure Key Vault allows you to store your privately managed keys securely to access your Azure VMware Solution data.
**Azure NetApp Files - more storage options available**
For pricing and region availability, see the [Azure VMware Solution pricing page
## July 2022
-HCX cloud manager in Azure VMware Solution can now be accessible over a public IP address. You can pair HCX sites and create a service mesh from on-premises to Azure VMware Solution private cloud using Public IP.
-HCX with public IP is especially useful in cases where On-premises sites aren't connected to Azure via Express Route or VPN. HCX service mesh appliances can be configured with public IPs to avoid lower tunnel MTUs due to double encapsulation if a VPN is used for on-premises to cloud connections. For more information, please see [Enable HCX over the internet](./enable-hcx-access-over-internet.md)
+VMware HCX Cloud Manager in Azure VMware Solution can now be accessible over a public IP address. You can pair VMware HCX sites and create a service mesh from on-premises to Azure VMware Solution private cloud using Public IP.
+
+VMware HCX with public IP is especially useful in cases where on-premises sites aren't connected to Azure via ExpressRoute or VPN. VMware HCX service mesh appliances can be configured with public IPs to avoid lower tunnel MTUs due to double encapsulation if a VPN is used for on-premises to cloud connections. For more information, see [Enable VMware HCX over the internet](./enable-hcx-access-over-internet.md).
All new Azure VMware Solution private clouds are now deployed with VMware vCenter Server version 7.0 Update 3c and ESXi version 7.0 Update 3c.
Any existing private clouds in the above mentioned regions will also be upgraded
## May 2022
-All new Azure VMware Solution private clouds in regions (Germany West Central, Australia East, Central US and UK West), are now deployed with VMware vCenter Server version 7.0 Update 3c and ESXi version 7.0 Update 3c.
+All new Azure VMware Solution private clouds in regions (Germany West Central, Australia East, Central US and UK West), are now deployed with VMware vCenter Server version 7.0 Update 3c and ESXi version 7.0 Update 3c.
+ Any existing private clouds in the previously mentioned regions will be upgraded to those versions. For more information, please see [VMware ESXi 7.0 Update 3c Release Notes](https://docs.vmware.com/en/VMware-vSphere/7.0/rn/vsphere-esxi-70u3c-release-notes.html) and [VMware vCenter Server 7.0 Update 3c Release Notes](https://docs.vmware.com/en/VMware-vSphere/7.0/rn/vsphere-vcenter-server-70u3c-release-notes.html). You'll receive a notification through Azure Service Health that includes the timeline of the upgrade. You can reschedule an upgrade as needed. This notification also provides details on the upgraded component, its effect on workloads, private cloud access, and other Azure services.
No further action is required.
## December 2021
-Azure VMware Solution (AVS) has completed maintenance activities to address critical vulnerabilities in Apache Log4j. The fixes documented in the VMware security advisory [VMSA-2021-0028.6](https://www.vmware.com/security/advisories/VMSA-2021-0028.html) to address CVE-2021-44228 and CVE-2021-45046 have been applied to these AVS managed VMware products: vCenter Server, NSX-T Data Center, SRM and HCX. We strongly encourage customers to apply the fixes to on-premises HCX connector appliances.
+Azure VMware Solution has completed maintenance activities to address critical vulnerabilities in Apache Log4j. The fixes documented in the VMware security advisory [VMSA-2021-0028.6](https://www.vmware.com/security/advisories/VMSA-2021-0028.html) to address CVE-2021-44228 and CVE-2021-45046 have been applied to these Azure VMware Solution managed VMware products: vCenter Server, NSX-T Data Center, SRM and HCX. We strongly encourage customers to apply the fixes to on-premises HCX connector appliances.
- We also recommend customers to review the security advisory and apply the fixes for other affected VMware products or workloads.
+We also recommend customers to review the security advisory and apply the fixes for other affected VMware products or workloads.
- If you need any assistance or have questions, [contact us](https://portal.azure.com/#home).
+If you need any assistance or have questions, [contact us](https://portal.azure.com/#home).
-VMware has announced a security advisory [VMSA-2021-0028](https://www.vmware.com/security/advisories/VMSA-2021-0028.html), addressing a critical vulnerability in Apache Log4j identified by CVE-2021-44228. Azure VMware Solution is actively monitoring this issue. We're addressing this issue by applying VMware recommended workarounds or patches for AVS managed VMware components as they become available.
+VMware has announced a security advisory [VMSA-2021-0028](https://www.vmware.com/security/advisories/VMSA-2021-0028.html), addressing a critical vulnerability in Apache Log4j identified by CVE-2021-44228. Azure VMware Solution is actively monitoring this issue. We're addressing this issue by applying VMware recommended workarounds or patches for Azure VMware Solution managed VMware components as they become available.
- Note that you may experience intermittent connectivity to these components when we apply a fix. We strongly recommend that you read the advisory and patch or apply the recommended workarounds for other VMware products you may have deployed in Azure VMware Solution. If you need any assistance or have questions, [contact us](https://portal.azure.com).
+Note that you may experience intermittent connectivity to these components when we apply a fix. We strongly recommend that you read the advisory and patch or apply the recommended workarounds for other VMware products you may have deployed in Azure VMware Solution. If you need any assistance or have questions, [contact us](https://portal.azure.com).
## November 2021
No further action is required.
Per VMware security advisory [VMSA-2021-0020](https://www.vmware.com/security/advisories/VMSA-2021-0020.html), multiple vulnerabilities in the VMware vCenter Server have been reported to VMware. To address the vulnerabilities (CVE-2021-21991, CVE-2021-21992, CVE-2021-21993, CVE-2021-22005, CVE-2021-22006, CVE-2021-22007, CVE-2021-22008, CVE-2021-22009, CVE-2021-22010, CVE-2021-22011, CVE-2021-22012,CVE-2021-22013, CVE-2021-22014, CVE-2021-22015, CVE-2021-22016, CVE-2021-22017, CVE-2021-22018, CVE-2021-22019, CVE-2021-22020) reported in VMware security advisory [VMSA-2021-0020](https://www.vmware.com/security/advisories/VMSA-2021-0020.html), vCenter Server has been updated to 6.7 Update 3o in all Azure VMware Solution private clouds. All new Azure VMware Solution private clouds are deployed with vCenter Server version 6.7 Update 3o. For more information, see [VMware vCenter Server 6.7 Update 3o Release Notes](https://docs.vmware.com/en/VMware-vSphere/6.7/rn/vsphere-vcenter-server-67u3o-release-notes.html). No further action is required.
-All new Azure VMware Solution private clouds are now deployed with ESXi version ESXi670-202103001 (Build number: 17700523). ESXi hosts in existing private clouds have been patched to this version. For more information on this ESXi version, see [VMware ESXi 6.7, Patch Release ESXi670-202103001](https://docs.vmware.com/en/VMware-vSphere/6.7/rn/esxi670-202103001.html).
+All new Azure VMware Solution private clouds are now deployed with ESXi version ESXi670-202103001 (Build number: 17700523). ESXi hosts in existing private clouds have been patched to this version. For more information on this ESXi version, see [VMware ESXi 6.7, Patch Release ESXi670-202103001](https://docs.vmware.com/en/VMware-vSphere/6.7/rn/esxi670-202103001.html).
## July 2021
-All new Azure VMware Solution private clouds are now deployed with NSX-T Data Center version [!INCLUDE [nsxt-version](includes/nsxt-version.md)]. NSX-T Data Center version in existing private clouds will be upgraded through September 2021 to NSX-T Data Center [!INCLUDE [nsxt-version](includes/nsxt-version.md)] release.
+All new Azure VMware Solution private clouds are now deployed with NSX-T Data Center version 3.1.1. NSX-T Data Center version in existing private clouds will be upgraded through September 2021 to NSX-T Data Center 3.1.1 release.
You'll receive an email with the planned maintenance date and time. You can reschedule an upgrade. The email also provides details on the upgraded component, its effect on workloads, private cloud access, and other Azure services.
azure-vmware Use Hcx Run Commands https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/use-hcx-run-commands.md
Title: Use HCX Run Commands
-description: Use HCX Run Commands in Azure VMware Solution
+ Title: Use VMware HCX Run Commands
+description: Use VMware HCX Run Commands in Azure VMware Solution
Previously updated : 04/11/2023 Last updated : 04/20/2023
-# Use HCX Run Commands
-In this article, you learn how to use HCX run commands. Use run commands to perform operations that would normally require elevated privileges through a collection of PowerShell cmdlets. This document outlines the available HCX run commands and how to use them.
+# Use VMware HCX Run Commands
+In this article, you learn how to use VMware HCX Run Commands, which let you perform operations that would normally require elevated privileges through a collection of PowerShell cmdlets. This article outlines the available VMware HCX Run Commands and how to use them.
-This article describes two HCX commands: **Restart HCX Manager** and **Scale HCX Manager**.
+This article describes two VMware HCX commands: **Restart HCX Manager** and **Scale HCX Manager**.
-## Restart HCX Manager
+## Restart VMware HCX Manager
-This Command checks for active HCX migrations and replications. If none are found, it restarts the HCX cloud manager (HCX VM's guest OS).
+This command checks for active VMware HCX migrations and replications. If none are found, it restarts the VMware HCX Cloud Manager (the VMware HCX VM's guest OS).
-1. Navigate to the run Command panel in an Azure VMware private cloud on the Azure portal.
+1. Navigate to the Run Command panel in an Azure VMware Solution private cloud on the Azure portal.
:::image type="content" source="media/hcx-commands/run-command-private-cloud.png" alt-text="Diagram that lists all available Run command packages and Run commands." border="false" lightbox="media/hcx-commands/run-command-private-cloud.png":::
Optional run command parameters.
**Force Parameter** - If there are any active HCX migrations or replications, this parameter skips the check for active HCX migrations and replications. If the virtual machine is in a powered-off state, this parameter powers the machine on. **Scenario 1**: A customer has a migration that has been stuck in an active state for weeks and needs a restart of HCX for a separate issue. Without this parameter, the script fails because it detects the active migration.
- **Scenario 2**: The HCX Manager is powered off and the customer would like to power it back on.
+ **Scenario 2**: The VMware HCX Cloud Manager is powered off and the customer would like to power it back on.
:::image type="content" source="media/hcx-commands/restart-command.png" alt-text="Diagram that shows run command parameters for Restart-HcxManager command." border="false" lightbox="media/hcx-commands/restart-command.png":::
-1. Wait for command to finish. It may take few minutes for the HCX appliance to come online.
+1. Wait for the command to finish. It may take a few minutes for the VMware HCX appliance to come online.
-## Scale HCX manager
-Use the Scale HCX manager run command to increase the resource allocation of your HCX Manager virtual machine to 8 vCPUs and 24-GB RAM from the default setting of 4 vCPUs and 12-GB RAM, ensuring scalability.
+## Scale VMware HCX Cloud Manager
+Use the Scale VMware HCX Cloud Manager Run Command to increase the resource allocation of your VMware HCX Cloud Manager virtual machine to 8 vCPUs and 24-GB RAM from the default setting of 4 vCPUs and 12-GB RAM, ensuring scalability.
-**Scenario**: Mobility Optimize Networking (MON) requires HCX Scalability. For more details on [MON scaling](https://kb.vmware.com/s/article/88401)ΓÇ»
+**Scenario**: Mobility Optimized Networking (MON) requires VMware HCX scalability. For more information, see [MON scaling](https://kb.vmware.com/s/article/88401).
>[!NOTE]
-> HCX cloud manager will be rebooted during this operation, and this may affect any ongoing migration processes.
+> VMware HCX Cloud Manager will be rebooted during this operation, and this may affect any ongoing migration processes.
-1. Navigate to the run Command panel on in an AVS private cloud on the Azure portal.
+1. Navigate to the Run Command panel in an Azure VMware Solution private cloud on the Azure portal.
1. Select the **Microsoft.AVS.Management** package dropdown menu and select the ``Set-HcxScaledCpuAndMemorySetting`` command. :::image type="content" source="media/hcx-commands/set-hcx-scale.png" alt-text="Diagram that shows run command parameters for Set-HcxScaledCpuAndMemorySetting command." border="false" lightbox="media/hcx-commands/set-hcx-scale.png":::
-1. Agree to restart HCX by toggling ``AgreeToRestartHCX`` to **True**.
+1. Agree to restart VMware HCX by toggling ``AgreeToRestartHCX`` to **True**.
You must acknowledge that the virtual machine will be restarted.
Use the Scale HCX manager run command to increase the resource allocation of you
This process may take 10 to 15 minutes. >[!NOTE]
- > HCX cloud manager will be unavailable during the scaling.
+ > VMware HCX Cloud Manager will be unavailable during the scaling.
## Next step
-To learn more about run commands, see [Run commands](concepts-run-command.md)
+To learn more about Run Commands, see [Run Commands](concepts-run-command.md)
batch Quick Create Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/quick-create-cli.md
Title: Quickstart - Run your first Batch job with the Azure CLI
-description: This quickstart shows how to create a Batch account and run a Batch job with the Azure CLI.
+ Title: 'Quickstart: Use the Azure CLI to create a Batch account and run a job'
+description: Follow this quickstart to use the Azure CLI to create a Batch account, a pool of compute nodes, and a job that runs basic tasks on the pool.
Previously updated : 05/25/2021 Last updated : 04/12/2023
-# Quickstart: Run your first Batch job with the Azure CLI
+# Quickstart: Use the Azure CLI to create a Batch account and run a job
-Get started with Azure Batch by using the Azure CLI to create a Batch account, a pool of compute nodes (virtual machines), and a job that runs tasks on the pool. Each sample task runs a basic command on one of the pool nodes.
+This quickstart shows you how to get started with Azure Batch by using Azure CLI commands and scripts to create and manage Batch resources. You create a Batch account that has a pool of virtual machines, or compute nodes. You then create and run a job with tasks that run on the pool nodes.
-The Azure CLI is used to create and manage Azure resources from the command line or in scripts. After completing this quickstart, you will understand the key concepts of the Batch service and be ready to try Batch with more realistic workloads at larger scale.
+After you complete this quickstart, you understand the [key concepts of the Batch service](batch-service-workflow-features.md) and are ready to use Batch with more realistic, larger scale workloads.
+## Prerequisites
+- [!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)]
-- This quickstart requires version 2.0.20 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.
+- Azure Cloud Shell or Azure CLI.
-## Create a resource group
+ You can run the Azure CLI commands in this quickstart interactively in Azure Cloud Shell. To run the commands in the Cloud Shell, select **Open Cloudshell** at the upper-right corner of a code block. Select **Copy** to copy the code, and paste it into Cloud Shell to run it. You can also [run Cloud Shell from within the Azure portal](https://shell.azure.com). Cloud Shell always uses the latest version of the Azure CLI.
+
+ Alternatively, you can [install Azure CLI locally](/cli/azure/install-azure-cli) to run the commands. The steps in this article require Azure CLI version 2.0.20 or later. Run [az version](/cli/azure/reference-index?#az-version) to see your installed version and dependent libraries, and run [az upgrade](/cli/azure/reference-index?#az-upgrade) to upgrade. If you use a local installation, sign in to Azure by using the [az login](/cli/azure/reference-index#az-login) command.
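For example, a quick local check might look like this; none of these commands are specific to Batch:

```azurecli
az version   # show the installed Azure CLI version and extensions
az upgrade   # upgrade the Azure CLI if a newer version is available
az login     # sign in before running the remaining commands in this quickstart
```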
-Create a resource group with the [az group create](/cli/azure/group#az-group-create) command. An Azure resource group is a logical container into which Azure resources are deployed and managed.
+>[!NOTE]
+>For some regions and subscription types, quota restrictions might cause Batch account or node creation to fail or not complete. In this situation, you can request a quota increase at no charge. For more information, see [Batch service quotas and limits](batch-quota-limit.md).
+
+## Create a resource group
-The following example creates a resource group named *QuickstartBatch-rg* in the *eastus2* location.
+Run the following [az group create](/cli/azure/group#az-group-create) command to create an Azure resource group named `qsBatch` in the `eastus2` Azure region. The resource group is a logical container that holds the Azure resources for this quickstart.
```azurecli-interactive az group create \
- --name QuickstartBatch-rg \
+ --name qsBatch \
--location eastus2 ``` ## Create a storage account
-You can link an Azure Storage account with your Batch account. Although not required for this quickstart, the storage account is useful to deploy applications and store input and output data for most real-world workloads. Create a storage account in your resource group with the [az storage account create](/cli/azure/storage/account#az-storage-account-create) command.
+Use the [az storage account create](/cli/azure/storage/account#az-storage-account-create) command to create an Azure Storage account to link to your Batch account. Although this quickstart doesn't use the storage account, most real-world Batch workloads use a linked storage account to deploy applications and store input and output data.
+
+Run the following command to create a Standard_LRS SKU storage account named `mybatchstorage` in your resource group:
```azurecli-interactive az storage account create \
- --resource-group QuickstartBatch-rg \
- --name mystorageaccount \
+ --resource-group qsBatch \
+ --name mybatchstorage \
--location eastus2 \ --sku Standard_LRS ``` ## Create a Batch account
-Create a Batch account with the [az batch account create](/cli/azure/batch/account#az-batch-account-create) command. You need an account to create compute resources (pools of compute nodes) and Batch jobs.
-
-The following example creates a Batch account named *mybatchaccount* in *QuickstartBatch-rg*, and links the storage account you created.
+Run the following [az batch account create](/cli/azure/batch/account#az-batch-account-create) command to create a Batch account named `mybatchaccount` in your resource group and link it with the `mybatchstorage` storage account.
```azurecli-interactive az batch account create \ --name mybatchaccount \
- --storage-account mystorageaccount \
- --resource-group QuickstartBatch-rg \
+ --storage-account mybatchstorage \
+ --resource-group qsBatch \
--location eastus2 ```
-To create and manage compute pools and jobs, you need to authenticate with Batch. Log in to the account with the [az batch account login](/cli/azure/batch/account#az-batch-account-login) command. After you log in, your `az batch` commands use this account context.
+Sign in to the new Batch account by running the [az batch account login](/cli/azure/batch/account#az-batch-account-login) command. Once you authenticate your account with Batch, subsequent `az batch` commands in this session use this account context.
```azurecli-interactive az batch account login \ --name mybatchaccount \
- --resource-group QuickstartBatch-rg \
+ --resource-group qsBatch \
--shared-key-auth ``` ## Create a pool of compute nodes
-Now that you have a Batch account, create a sample pool of Linux compute nodes using the [az batch pool create](/cli/azure/batch/pool#az-batch-pool-create) command. The following example creates a pool named *mypool* of two *Standard_A1_v2* nodes running Ubuntu 18.04 LTS. The suggested node size offers a good balance of performance versus cost for this quick example.
+Run the [az batch pool create](/cli/azure/batch/pool#az-batch-pool-create) command to create a pool of Linux compute nodes in your Batch account. The following example creates a pool named `myPool` that consists of two Standard_A1_v2 size VMs running Ubuntu 20.04 LTS OS. This node size offers a good balance of performance versus cost for this quickstart example.
```azurecli-interactive az batch pool create \
- --id mypool --vm-size Standard_A1_v2 \
+ --id myPool \
+ --image canonical:0001-com-ubuntu-server-focal:20_04-lts \
+ --node-agent-sku-id "batch.node.ubuntu 20.04" \
--target-dedicated-nodes 2 \
- --image canonical:ubuntuserver:18.04-LTS \
- --node-agent-sku-id "batch.node.ubuntu 18.04"
+ --vm-size Standard_A1_v2
```
-Batch creates the pool immediately, but it takes a few minutes to allocate and start the compute nodes. During this time, the pool is in the `resizing` state. To see the status of the pool, run the [az batch pool show](/cli/azure/batch/pool#az-batch-pool-show) command. This command shows all the properties of the pool, and you can query for specific properties. The following command gets the allocation state of the pool:
+Batch creates the pool immediately, but takes a few minutes to allocate and start the compute nodes. To see the pool status, use the [az batch pool show](/cli/azure/batch/pool#az-batch-pool-show) command. This command shows all the properties of the pool, and you can query for specific properties. The following command queries for the pool allocation state:
```azurecli-interactive
-az batch pool show --pool-id mypool \
+az batch pool show --pool-id myPool \
--query "allocationState" ```
-Continue the following steps to create a job and tasks while the pool state is changing. The pool is ready to run tasks when the allocation state is `steady` and all the nodes are running.
+While Batch allocates and starts the nodes, the pool is in the `resizing` state. You can create a job and tasks while the pool state is still `resizing`. The pool is ready to run tasks when the allocation state is `steady` and all the nodes are running.
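If you also want to watch the individual nodes start, one optional way (a sketch, not a required step) is to query their states:

```azurecli
az batch node list \
    --pool-id myPool \
    --query "[].{id:id, state:state}" \
    --output table
```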
## Create a job
-Now that you have a pool, create a job to run on it. A Batch job is a logical group for one or more tasks. A job includes settings common to the tasks, such as priority and the pool to run tasks on. Create a Batch job by using the [az batch job create](/cli/azure/batch/job#az-batch-job-create) command. The following example creates a job *myjob* on the pool *mypool*. Initially the job has no tasks.
+Use the [az batch job create](/cli/azure/batch/job#az-batch-job-create) command to create a Batch job to run on your pool. A Batch job is a logical group of one or more tasks. The job includes settings common to the tasks, such as the pool to run on. The following example creates a job called `myJob` on `myPool` that initially has no tasks.
```azurecli-interactive az batch job create \
- --id myjob \
- --pool-id mypool
+ --id myJob \
+ --pool-id myPool
```
-## Create tasks
+## Create job tasks
-Now use the [az batch task create](/cli/azure/batch/task#az-batch-task-create) command to create some tasks to run in the job. In this example, you create four identical tasks. Each task runs a `command-line` to display the Batch environment variables on a compute node, and then waits 90 seconds. When you use Batch, this command line is where you specify your app or script. Batch provides several ways to deploy apps and scripts to compute nodes.
+Batch provides several ways to deploy apps and scripts to compute nodes. Use the [az batch task create](/cli/azure/batch/task#az-batch-task-create) command to create tasks to run in the job. Each task has a command line that specifies an app or script.
-The following Bash script creates four parallel tasks (*mytask1* to *mytask4*).
+The following Bash script creates four identical, parallel tasks called `myTask1` through `myTask4`. The task command line displays the Batch environment variables on the compute node, and then waits 90 seconds.
```azurecli-interactive for i in {1..4} do az batch task create \
- --task-id mytask$i \
- --job-id myjob \
+ --task-id myTask$i \
+ --job-id myJob \
--command-line "/bin/bash -c 'printenv | grep AZ_BATCH; sleep 90s'" done ```
-The command output shows settings for each of the tasks. Batch distributes the tasks to the compute nodes.
+The command output shows the settings for each task. Batch distributes the tasks to the compute nodes.
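As an optional check, you can list the tasks and their current states; this sketch isn't required for the quickstart:

```azurecli
az batch task list \
    --job-id myJob \
    --query "[].{id:id, state:state}" \
    --output table
```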
## View task status
-After you create a task, Batch queues it to run on the pool. Once a node is available to run it, the task runs.
+After you create the task, Batch queues the task to run on the pool. Once a node is available, the task runs on the node.
-Use the [az batch task show](/cli/azure/batch/task#az-batch-task-show) command to view the status of the Batch tasks. The following example shows details about *mytask1* running on one of the pool nodes.
+Use the [az batch task show](/cli/azure/batch/task#az-batch-task-show) command to view the status of Batch tasks. The following example shows details about the status of `myTask1`:
```azurecli-interactive az batch task show \
- --job-id myjob \
- --task-id mytask1
+ --job-id myJob \
+ --task-id myTask1
```
-The command output includes many details, but take note of the `exitCode` of the task command line and the `nodeId`. An `exitCode` of 0 indicates that the task command line completed successfully. The `nodeId` indicates the ID of the pool node on which the task ran.
+The command output includes many details. For example, an `exitCode` of `0` indicates that the task command completed successfully. The `nodeId` shows the name of the pool node that ran the task.
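If you only want those two values rather than the full JSON, a JMESPath query trims the output. The property paths below (`executionInfo.exitCode`, `nodeInfo.nodeId`) follow the Batch task schema; treat this as a sketch and adjust if your output differs:

```azurecli
az batch task show \
    --job-id myJob \
    --task-id myTask1 \
    --query "{state:state, exitCode:executionInfo.exitCode, nodeId:nodeInfo.nodeId}"
```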
## View task output
-To list the files created by a task on a compute node, use the [az batch task file list](/cli/azure/batch/task) command. The following command lists the files created by *mytask1*:
+Use the [az batch task file list](/cli/azure/batch/task#az-batch-task-file-list) command to list the files a task created on a node. The following command lists the files that `myTask1` created:
```azurecli-interactive az batch task file list \
- --job-id myjob \
- --task-id mytask1 \
+ --job-id myJob \
+ --task-id myTask1 \
--output table ```
-Output is similar to the following:
+Results are similar to the following output:
-```
-Name URL Is Directory Content Length
-- -- -
-stdout.txt https://mybatchaccount.eastus2.batch.azure.com/jobs/myjob/tasks/mytask1/files/stdout.txt False 695
-certs https://mybatchaccount.eastus2.batch.azure.com/jobs/myjob/tasks/mytask1/files/certs True
-wd https://mybatchaccount.eastus2.batch.azure.com/jobs/myjob/tasks/mytask1/files/wd True
-stderr.txt https://mybatchaccount.eastus2.batch.azure.com/jobs/myjob/tasks/mytask1/files/stderr.txt False 0
+```output
+Name URL Is Directory Content Length
+- - -- -
+stdout.txt https://mybatchaccount.eastus2.batch.azure.com/jobs/myJob/tasks/myTask1/files/stdout.txt False 695
+certs https://mybatchaccount.eastus2.batch.azure.com/jobs/myJob/tasks/myTask1/files/certs True
+wd https://mybatchaccount.eastus2.batch.azure.com/jobs/myJob/tasks/myTask1/files/wd True
+stderr.txt https://mybatchaccount.eastus2.batch.azure.com/jobs/myJob/tasks/myTask1/files/stderr.txt False 0
```
-To download one of the output files to a local directory, use the [az batch task file download](/cli/azure/batch/task) command. In this example, task output is in `stdout.txt`.
+The [az batch task file download](/cli/azure/batch/task#az-batch-task-file-download) command downloads output files to a local directory. Run the following example to download the *stdout.txt* file:
```azurecli-interactive az batch task file download \
- --job-id myjob \
- --task-id mytask1 \
+ --job-id myJob \
+ --task-id myTask1 \
--file-path stdout.txt \ --destination ./stdout.txt ```
-You can view the contents of `stdout.txt` in a text editor. The contents show the Azure Batch environment variables that are set on the node. When you create your own Batch jobs, you can reference these environment variables in task command lines, and in the apps and scripts run by the command lines. For example:
+You can view the contents of the standard output file in a text editor. The following example shows a typical *stdout.txt* file. The standard output from this task shows the Azure Batch environment variables that are set on the node. You can refer to these environment variables in your Batch job task command lines, and in the apps and scripts the command lines run.
-```
-AZ_BATCH_TASK_DIR=/mnt/batch/tasks/workitems/myjob/job-1/mytask1
+```text
+AZ_BATCH_TASK_DIR=/mnt/batch/tasks/workitems/myJob/job-1/myTask1
AZ_BATCH_NODE_STARTUP_DIR=/mnt/batch/tasks/startup
-AZ_BATCH_CERTIFICATES_DIR=/mnt/batch/tasks/workitems/myjob/job-1/mytask1/certs
+AZ_BATCH_CERTIFICATES_DIR=/mnt/batch/tasks/workitems/myJob/job-1/myTask1/certs
AZ_BATCH_ACCOUNT_URL=https://mybatchaccount.eastus2.batch.azure.com/
-AZ_BATCH_TASK_WORKING_DIR=/mnt/batch/tasks/workitems/myjob/job-1/mytask1/wd
+AZ_BATCH_TASK_WORKING_DIR=/mnt/batch/tasks/workitems/myJob/job-1/myTask1/wd
AZ_BATCH_NODE_SHARED_DIR=/mnt/batch/tasks/shared AZ_BATCH_TASK_USER=_azbatch AZ_BATCH_NODE_ROOT_DIR=/mnt/batch/tasks
-AZ_BATCH_JOB_ID=myjobl
+AZ_BATCH_JOB_ID=myJob
AZ_BATCH_NODE_IS_DEDICATED=true AZ_BATCH_NODE_ID=tvm-257509324_2-20180703t215033z
-AZ_BATCH_POOL_ID=mypool
-AZ_BATCH_TASK_ID=mytask1
+AZ_BATCH_POOL_ID=myPool
+AZ_BATCH_TASK_ID=myTask1
AZ_BATCH_ACCOUNT_NAME=mybatchaccount AZ_BATCH_TASK_USER_IDENTITY=PoolNonAdmin ``` ## Clean up resources
-If you want to continue with Batch tutorials and samples, use the Batch account and linked storage account created in this quickstart. There is no charge for the Batch account itself.
+If you want to continue with Batch tutorials and samples, you can use the Batch account and linked storage account that you created in this quickstart. There's no charge for the Batch account itself.
-You are charged for pools while the nodes are running, even if no jobs are scheduled. When you no longer need a pool, delete it with the [az batch pool delete](/cli/azure/batch/pool#az-batch-pool-delete) command. When you delete the pool, all task output on the nodes is deleted.
+Pools and nodes incur charges while the nodes are running, even if they aren't running jobs. When you no longer need a pool, use the [az batch pool delete](/cli/azure/batch/pool#az-batch-pool-delete) command to delete it. Deleting a pool deletes all task output on the nodes, and the nodes themselves.
```azurecli-interactive
-az batch pool delete --pool-id mypool
+az batch pool delete --pool-id myPool
```
-When no longer needed, you can use the [az group delete](/cli/azure/group#az-group-delete) command to remove the resource group, Batch account, pools, and all related resources. Delete the resources as follows:
+When you no longer need any of the resources you created for this quickstart, you can use the [az group delete](/cli/azure/group#az-group-delete) command to delete the resource group and all its resources. To delete the resource group and the storage account, Batch account, node pools, and all related resources, run the following command:
```azurecli-interactive
-az group delete --name QuickstartBatch-rg
+az group delete --name qsBatch
``` ## Next steps
-In this quickstart, you created a Batch account, a Batch pool, and a Batch job. The job ran sample tasks, and you viewed output created on one of the nodes. Now that you understand the key concepts of the Batch service, you are ready to try Batch with more realistic workloads at larger scale. To learn more about Azure Batch, continue to the Azure Batch tutorials.
+In this quickstart, you created a Batch account and pool, created and ran a Batch job and tasks, and viewed task output from the nodes. Now that you understand the key concepts of the Batch service, you're ready to use Batch with more realistic, larger scale workloads. To learn more about Azure Batch, continue to the Azure Batch tutorials.
> [!div class="nextstepaction"]
-> [Azure Batch tutorials](./tutorial-parallel-dotnet.md)
+> [Tutorial: Run a parallel workload with Azure Batch](./tutorial-parallel-python.md)
batch Quick Create Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/quick-create-portal.md
Title: Azure Quickstart - Run your first Batch job in the Azure portal
-description: This quickstart shows how to use the Azure portal to create a Batch account, a pool of compute nodes, and a job that runs basic tasks on the pool.
Previously updated : 06/22/2022
+ Title: 'Quickstart: Use the Azure portal to create a Batch account and run a job'
+description: Follow this quickstart to use the Azure portal to create a Batch account, a pool of compute nodes, and a job that runs basic tasks on the pool.
Last updated : 04/13/2023
-# Quickstart: Run your first Batch job in the Azure portal
+# Quickstart: Use the Azure portal to create a Batch account and run a job
-Get started with Azure Batch by using the Azure portal to create a Batch account, a pool of compute nodes (virtual machines), and a job that runs tasks on the pool.
+This quickstart shows you how to get started with Azure Batch by using the Azure portal. You create a Batch account that has a pool of virtual machines (VMs), or compute nodes. You then create and run a job with tasks that run on the pool nodes.
-After completing this quickstart, you'll understand the [key concepts of the Batch service](batch-service-workflow-features.md) and be ready to try Batch with more realistic workloads at larger scale.
+After you complete this quickstart, you understand the [key concepts of the Batch service](batch-service-workflow-features.md) and are ready to use Batch with more realistic, larger scale workloads.
## Prerequisites -- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+- [!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)]
-## Create a Batch account
+>[!NOTE]
+>For some regions and subscription types, quota restrictions might cause Batch account or node creation to fail or not complete. In this situation, you can request a quota increase at no charge. For more information, see [Batch service quotas and limits](batch-quota-limit.md).
-Follow these steps to create a sample Batch account for test purposes. You need a Batch account to create pools and jobs. You can also link an Azure storage account with the Batch account. Although not required for this quickstart, the storage account is useful to deploy applications and store input and output data for most real-world workloads.
+<a name="create-a-batch-account"></a>
+## Create a Batch account and Azure Storage account
-1. In the [Azure portal](https://portal.azure.com), select **Create a resource**.
+You need a Batch account to create pools and jobs. The following steps create an example Batch account. You also create an Azure Storage account to link to your Batch account. Although this quickstart doesn't use the storage account, most real-world Batch workloads use a linked storage account to deploy applications and store input and output data.
-1. Type "batch service" in the search box, then select **Batch Service**.
+1. Sign in to the [Azure portal](https://portal.azure.com), and search for and select **batch accounts**.
- :::image type="content" source="media/quick-create-portal/marketplace-batch.png" alt-text="Screenshot of Batch Service in the Azure Marketplace.":::
+ :::image type="content" source="media/quick-create-portal/marketplace-batch.png" alt-text="Screenshot of selecting Batch accounts in the Azure portal.":::
-1. Select **Create**.
+1. On the **Batch accounts** page, select **Create**.
-1. In the **Resource group** field, select **Create new** and enter a name for your resource group.
+1. On the **New Batch account** page, enter or select the following values:
-1. Enter a value for **Account name**. This name must be unique within the Azure **Location** selected. It can contain only lowercase letters and numbers, and it must be between 3-24 characters.
+ - Under **Resource group**, select **Create new**, enter the name *qsBatch*, and then select **OK**. The resource group is a logical container that holds the Azure resources for this quickstart.
+ - For **Account name**, enter the name *mybatchaccount*. The Batch account name must be unique within the Azure region you select, can contain only lowercase letters and numbers, and must be between 3-24 characters.
+ - For **Location**, select **East US**.
+ - Under **Storage account**, select the link to **Select a storage account**.
-1. Optionally, under **Storage account**, you can specify a storage account. Click **Select a storage account**, then select an existing storage account or create a new one.
+ :::image type="content" source="media/quick-create-portal/new-batch-account.png" alt-text="Screenshot of the New Batch account page in the Azure portal.":::
-1. Leave the other settings as is. Select **Review + create**, then select **Create** to create the Batch account.
+1. On the **Create storage account** page, under **Name**, enter **mybatchstorage**. Leave the other settings at their defaults, and select **OK**.
-When the **Deployment succeeded** message appears, go to the Batch account that you created.
+1. Select **Review + create** at the bottom of the **New Batch account** page, and when validation passes, select **Create**.
+
+1. When the **Deployment succeeded** message appears, select **Go to resource** to go to the Batch account that you created.
## Create a pool of compute nodes
-Now that you have a Batch account, create a sample pool of Windows compute nodes for test purposes. The pool in this quickstart consists of two nodes running a Windows Server 2019 image from the Azure Marketplace.
+Next, create a pool of Windows compute nodes in your Batch account. The following steps create a pool that consists of two Standard_A1_v2 size VMs running Windows Server 2019. This node size offers a good balance of performance versus cost for this quickstart.
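The portal steps follow below. If you'd rather script the same pool, a rough Azure CLI equivalent looks like this sketch; run it after `az batch account login`, and verify the image and node agent SKU values with `az batch pool supported-images list` before relying on them:

```azurecli
az batch pool create \
    --id myPool \
    --image microsoftwindowsserver:windowsserver:2019-datacenter-core-smalldisk \
    --node-agent-sku-id "batch.node.windows amd64" \
    --target-dedicated-nodes 2 \
    --vm-size Standard_A1_v2
```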
+
+1. On your Batch account page, select **Pools** from the left navigation.
+
+1. On the **Pools** page, select **Add**.
-1. In the Batch account, select **Pools** > **Add**.
+1. On the **Add pool** page, for **Name**, enter *myPool*.
-1. Enter a **Pool ID** called *mypool*.
+1. Under **Operating System**, select the following settings:
+ - **Publisher**: Select **microsoftwindowsserver**.
+ - **Sku**: Select **2019-datacenter-core-smalldisk**.
-1. In **Operating System**, use the following settings (you can explore other options).
-
- |Setting |Value |
- |||
- |**Image Type**|Marketplace|
- |**Publisher** |microsoftwindowsserver|
- |**Offer** |windowsserver|
- |**Sku** |2019-datacenter-core-smalldisk|
+1. Scroll down to **Node size**, and for **VM size**, select **Standard_A1_v2**.
-1. Scroll down to enter **Node Size** and **Scale** settings. The suggested node size offers a good balance of performance versus cost for this quick example.
-
- |Setting |Value |
- |||
- |**Node pricing tier** |Standard_A1_v2|
- |**Target dedicated nodes** |2|
+1. Under **Scale**, for **Target dedicated nodes**, enter *2*.
-1. Keep the defaults for remaining settings, and select **OK** to create the pool.
+1. Accept the defaults for the remaining settings, and select **OK** at the bottom of the page.
-Batch creates the pool immediately, but it takes a few minutes to allocate and start the compute nodes. During this time, the pool's **Allocation state** is **Resizing**. You can go ahead and create a job and tasks while the pool is resizing.
+Batch creates the pool immediately, but takes a few minutes to allocate and start the compute nodes. On the **Pools** page, you can select **myPool** to go to the **myPool** page and see the pool status of **Resizing** under **Essentials** > **Allocation state**. You can proceed to create a job and tasks while the pool state is still **Resizing** or **Starting**.
-After a few minutes, the allocation state changes to **Steady**, and the nodes start. To check the state of the nodes, select the pool and then select **Nodes**. When a node's state is **Idle**, it is ready to run tasks.
+After a few minutes, the **Allocation state** changes to **Steady**, and the nodes start. To check the state of the nodes, select **Nodes** in the **myPool** page left navigation. When a node's state is **Idle**, it's ready to run tasks.
## Create a job
-Now that you have a pool, create a job to run on it. A Batch job is a logical group of one or more tasks. A job includes settings common to the tasks, such as priority and the pool to run tasks on. The job won't have tasks until you create them.
+Now create a job to run on the pool. A Batch job is a logical group of one or more tasks. The job includes settings common to the tasks, such as priority and the pool to run tasks on. The job doesn't have tasks until you create them.
-1. In the Batch account view, select **Jobs** > **Add**.
+1. On the **mybatchaccount** page, select **Jobs** from the left navigation.
-1. Enter a **Job ID** called *myjob*.
+1. On the **Jobs** page, select **Add**.
-1. In **Pool**, select *mypool*.
+1. On the **Add job** page, for **Job ID**, enter *myJob*.
-1. Keep the defaults for the remaining settings, and select **OK**.
+1. Select **Select pool**, and on the **Select pool** page, select **myPool**, and then select **Select**.
+
+1. On the **Add job** page, select **OK**. Batch creates the job and lists it on the **Jobs** page.
## Create tasks
-Now, select the job to open the **Tasks** page. This is where you'll create sample tasks to run in the job. Typically, you create multiple tasks that Batch queues and distributes to run on the compute nodes. In this example, you create two identical tasks. Each task runs a command line to display the Batch environment variables on a compute node, and then waits 90 seconds.
+Jobs can contain multiple tasks that Batch queues and distributes to run on the compute nodes. Batch provides several ways to deploy apps and scripts to compute nodes. When you create a task, you specify your app or script in a command line.
+
+The following procedure creates and runs two identical tasks in your job. Each task runs a command line that displays the Batch environment variables on the compute node, and then waits 90 seconds.
-When you use Batch, the command line is where you specify your app or script. Batch provides several ways to deploy apps and scripts to compute nodes.
+1. On the **Jobs** page, select **myJob**.
-To create the first task:
+1. On the **Tasks** page, select **Add**.
-1. Select **Add**.
+1. On the **Add task** page, for **Task ID**, enter *myTask1*.
-1. Enter a **Task ID** called *mytask*.
+1. In **Command line**, enter `cmd /c "set AZ_BATCH & timeout /t 90 > NUL"`.
-1. In **Command line**, enter `cmd /c "set AZ_BATCH & timeout /t 90 > NUL"`. Keep the defaults for the remaining settings, and select **Submit**.
+1. Accept the defaults for the remaining settings, and select **Submit**.
-Repeat the steps above to create a second task. Enter a different **Task ID** such as *mytask2*, but use the same command line.
+1. Repeat the preceding steps to create a second task, but enter *myTask2* for **Task ID**.
-After you create a task, Batch queues it to run on the pool. When a node is available to run it, the task runs. In our example, if the first task is still running on one node, Batch will start the second task on the other node in the pool.
+After you create each task, Batch queues it to run on the pool. Once a node is available, the task runs on the node. In the quickstart example, if the first task is still running on one node, Batch starts the second task on the other node in the pool.
## View task output
-The example tasks you created will complete in a couple of minutes. To view the output of a completed task, select the task, then select the file `stdout.txt` to view the standard output of the task. The contents are similar to the following example:
+The tasks should complete in a couple of minutes. To update task status, select **Refresh** at the top of the **Tasks** page.
+
+To view the output of a completed task, you can select the task from the **Tasks** page. On the **myTask1** page, select the *stdout.txt* file to view the standard output of the task.
-The contents show the Azure Batch environment variables that are set on the node. When you create your own Batch jobs and tasks, you can reference these environment variables in task command lines, and in the apps and scripts run by the command lines.
+The contents of the *stdout.txt* file are similar to the following example:
++
+The standard output for this task shows the Azure Batch environment variables that are set on the node. As long as this node exists, you can refer to these environment variables in Batch job task command lines, and in the apps and scripts the command lines run.
## Clean up resources
-If you want to continue with Batch tutorials and samples, you can keep using the Batch account and linked storage account created in this quickstart. There is no charge for the Batch account itself.
+If you want to continue with Batch tutorials and samples, you can use the Batch account and linked storage account that you created in this quickstart. There's no charge for the Batch account itself.
+
+Pools and nodes incur charges while the nodes are running, even if they aren't running jobs. When you no longer need a pool, delete it.
-You are charged for the pool while the nodes are running, even if no jobs are scheduled. When you no longer need the pool, delete it. In the account view, select **Pools** and the name of the pool. Then select **Delete**. After you delete the pool, all task output on the nodes is deleted.
+To delete a pool:
-When no longer needed, delete the resource group, Batch account, and all related resources. To do so, select the resource group for the Batch account and select **Delete resource group**.
+1. On your Batch account page, select **Pools** from the left navigation.
+1. On the **Pools** page, select the pool to delete, and then select **Delete**.
+1. On the **Delete pool** screen, enter the name of the pool, and then select **Delete**.
+
+Deleting a pool deletes all task output on the nodes, and the nodes themselves.
+
+When you no longer need any of the resources you created for this quickstart, you can delete the resource group and all its resources, including the storage account, Batch account, and node pools. To delete the resource group, select **Delete resource group** at the top of the **qsBatch** resource group page. On the **Delete a resource group** screen, enter the resource group name *qsBatch*, and then select **Delete**.
## Next steps
-In this quickstart, you created a Batch account, a Batch pool, and a Batch job. The job ran sample tasks, and you viewed output created on one of the nodes. Now that you understand the key concepts of the Batch service, you are ready to try Batch with more realistic workloads at larger scale. To learn more about Azure Batch, continue to the Azure Batch tutorials.
+In this quickstart, you created a Batch account and pool, and created and ran a Batch job and tasks. You monitored node and task status, and viewed task output from the nodes.
+
+Now that you understand the key concepts of the Batch service, you're ready to use Batch with more realistic, larger scale workloads. To learn more about Azure Batch, continue to the Azure Batch tutorials.
> [!div class="nextstepaction"] > [Azure Batch tutorials](./tutorial-parallel-dotnet.md)
batch Quick Run Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/quick-run-python.md
Title: Quickstart - Use Python API to run an Azure Batch job
-description: In this quickstart, you run an Azure Batch sample job and tasks using the Batch Python client library. Learn the key concepts of the Batch service.
Previously updated : 09/10/2021
+ Title: 'Quickstart: Use Python to create a pool and run a job'
+description: Follow this quickstart to run an app that uses the Azure Batch client library for Python to create and run Batch pools, nodes, jobs, and tasks.
Last updated : 04/13/2023 ms.devlang: python
-# Quickstart: Use Python API to run an Azure Batch job
+# Quickstart: Use Python to create a Batch pool and run a job
-Get started with Azure Batch by using the Python API to run an Azure Batch job from an app. The app uploads input data files to Azure Storage and creates a pool of Batch compute nodes (virtual machines). It then creates a job that runs tasks to process each input file in the pool using a basic command.
+This quickstart shows you how to get started with Azure Batch by running an app that uses the [Azure Batch libraries for Python](/python/api/overview/azure/batch). The Python app:
-After completing this quickstart, you'll understand key concepts of the Batch service and be ready to try Batch with more realistic workloads at larger scale.
+> [!div class="checklist"]
+> - Uploads several input data files to an Azure Storage blob container to use for Batch task processing.
+> - Creates a pool of two virtual machines (VMs), or compute nodes, running Ubuntu 20.04 LTS OS.
+> - Creates a job and three tasks to run on the nodes. Each task processes one of the input files by using a Bash shell command line.
+> - Displays the output files that the tasks return.
-![Overview of the Azure Batch workflow](./media/quick-run-python/overview-of-the-azure-batch-workflow.png)
+After you complete this quickstart, you understand the [key concepts of the Batch service](batch-service-workflow-features.md) and are ready to use Batch with more realistic, larger scale workloads.
## Prerequisites -- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+- An Azure account with an active subscription. If you don't have one, [create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
-- A Batch account and a linked Azure Storage account. To create these accounts, see the Batch quickstarts using the [Azure portal](quick-create-portal.md) or [Azure CLI](quick-create-cli.md).
+- A Batch account with a linked Azure Storage account. You can create the accounts by using any of the following methods: [Azure CLI](quick-create-cli.md) | [Azure portal](quick-create-portal.md) | [Bicep](quick-create-bicep.md) | [ARM template](quick-create-template.md) | [Terraform](quick-create-terraform.md).
-- [Python](https://python.org/downloads) version 3.6 or later, including the [pip](https://pip.pypa.io/en/stable/installing/) package manager.
+- [Python](https://python.org/downloads) version 3.6 or later, which includes the [pip](https://pip.pypa.io/en/stable/installing) package manager.
-## Sign in to Azure
+## Run the app
-Sign in to the Azure portal at [https://portal.azure.com](https://portal.azure.com).
+To complete this quickstart, you download or clone the Python app, provide your account values, run the app, and verify the output.
+### Download or clone the app
-## Download the sample
+1. Download or clone the [Azure Batch Python Quickstart](https://github.com/Azure-Samples/batch-python-quickstart) app from GitHub. Use the following command to clone the app repo with a Git client:
-[Download or clone the sample app](https://github.com/Azure-Samples/batch-python-quickstart) from GitHub. To clone the sample app repo with a Git client, use the following command:
+ ```bash
+ git clone https://github.com/Azure-Samples/batch-python-quickstart.git
+ ```
-```bash
-git clone https://github.com/Azure-Samples/batch-python-quickstart.git
-```
+1. Switch to the *batch-python-quickstart/src* folder, and install the required packages by using `pip`.
-Go to the directory that contains the Python script `python_quickstart_client.py`.
+ ```bash
+ pip install -r requirements.txt
+ ```
-In your Python development environment, install the required packages using `pip`.
+### Provide your account information
-```bash
-pip install -r requirements.txt
-```
+The Python app needs to use your Batch and Storage account names, account key values, and Batch account endpoint. You can get this information from the Azure portal, Azure APIs, or command-line tools.
+
+To get your account information from the [Azure portal](https://portal.azure.com):
+
+ 1. From the Azure Search bar, search for and select your Batch account name.
+ 1. On your Batch account page, select **Keys** from the left navigation.
+ 1. On the **Keys** page, copy the following values:
+
+ - **Batch account**
+ - **Account endpoint**
+ - **Primary access key**
+ - **Storage account name**
+ - **Key1**
-Open the file `config.py`. Update the Batch and storage account credential strings with the values you obtained for your accounts. For example:
+In your downloaded Python app, edit the following strings in the *config.py* file to supply the values you copied.
-```Python
-BATCH_ACCOUNT_NAME = 'mybatchaccount'
-BATCH_ACCOUNT_KEY = 'xxxxxxxxxxxxxxxxE+yXrRvJAqT9BlXwwo1CwF+SwAYOxxxxxxxxxxxxxxxx43pXi/gdiATkvbpLRl3x14pcEQ=='
-BATCH_ACCOUNT_URL = 'https://mybatchaccount.mybatchregion.batch.azure.com'
-STORAGE_ACCOUNT_NAME = 'mystorageaccount'
-STORAGE_ACCOUNT_KEY = 'xxxxxxxxxxxxxxxxy4/xxxxxxxxxxxxxxxxfwpbIC5aAWA8wDu+AFXZB827Mt9lybZB1nUcQbQiUrkPtilK5BQ=='
+```python
+BATCH_ACCOUNT_NAME = '<batch account>'
+BATCH_ACCOUNT_KEY = '<primary access key>'
+BATCH_ACCOUNT_URL = '<account endpoint>'
+STORAGE_ACCOUNT_NAME = '<storage account name>'
+STORAGE_ACCOUNT_KEY = '<key1>'
```
-## Run the app
+>[!IMPORTANT]
+>Exposing account keys in the app source isn't recommended for production use. You should restrict access to credentials and refer to them in your code by using variables or a configuration file. It's best to store Batch and Storage account keys in Azure Key Vault.
+
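For illustration, a lightweight alternative to hard-coding these values in *config.py* is to read them from environment variables. The following minimal sketch isn't part of the quickstart sample, and the variable names are examples only:

```python
# Hypothetical alternative to hard-coding credentials in config.py:
# read the values from environment variables kept out of source control.
import os

BATCH_ACCOUNT_NAME = os.environ['BATCH_ACCOUNT_NAME']        # example variable name
BATCH_ACCOUNT_KEY = os.environ['BATCH_ACCOUNT_KEY']          # example variable name
BATCH_ACCOUNT_URL = os.environ['BATCH_ACCOUNT_URL']          # example variable name
STORAGE_ACCOUNT_NAME = os.environ['STORAGE_ACCOUNT_NAME']    # example variable name
STORAGE_ACCOUNT_KEY = os.environ['STORAGE_ACCOUNT_KEY']      # example variable name
```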
+### Run the app and view output
-To see the Batch workflow in action, run the script:
+Run the app to see the Batch workflow in action.
```bash python python_quickstart_client.py ```
-After running the script, review the code to learn what each part of the application does.
+Typical run time is approximately three minutes. Initial pool node setup takes the most time.
-When you run the sample application, the console output is similar to the following. During execution, you experience a pause at `Monitoring all tasks for 'Completed' state, timeout in 00:30:00...` while the pool's compute nodes are started. Tasks are queued to run as soon as the first compute node is running. Go to your Batch account in the [Azure portal](https://portal.azure.com) to monitor the pool, compute nodes, job, and tasks in your Batch account.
+The app returns output similar to the following example:
```output
-Sample start: 11/26/2018 4:02:54 PM
+Sample start: 11/26/2012 4:02:54 PM
Uploading file taskdata0.txt to container [input]... Uploading file taskdata1.txt to container [input]...
Adding 3 tasks to job [PythonQuickstartJob]...
Monitoring all tasks for 'Completed' state, timeout in 00:30:00... ```
-After tasks complete, you see output similar to the following for each task:
+There's a pause at `Monitoring all tasks for 'Completed' state, timeout in 00:30:00...` while the pool's compute nodes start. As tasks are created, Batch queues them to run on the pool. As soon as the first compute node is available, the first task runs on the node. You can monitor node, task, and job status from your Batch account page in the Azure portal.
+
+After each task completes, you see output similar to the following example:
```output Printing task output... Task: Task0 Node: tvm-2850684224_3-20171205t000401z Standard output:
-Batch processing began with mainframe computers and punch cards. Today it still plays a central role in business, engineering, science, and other pursuits that require running lots of automated tasks....
-...
+Batch processing began with mainframe computers and punch cards. Today it still plays a central role...
```
-Typical execution time is approximately 3 minutes when you run the application in its default configuration. Initial pool setup takes the most time.
- ## Review the code
-The Python app in this quickstart does the following:
--- Uploads three small text files to a blob container in your Azure storage account. These files are inputs for processing by Batch tasks.-- Creates a pool of two compute nodes running Ubuntu 20.04 LTS.-- Creates a job and three tasks to run on the nodes. Each task processes one of the input files using a Bash shell command line.-- Displays files returned by the tasks.
+Review the code to understand the steps in the [Azure Batch Python Quickstart](https://github.com/Azure-Samples/batch-python-quickstart).
-See the file `python_quickstart_client.py` and the following sections for details.
+### Create service clients and upload resource files
-### Preliminaries
+1. The app creates a [BlobServiceClient](/python/api/azure-storage-blob/azure.storage.blob.blobserviceclient) object to interact with the Storage account.
-To interact with a storage account, the app creates a [BlobServiceClient](/python/api/azure-storage-blob/azure.storage.blob.blobserviceclient) object.
+ ```python
+ blob_service_client = BlobServiceClient(
+ account_url=f"https://{config.STORAGE_ACCOUNT_NAME}.{config.STORAGE_ACCOUNT_DOMAIN}/",
+ credential=config.STORAGE_ACCOUNT_KEY
+ )
+ ```
-```python
-blob_service_client = BlobServiceClient(
- account_url=f"https://{config.STORAGE_ACCOUNT_NAME}.{config.STORAGE_ACCOUNT_DOMAIN}/",
- credential=config.STORAGE_ACCOUNT_KEY
- )
-```
-
-The app uses the `blob_service_client` reference to create a container in the storage account and to upload data files to the container. The files in storage are defined as Batch [ResourceFile](/python/api/azure-batch/azure.batch.models.resourcefile) objects that Batch can later download to compute nodes.
-
-```python
-input_file_paths = [os.path.join(sys.path[0], 'taskdata0.txt'),
- os.path.join(sys.path[0], 'taskdata1.txt'),
- os.path.join(sys.path[0], 'taskdata2.txt')]
+1. The app uses the `blob_service_client` reference to create a container in the Storage account and upload data files to the container. The files in storage are defined as Batch [ResourceFile](/python/api/azure-batch/azure.batch.models.resourcefile) objects that Batch can later download to compute nodes.
-input_files = [
- upload_file_to_container(blob_service_client, input_container_name, file_path)
- for file_path in input_file_paths]
-```
-
-The app creates a [BatchServiceClient](/python/api/azure.batch.batchserviceclient) object to create and manage pools, jobs, and tasks in the Batch service. The Batch client in the sample uses shared key authentication. Batch also supports Azure Active Directory authentication.
+ ```python
+ input_file_paths = [os.path.join(sys.path[0], 'taskdata0.txt'),
+ os.path.join(sys.path[0], 'taskdata1.txt'),
+ os.path.join(sys.path[0], 'taskdata2.txt')]
+
+ input_files = [
+ upload_file_to_container(blob_service_client, input_container_name, file_path)
+ for file_path in input_file_paths]
+ ```
-```python
-credentials = SharedKeyCredentials(config.BATCH_ACCOUNT_NAME,
- config.BATCH_ACCOUNT_KEY)
+1. The app creates a [BatchServiceClient](/python/api/azure.batch.batchserviceclient) object to create and manage pools, jobs, and tasks in the Batch account. The Batch client uses shared key authentication. Batch also supports Azure Active Directory (Azure AD) authentication.
- batch_client = BatchServiceClient(
- credentials,
- batch_url=config.BATCH_ACCOUNT_URL)
-```
+ ```python
+ credentials = SharedKeyCredentials(config.BATCH_ACCOUNT_NAME,
+ config.BATCH_ACCOUNT_KEY)
+
+ batch_client = BatchServiceClient(
+ credentials,
+ batch_url=config.BATCH_ACCOUNT_URL)
+ ```
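For comparison, an Azure AD sign-in with the same client shape could look like the following sketch. This code isn't part of the quickstart sample; it assumes a registered Azure AD application and uses the track 1 `ServicePrincipalCredentials` approach, with placeholder values:

```python
# Sketch only: authenticate BatchServiceClient with an Azure AD service principal
# instead of a shared key. Client ID, secret, and tenant are placeholders.
from azure.batch import BatchServiceClient
from azure.common.credentials import ServicePrincipalCredentials

aad_credentials = ServicePrincipalCredentials(
    client_id='<application (client) ID>',        # placeholder
    secret='<client secret>',                     # placeholder
    tenant='<Azure AD tenant ID>',                # placeholder
    resource='https://batch.core.windows.net/')   # Batch service resource URI

batch_client = BatchServiceClient(
    aad_credentials,
    batch_url=config.BATCH_ACCOUNT_URL)
```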
### Create a pool of compute nodes
-To create a Batch pool, the app uses the [PoolAddParameter](/python/api/azure-batch/azure.batch.models.pooladdparameter) class to set the number of nodes, VM size, and a pool configuration. Here, a [VirtualMachineConfiguration](/python/api/azure-batch/azure.batch.models.virtualmachineconfiguration) object specifies an [ImageReference](/python/api/azure-batch/azure.batch.models.imagereference) to an Ubuntu Server 20.04 LTS image published in the Azure Marketplace. Batch supports a wide range of Linux and Windows Server images in the Azure Marketplace, as well as custom VM images.
+To create a Batch pool, the app uses the [PoolAddParameter](/python/api/azure-batch/azure.batch.models.pooladdparameter) class to set the number of nodes, VM size, and pool configuration. The following [VirtualMachineConfiguration](/python/api/azure-batch/azure.batch.models.virtualmachineconfiguration) object specifies an [ImageReference](/python/api/azure-batch/azure.batch.models.imagereference) to an Ubuntu Server 20.04 LTS Azure Marketplace image. Batch supports a wide range of Linux and Windows Server Marketplace images, and also supports custom VM images.
-The number of nodes (`POOL_NODE_COUNT`) and VM size (`POOL_VM_SIZE`) are defined constants. The sample by default creates a pool of 2 size *Standard_DS1_v2* nodes. The size suggested offers a good balance of performance versus cost for this quick example.
+The `POOL_NODE_COUNT` and `POOL_VM_SIZE` values are defined constants. The app creates a pool of two nodes of size Standard_DS1_v2. This size offers a good balance of performance versus cost for this quickstart.
-The [pool.add](/python/api/azure-batch/azure.batch.operations.pooloperations) method submits the pool to the Batch service.
+The [pool.add](/python/api/azure-batch/azure.batch.operations.pooloperations#azure-batch-operations-pooloperations-add) method submits the pool to the Batch service.
```python new_pool = batchmodels.PoolAddParameter(
new_pool = batchmodels.PoolAddParameter(
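# The rest of this block is elided in the changelog. A minimal sketch of what the
# full pool definition could look like, based on the description above (two
# Standard_DS1_v2 nodes running Ubuntu Server 20.04 LTS). The image reference and
# node agent SKU values are assumptions, not necessarily the sample's exact code.
new_pool = batchmodels.PoolAddParameter(
    id=config.POOL_ID,
    virtual_machine_configuration=batchmodels.VirtualMachineConfiguration(
        image_reference=batchmodels.ImageReference(
            publisher="canonical",
            offer="0001-com-ubuntu-server-focal",
            sku="20_04-lts",
            version="latest"),
        node_agent_sku_id="batch.node.ubuntu 20.04"),
    vm_size=config.POOL_VM_SIZE,
    target_dedicated_nodes=config.POOL_NODE_COUNT)
batch_service_client.pool.add(new_pool)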
### Create a Batch job
-A Batch job is a logical grouping of one or more tasks. A job includes settings common to the tasks, such as priority and the pool to run tasks on. The app uses the [JobAddParameter](/python/api/azure-batch/azure.batch.models.jobaddparameter) class to create a job on your pool. The [job.add](/python/api/azure-batch/azure.batch.operations.joboperations) method adds a job to the specified Batch account. Initially the job has no tasks.
+A Batch job is a logical grouping of one or more tasks. The job includes settings common to the tasks, such as priority and the pool to run tasks on.
+
+The app uses the [JobAddParameter](/python/api/azure-batch/azure.batch.models.jobaddparameter) class to create a job on the pool. The [job.add](/python/api/azure-batch/azure.batch.operations.joboperations) method adds the job to the specified Batch account. Initially the job has no tasks.
```python job = batchmodels.JobAddParameter(
batch_service_client.job.add(job)
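# The middle of this block is elided in the changelog. The full job creation in
# the sample looks roughly like this sketch; the ID constants are assumed to come
# from config.py.
job = batchmodels.JobAddParameter(
    id=config.JOB_ID,
    pool_info=batchmodels.PoolInformation(pool_id=config.POOL_ID))
batch_service_client.job.add(job)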
### Create tasks
-The app creates a list of task objects using the [TaskAddParameter](/python/api/azure-batch/azure.batch.models.taskaddparameter) class. Each task processes an input `resource_files` object using a `command_line` parameter. In the sample, the command line runs the Bash shell `cat` command to display the text file. This command is a simple example for demonstration purposes. When you use Batch, the command line is where you specify your app or script. Batch provides a number of ways to deploy apps and scripts to compute nodes.
+Batch provides several ways to deploy apps and scripts to compute nodes. This app creates a list of task objects by using the [TaskAddParameter](/python/api/azure-batch/azure.batch.models.taskaddparameter) class. Each task processes an input file by using a `command_line` parameter to specify an app or script.
-Then, the app adds tasks to the job with the [task.add_collection](/python/api/azure-batch/azure.batch.operations.taskoperations) method, which queues them to run on the compute nodes.
+The following script processes the input `resource_files` objects by running the Bash shell `cat` command to display the text files. The app then uses the [task.add_collection](/python/api/azure-batch/azure.batch.operations.taskoperations#azure-batch-operations-taskoperations-add-collection) method to add each task to the job, which queues the tasks to run on the compute nodes.
```python tasks = []
batch_service_client.task.add_collection(job_id, tasks)
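# The middle of this block is elided in the changelog. A sketch of how the task
# list might be built from the ResourceFile objects uploaded earlier; the task ID
# format and command string are illustrative, not the sample's exact code.
tasks = []
for idx, input_file in enumerate(input_files):
    command = '/bin/bash -c "cat {}"'.format(input_file.file_path)
    tasks.append(batchmodels.TaskAddParameter(
        id='Task{}'.format(idx),
        command_line=command,
        resource_files=[input_file]))
batch_service_client.task.add_collection(job_id, tasks)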
### View task output
-The app monitors task state to make sure the tasks complete. Then, the app displays the `stdout.txt` file generated by each completed task. When the task runs successfully, the output of the task command is written to `stdout.txt`:
+The app monitors task state to make sure the tasks complete. When each task runs successfully, the task command output writes to the *stdout.txt* file. The app then displays the *stdout.txt* file for each completed task.
```python tasks = batch_service_client.task.list(job_id)
for task in tasks:
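    # The loop body is elided in the changelog. A sketch of how each task's
    # stdout.txt could be read; STANDARD_OUT_FILE_NAME ('stdout.txt') is assumed
    # to be defined in config.py.
    node_id = batch_service_client.task.get(job_id, task.id).node_info.node_id
    print("Task: {}".format(task.id))
    print("Node: {}".format(node_id))
    print("Standard output:")
    stream = batch_service_client.file.get_from_task(
        job_id, task.id, config.STANDARD_OUT_FILE_NAME)
    print(b''.join(stream).decode('utf-8'))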
## Clean up resources
-The app automatically deletes the storage container it creates, and gives you the option to delete the Batch pool and job. You are charged for the pool while the nodes are running, even if no jobs are scheduled. When you no longer need the pool, delete it. When you delete the pool, all task output on the nodes is deleted.
+The app automatically deletes the storage container it creates, and gives you the option to delete the Batch pool and job. Pools and nodes incur charges while the nodes are running, even if they aren't running jobs. If you no longer need the pool, delete it.
-When no longer needed, delete the resource group, Batch account, and storage account. To do so in the Azure portal, select the resource group for the Batch account and select **Delete resource group**.
+When you no longer need your Batch resources, you can delete the resource group that contains them. In the Azure portal, select **Delete resource group** at the top of the resource group page. On the **Delete a resource group** screen, enter the resource group name, and then select **Delete**.
## Next steps
-In this quickstart, you ran a small app built using the Batch Python API to create a Batch pool and a Batch job. The job ran sample tasks, and downloaded output created on the nodes. Now that you understand the key concepts of the Batch service, you are ready to try Batch with more realistic workloads at larger scale. To learn more about Azure Batch, and walk through a parallel workload with a real-world application, continue to the Batch Python tutorial.
+In this quickstart, you ran an app that uses the Batch Python API to create a Batch pool, nodes, job, and tasks. The job uploaded resource files to a storage container, ran tasks on the nodes, and displayed output from the nodes.
+
+Now that you understand the key concepts of the Batch service, you're ready to use Batch with more realistic, larger scale workloads. To learn more about Azure Batch and walk through a parallel workload with a real-world application, continue to the Batch Python tutorial.
> [!div class="nextstepaction"] > [Process a parallel workload with Python](tutorial-parallel-python.md)
batch Tutorial Parallel Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/tutorial-parallel-dotnet.md
Title: Tutorial - Run a parallel workload using the .NET API
-description: Tutorial - Transcode media files in parallel with ffmpeg in Azure Batch using the Batch .NET client library
+ Title: "Tutorial: Run a parallel workload using the .NET API"
+description: Learn how to transcode media files in parallel using ffmpeg in Azure Batch with the Batch .NET client library.
ms.devlang: csharp Previously updated : 06/22/2022 Last updated : 04/19/2023 # Tutorial: Run a parallel workload with Azure Batch using the .NET API
-Use Azure Batch to run large-scale parallel and high-performance computing (HPC) batch jobs efficiently in Azure. This tutorial walks through a C# example of running a parallel workload using Batch. You learn a common Batch application workflow and how to interact programmatically with Batch and Storage resources. You learn how to:
+Use Azure Batch to run large-scale parallel and high-performance computing (HPC) batch jobs efficiently in Azure. This tutorial walks through a C# example of running a parallel workload using Batch. You learn a common Batch application workflow and how to interact programmatically with Batch and Storage resources.
> [!div class="checklist"]
-> * Add an application package to your Batch account
-> * Authenticate with Batch and Storage accounts
-> * Upload input files to Storage
-> * Create a pool of compute nodes to run an application
-> * Create a job and tasks to process input files
-> * Monitor task execution
-> * Retrieve output files
+> * Add an application package to your Batch account.
+> * Authenticate with Batch and Storage accounts.
+> * Upload input files to Storage.
+> * Create a pool of compute nodes to run an application.
+> * Create a job and tasks to process input files.
+> * Monitor task execution.
+> * Retrieve output files.
-In this tutorial, you convert MP4 media files in parallel to MP3 format using the [ffmpeg](https://ffmpeg.org/) open-source tool.
+In this tutorial, you convert MP4 media files to MP3 format, in parallel, by using the [ffmpeg](https://ffmpeg.org) open-source tool.
[!INCLUDE [quickstarts-free-trial-note.md](../../includes/quickstarts-free-trial-note.md)] ## Prerequisites
-* [Visual Studio 2017 or later](https://www.visualstudio.com/vs), or [.NET Core 2.1 SDK](https://dotnet.microsoft.com/download/dotnet/2.1) for Linux, macOS, or Windows.
+* [Visual Studio 2017 or later](https://www.visualstudio.com/vs), or [.NET Core SDK](https://dotnet.microsoft.com/download/dotnet) for Linux, macOS, or Windows.
-* A Batch account and a linked Azure Storage account. To create these accounts, see the Batch quickstarts using the [Azure portal](quick-create-portal.md) or [Azure CLI](quick-create-cli.md).
+* A Batch account and a linked Azure Storage account. To create these accounts, see the Batch quickstart guides for the [Azure portal](quick-create-portal.md) or [Azure CLI](quick-create-cli.md).
-* Download the appropriate version of ffmpeg for your use case to your local computer. This tutorial and the related sample app use the [Windows 64-bit version of ffmpeg 4.3.1](https://github.com/GyanD/codexffmpeg/releases/tag/4.3.1-2020-11-08). For this tutorial, you only need the zip file. You do not need to unzip the file or install it locally.
+* Download the appropriate version of ffmpeg for your use case to your local computer. This tutorial and the related sample app use the [Windows 64-bit full-build version of ffmpeg 4.3.1](https://github.com/GyanD/codexffmpeg/releases/tag/4.3.1-2020-11-08). For this tutorial, you only need the zip file. You do not need to unzip the file or install it locally.
## Sign in to Azure
-Sign in to the Azure portal at [https://portal.azure.com](https://portal.azure.com).
+Sign in to [the Azure portal](https://portal.azure.com).
## Add an application package Use the Azure portal to add ffmpeg to your Batch account as an [application package](batch-application-packages.md). Application packages help you manage task applications and their deployment to the compute nodes in your pool.
-1. In the Azure portal, click **More services** > **Batch accounts**, and click the name of your Batch account.
-3. Click **Applications** > **Add**.
-4. For **Application id** enter *ffmpeg*, and a package version of *4.3.1*. Select the ffmpeg zip file you downloaded previously, and then click **OK**. The ffmpeg application package is added to your Batch account.
+1. In the Azure portal, click **More services** > **Batch accounts**, and select the name of your Batch account.
-![Add application package](./media/tutorial-parallel-dotnet/add-application.png)
+1. Click **Applications** > **Add**.
+
+ :::image type="content" source="./media/tutorial-parallel-dotnet/add-application.png" alt-text="Screenshot of the Applications section of the batch account.":::
+
+1. Enter *ffmpeg* in the **Application Id** field, and a package version of *4.3.1* in the **Version** field. Select the ffmpeg zip file that you downloaded, and then select **Submit**. The ffmpeg application package is added to your Batch account.
+
+ :::image type="content" source="./media/tutorial-parallel-dotnet/new-batch-application.png" alt-text="Screenshot of the ID and version fields in the Add application section.":::
[!INCLUDE [batch-common-credentials](../../includes/batch-common-credentials.md)]
-## Download and run the sample
+## Download and run the sample app
-### Download the sample
+### Download the sample app
[Download or clone the sample app](https://github.com/Azure-Samples/batch-dotnet-ffmpeg-tutorial) from GitHub. To clone the sample app repo with a Git client, use the following command:
Use the Azure portal to add ffmpeg to your Batch account as an [application pack
git clone https://github.com/Azure-Samples/batch-dotnet-ffmpeg-tutorial.git ```
-Navigate to the directory that contains the Visual Studio solution file `BatchDotNetFfmpegTutorial.sln`.
+Navigate to the directory that contains the Visual Studio solution file *BatchDotNetFfmpegTutorial.sln*.
-Open the solution file in Visual Studio, and update the credential strings in `Program.cs` with the values you obtained for your accounts. For example:
+Open the solution file in Visual Studio, and update the credential strings in *Program.cs* with the values you obtained for your accounts. For example:
```csharp // Batch account credentials
-private const string BatchAccountName = "mybatchaccount";
+private const string BatchAccountName = "yourbatchaccount";
private const string BatchAccountKey = "xxxxxxxxxxxxxxxxE+yXrRvJAqT9BlXwwo1CwF+SwAYOxxxxxxxxxxxxxxxx43pXi/gdiATkvbpLRl3x14pcEQ==";
-private const string BatchAccountUrl = "https://mybatchaccount.mybatchregion.batch.azure.com";
+private const string BatchAccountUrl = "https://yourbatchaccount.yourbatchregion.batch.azure.com";
// Storage account credentials
-private const string StorageAccountName = "mystorageaccount";
+private const string StorageAccountName = "yourstorageaccount";
private const string StorageAccountKey = "xxxxxxxxxxxxxxxxy4/xxxxxxxxxxxxxxxxfwpbIC5aAWA8wDu+AFXZB827Mt9lybZB1nUcQbQiUrkPtilK5BQ=="; ```
const string appPackageVersion = "4.3.1";
Build and run the application in Visual Studio, or at the command line with the `dotnet build` and `dotnet run` commands. After running the application, review the code to learn what each part of the application does. For example, in Visual Studio:
-* Right-click the solution in Solution Explorer and click **Build Solution**.
+1. Right-click the solution in Solution Explorer and select **Build Solution**.
-* Confirm the restoration of any NuGet packages, if you're prompted. If you need to download missing packages, ensure the [NuGet Package Manager](https://docs.nuget.org/consume/installing-nuget) is installed.
+1. Confirm the restoration of any NuGet packages, if you're prompted. If you need to download missing packages, ensure the [NuGet Package Manager](https://docs.nuget.org/consume/installing-nuget) is installed.
-Then run it. When you run the sample application, the console output is similar to the following. During execution, you experience a pause at `Monitoring all tasks for 'Completed' state, timeout in 00:30:00...` while the pool's compute nodes are started.
+1. Run the solution. When you run the sample application, the console output is similar to the following. During execution, you experience a pause at `Monitoring all tasks for 'Completed' state, timeout in 00:30:00...` while the pool's compute nodes are started.
``` Sample start: 11/19/2018 3:20:21 PM
Sample end: 11/19/2018 3:29:36 PM
Elapsed time: 00:09:14.3418742 ```
-Go to your Batch account in the Azure portal to monitor the pool, compute nodes, job, and tasks. For example, to see a heat map of the compute nodes in your pool, click **Pools** > *WinFFmpegPool*.
+Go to your Batch account in the Azure portal to monitor the pool, compute nodes, job, and tasks. For example, to see a heat map of the compute nodes in your pool, click **Pools** > **WinFFmpegPool**.
When tasks are running, the heat map is similar to the following:
-![Pool heat map](./media/tutorial-parallel-dotnet/pool.png)
-Typical execution time is approximately **10 minutes** when you run the application in its default configuration. Pool creation takes the most time.
+Typical execution time is approximately *10 minutes* when you run the application in its default configuration. Pool creation takes the most time.
[!INCLUDE [batch-common-tutorial-download](../../includes/batch-common-tutorial-download.md)] ## Review the code
-The following sections break down the sample application into the steps that it performs to process a workload in the Batch service. Refer to the file `Program.cs` in the solution while you read the rest of this article, since not every line of code in the sample is discussed.
+The following sections break down the sample application into the steps that it performs to process a workload in the Batch service. Refer to the file *Program.cs* in the solution while you read the rest of this article, since not every line of code in the sample is discussed.
### Authenticate Blob and Batch clients
CreateContainerIfNotExistAsync(blobClient, inputContainerName);
CreateContainerIfNotExistAsync(blobClient, outputContainerName); ```
-Then, files are uploaded to the input container from the local `InputFiles` folder. The files in storage are defined as Batch [ResourceFile](/dotnet/api/microsoft.azure.batch.resourcefile) objects that Batch can later download to compute nodes.
+Then, files are uploaded to the input container from the local *InputFiles* folder. The files in storage are defined as Batch [ResourceFile](/dotnet/api/microsoft.azure.batch.resourcefile) objects that Batch can later download to compute nodes.
-Two methods in `Program.cs` are involved in uploading the files:
+Two methods in *Program.cs* are involved in uploading the files:
-* `UploadFilesToContainerAsync`: Returns a collection of ResourceFile objects and internally calls `UploadResourceFileToContainerAsync` to upload each file that is passed in the `inputFilePaths` parameter.
-* `UploadResourceFileToContainerAsync`: Uploads each file as a blob to the input container. After uploading the file, it obtains a shared access signature (SAS) for the blob and returns a ResourceFile object to represent it.
+* `UploadFilesToContainerAsync`: Returns a collection of `ResourceFile` objects and internally calls `UploadResourceFileToContainerAsync` to upload each file that is passed in the `inputFilePaths` parameter.
+* `UploadResourceFileToContainerAsync`: Uploads each file as a blob to the input container. After uploading the file, it obtains a shared access signature (SAS) for the blob and returns a `ResourceFile` object to represent it.
```csharp string inputPath = Path.Combine(Environment.CurrentDirectory, "InputFiles");
Next, the sample creates a pool of compute nodes in the Batch account with a cal
The number of nodes and VM size are set using defined constants. Batch supports dedicated nodes and [Spot nodes](batch-spot-vms.md), and you can use either or both in your pools. Dedicated nodes are reserved for your pool. Spot nodes are offered at a reduced price from surplus VM capacity in Azure. Spot nodes become unavailable if Azure does not have enough capacity. The sample by default creates a pool containing only 5 Spot nodes in size *Standard_A1_v2*. >[!Note]
->Be sure you check your node quotas. See [Batch service quotas and limits](batch-quota-limit.md#increase-a-quota) for instructions on how to create a quota request."
+>Be sure you check your node quotas. See [Batch service quotas and limits](batch-quota-limit.md#increase-a-quota) for instructions on how to create a quota request.
The ffmpeg application is deployed to the compute nodes by adding an [ApplicationPackageReference](/dotnet/api/microsoft.azure.batch.applicationpackagereference) to the pool configuration.
The sample creates an [OutputFile](/dotnet/api/microsoft.azure.batch.outputfile)
Then, the sample adds tasks to the job with the [AddTaskAsync](/dotnet/api/microsoft.azure.batch.joboperations.addtaskasync) method, which queues them to run on the compute nodes.
-Replace the executable's file path with the name of the version that you downloaded. This sample code uses the example `ffmpeg-4.3.1-2020-09-21-full_build`.
+Replace the executable's file path with the name of the version that you downloaded. This sample code uses the example `ffmpeg-4.3.1-2020-11-08-full_build`.
```csharp // Create a collection to hold the tasks added to the job.
When no longer needed, delete the resource group, Batch account, and storage acc
In this tutorial, you learned how to: > [!div class="checklist"]
-> * Add an application package to your Batch account
-> * Authenticate with Batch and Storage accounts
-> * Upload input files to Storage
-> * Create a pool of compute nodes to run an application
-> * Create a job and tasks to process input files
-> * Monitor task execution
-> * Retrieve output files
-
-For more examples of using the .NET API to schedule and process Batch workloads, see the samples on GitHub.
-
-> [!div class="nextstepaction"]
-> [Batch C# samples](https://github.com/Azure-Samples/azure-batch-samples/tree/master/CSharp)
+> * Add an application package to your Batch account.
+> * Authenticate with Batch and Storage accounts.
+> * Upload input files to Storage.
+> * Create a pool of compute nodes to run an application.
+> * Create a job and tasks to process input files.
+> * Monitor task execution.
+> * Retrieve output files.
+
+For more examples of using the .NET API to schedule and process Batch workloads, see the [Batch C# samples on GitHub](https://github.com/Azure-Samples/azure-batch-samples/tree/master/CSharp).
batch Tutorial Parallel Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/tutorial-parallel-python.md
Title: Tutorial - Run a parallel workload using the Python API
-description: Tutorial - Process media files in parallel with ffmpeg in Azure Batch using the Batch Python client library
+ Title: "Tutorial: Run a parallel workload using the Python API"
+description: Learn how to process media files in parallel using ffmpeg in Azure Batch with the Batch Python client library.
ms.devlang: python Previously updated : 12/13/2021 Last updated : 04/19/2023 # Tutorial: Run a parallel workload with Azure Batch using the Python API
-Use Azure Batch to run large-scale parallel and high-performance computing (HPC) batch jobs efficiently in Azure. This tutorial walks through a Python example of running a parallel workload using Batch. You learn a common Batch application workflow and how to interact programmatically with Batch and Storage resources. You learn how to:
+Use Azure Batch to run large-scale parallel and high-performance computing (HPC) batch jobs efficiently in Azure. This tutorial walks through a Python example of running a parallel workload using Batch. You learn a common Batch application workflow and how to interact programmatically with Batch and Storage resources.
> [!div class="checklist"]
-> * Authenticate with Batch and Storage accounts
-> * Upload input files to Storage
-> * Create a pool of compute nodes to run an application
-> * Create a job and tasks to process input files
-> * Monitor task execution
-> * Retrieve output files
+> * Authenticate with Batch and Storage accounts.
+> * Upload input files to Storage.
+> * Create a pool of compute nodes to run an application.
+> * Create a job and tasks to process input files.
+> * Monitor task execution.
+> * Retrieve output files.
-In this tutorial, you convert MP4 media files in parallel to MP3 format using the [ffmpeg](https://ffmpeg.org/) open-source tool.
+In this tutorial, you convert MP4 media files to MP3 format, in parallel, by using the [ffmpeg](https://ffmpeg.org/) open-source tool.
[!INCLUDE [quickstarts-free-trial-note.md](../../includes/quickstarts-free-trial-note.md)] ## Prerequisites
-* [Python version 3.7+](https://www.python.org/downloads/)
+* [Python version 3.7 or later](https://www.python.org/downloads/)
-* [pip](https://pip.pypa.io/en/stable/installing/) package manager
+* [pip package manager](https://pip.pypa.io/en/stable/installation/)
-* An Azure Batch account and a linked Azure Storage account. To create these accounts, see the Batch quickstarts using the [Azure portal](quick-create-portal.md) or [Azure CLI](quick-create-cli.md).
+* An Azure Batch account and a linked Azure Storage account. To create these accounts, see the Batch quickstart guides for [Azure portal](quick-create-portal.md) or [Azure CLI](quick-create-cli.md).
## Sign in to Azure
-Sign in to the Azure portal at [https://portal.azure.com](https://portal.azure.com).
+Sign in to the [Azure portal](https://portal.azure.com).
[!INCLUDE [batch-common-credentials](../../includes/batch-common-credentials.md)]
-## Download and run the sample
+## Download and run the sample app
-### Download the sample
+### Download the sample app
[Download or clone the sample app](https://github.com/Azure-Samples/batch-python-ffmpeg-tutorial) from GitHub. To clone the sample app repo with a Git client, use the following command:
Sign in to the Azure portal at [https://portal.azure.com](https://portal.azure.c
git clone https://github.com/Azure-Samples/batch-python-ffmpeg-tutorial.git ```
-Navigate to the directory that contains the file `batch_python_tutorial_ffmpeg.py`.
+Navigate to the directory that contains the file *batch_python_tutorial_ffmpeg.py*.
In your Python environment, install the required packages using `pip`.
In your Python environment, install the required packages using `pip`.
pip install -r requirements.txt ```
-Open the file `config.py`. Update the Batch and storage account credential strings with the values unique to your accounts. For example:
+Use a code editor to open the file *config.py*. Update the Batch and storage account credential strings with the values unique to your accounts. For example:
```Python
-_BATCH_ACCOUNT_NAME = 'mybatchaccount'
+_BATCH_ACCOUNT_NAME = 'yourbatchaccount'
_BATCH_ACCOUNT_KEY = 'xxxxxxxxxxxxxxxxE+yXrRvJAqT9BlXwwo1CwF+SwAYOxxxxxxxxxxxxxxxx43pXi/gdiATkvbpLRl3x14pcEQ=='
-_BATCH_ACCOUNT_URL = 'https://mybatchaccount.mybatchregion.batch.azure.com'
+_BATCH_ACCOUNT_URL = 'https://yourbatchaccount.yourbatchregion.batch.azure.com'
_STORAGE_ACCOUNT_NAME = 'mystorageaccount' _STORAGE_ACCOUNT_KEY = 'xxxxxxxxxxxxxxxxy4/xxxxxxxxxxxxxxxxfwpbIC5aAWA8wDu+AFXZB827Mt9lybZB1nUcQbQiUrkPtilK5BQ==' ```
Sample end: 11/28/2018 3:29:36 PM
Elapsed time: 00:09:14.3418742 ```
-Go to your Batch account in the Azure portal to monitor the pool, compute nodes, job, and tasks. For example, to see a heat map of the compute nodes in your pool, click **Pools** > *LinuxFFmpegPool*.
+Go to your Batch account in the Azure portal to monitor the pool, compute nodes, job, and tasks. For example, to see a heat map of the compute nodes in your pool, select **Pools** > **LinuxFFmpegPool**.
When tasks are running, the heat map is similar to the following:
-![Pool heat map](./media/tutorial-parallel-python/pool.png)
-Typical execution time is approximately **5 minutes** when you run the application in its default configuration. Pool creation takes the most time.
+Typical execution time is approximately *5 minutes* when you run the application in its default configuration. Pool creation takes the most time.
[!INCLUDE [batch-common-tutorial-download](../../includes/batch-common-tutorial-download.md)]
batch_client = batch.BatchServiceClient(
### Upload input files
-The app uses the `blob_client` reference create a storage container for the input MP4 files and a container for the task output. Then, it calls the `upload_file_to_container` function to upload MP4 files in the local `InputFiles` directory to the container. The files in storage are defined as Batch [ResourceFile](/python/api/azure-batch/azure.batch.models.resourcefile) objects that Batch can later download to compute nodes.
+The app uses the `blob_client` reference to create a storage container for the input MP4 files and a container for the task output. Then, it calls the `upload_file_to_container` function to upload MP4 files in the local *InputFiles* directory to the container. The files in storage are defined as Batch [ResourceFile](/python/api/azure-batch/azure.batch.models.resourcefile) objects that Batch can later download to compute nodes.
```python blob_client.create_container(input_container_name, fail_on_exist=False)
input_files = [
Next, the sample creates a pool of compute nodes in the Batch account with a call to `create_pool`. This defined function uses the Batch [PoolAddParameter](/python/api/azure-batch/azure.batch.models.pooladdparameter) class to set the number of nodes, VM size, and a pool configuration. Here, a [VirtualMachineConfiguration](/python/api/azure-batch/azure.batch.models.virtualmachineconfiguration) object specifies an [ImageReference](/python/api/azure-batch/azure.batch.models.imagereference) to an Ubuntu Server 18.04 LTS image published in the Azure Marketplace. Batch supports a wide range of VM images in the Azure Marketplace, as well as custom VM images.
-The number of nodes and VM size are set using defined constants. Batch supports dedicated nodes and [Spot nodes](batch-spot-vms.md), and you can use either or both in your pools. Dedicated nodes are reserved for your pool. Spot nodes are offered at a reduced price from surplus VM capacity in Azure. Spot nodes become unavailable if Azure does not have enough capacity. The sample by default creates a pool containing only 5 Spot nodes in size *Standard_A1_v2*.
+The number of nodes and VM size are set using defined constants. Batch supports dedicated nodes and [Spot nodes](batch-spot-vms.md), and you can use either or both in your pools. Dedicated nodes are reserved for your pool. Spot nodes are offered at a reduced price from surplus VM capacity in Azure. Spot nodes become unavailable if Azure doesn't have enough capacity. The sample by default creates a pool containing only five Spot nodes in size *Standard_A1_v2*.
In addition to physical node properties, this pool configuration includes a [StartTask](/python/api/azure-batch/azure.batch.models.starttask) object. The StartTask executes on each node as that node joins the pool, and each time a node is restarted. In this example, the StartTask runs Bash shell commands to install the ffmpeg package and dependencies on the nodes.
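The StartTask itself isn't reproduced in this changelog. As a rough sketch of its shape, assuming the `batchmodels` alias used elsewhere in the sample and with an illustrative install command rather than the sample's exact one:

```python
# Sketch: a pool StartTask that installs ffmpeg on each Ubuntu node as it joins
# the pool. Runs as an elevated auto-user and blocks scheduling until it succeeds.
start_task = batchmodels.StartTask(
    command_line='/bin/bash -c "apt-get update && apt-get install -y ffmpeg"',
    wait_for_success=True,
    user_identity=batchmodels.UserIdentity(
        auto_user=batchmodels.AutoUserSpecification(
            scope=batchmodels.AutoUserScope.pool,
            elevation_level=batchmodels.ElevationLevel.admin)))
```

An object like this is passed to the pool's `start_task` setting when the pool is created.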
while datetime.datetime.now() < timeout_expiration:
After it runs the tasks, the app automatically deletes the input storage container it created, and gives you the option to delete the Batch pool and job. The BatchClient's [JobOperations](/python/api/azure-batch/azure.batch.operations.joboperations) and [PoolOperations](/python/api/azure-batch/azure.batch.operations.pooloperations) classes both have delete methods, which are called if you confirm deletion. Although you're not charged for jobs and tasks themselves, you are charged for compute nodes. Thus, we recommend that you allocate pools only as needed. When you delete the pool, all task output on the nodes is deleted. However, the input and output files remain in the storage account.
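Leaving the confirmation prompts aside, the delete calls themselves reduce to something like the following sketch; the `_JOB_ID` and `_POOL_ID` constants are assumed to be defined in the tutorial's *config.py*:

```python
# Sketch: delete the Batch job and pool after you confirm you no longer need them.
batch_client.job.delete(_JOB_ID)
batch_client.pool.delete(_POOL_ID)
```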
-When no longer needed, delete the resource group, Batch account, and storage account. To do so in the Azure portal, select the resource group for the Batch account and click **Delete resource group**.
+When no longer needed, delete the resource group, Batch account, and storage account. To do so in the Azure portal, select the resource group for the Batch account and choose **Delete resource group**.
## Next steps In this tutorial, you learned how to: > [!div class="checklist"]
-> * Authenticate with Batch and Storage accounts
-> * Upload input files to Storage
-> * Create a pool of compute nodes to run an application
-> * Create a job and tasks to process input files
-> * Monitor task execution
-> * Retrieve output files
-
-For more examples of using the Python API to schedule and process Batch workloads, see the samples on GitHub.
-
-> [!div class="nextstepaction"]
-> [Batch Python samples](https://github.com/Azure/azure-batch-samples/tree/master/Python/Batch)
-
+> * Authenticate with Batch and Storage accounts.
+> * Upload input files to Storage.
+> * Create a pool of compute nodes to run an application.
+> * Create a job and tasks to process input files.
+> * Monitor task execution.
+> * Retrieve output files.
+
+For more examples of using the Python API to schedule and process Batch workloads, see the [Batch Python samples](https://github.com/Azure/azure-batch-samples/tree/master/Python/Batch) on GitHub.
chaos-studio Chaos Studio Tutorial Dynamic Target Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/chaos-studio/chaos-studio-tutorial-dynamic-target-portal.md
+
+ Title: Create a chaos experiment to shut down all targets in a zone
+description: Use the Azure portal to create an experiment that uses dynamic targeting to select hosts in a zone
++++ Last updated : 4/19/2023+++
+# Create a chaos experiment to shut down all targets in a zone
+
+You can use dynamic targeting in a chaos experiment to choose a set of targets to run an experiment against, based on criteria evaluated at experiment runtime. This guide shows how you can dynamically target a Virtual Machine Scale Set to shut down instances based on availability zone. Running this experiment can help you test failover to Virtual Machine Scale Sets instances in another availability zone if there's an outage.
+
+These same steps can be used to set up and run an experiment for any fault that supports dynamic targeting. Currently, only Virtual Machine Scale Sets shutdown supports dynamic targeting.
+
+## Prerequisites
+
+- An Azure subscription. [!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)]
+- An Azure Virtual Machine Scale Sets instance.
+
+## Enable Chaos Studio on your Virtual Machine Scale Sets
+
+Chaos Studio can't inject faults against a resource until that resource has been onboarded to Chaos Studio. To onboard a resource to Chaos Studio, create a [target and capabilities](chaos-studio-targets-capabilities.md) on the resource. Virtual Machine Scale Sets only has one target type (`Microsoft-VirtualMachineScaleSet`) and one capability (`shutdown`), but other resources may have up to two target types (one for service-direct faults and one for agent-based faults) and many capabilities.
+
+1. Open the [Azure portal](https://portal.azure.com).
+1. Search for **Chaos Studio** in the search bar.
+1. Select **Targets** and find your Virtual Machine Scale Sets resource.
+1. With the Virtual Machine Scale Sets resource selected, select **Enable targets** and **Enable service-direct targets**.
+[ ![A screenshot showing the targets screen within Chaos Studio, with the VMSS resource selected.](images/tutorial-dynamic-targets-enable.png) ](images/tutorial-dynamic-targets-enable.png#lightbox)
+1. Select **Review + Enable** and **Enable**.
+
+You've now successfully onboarded your Virtual Machine Scale Set to Chaos Studio.
+
+## Create an experiment
+
+With your Virtual Machine Scale Sets now onboarded, you can create your experiment. A chaos experiment defines the actions you want to take against target resources, organized into steps, which run sequentially, and branches, which run in parallel.
+
+1. Within Chaos Studio, navigate to **Experiments** and select **Create**.
+[ ![A screenshot showing the Experiments screen, with the Create button highlighted.](images/tutorial-dynamic-targets-experiment-browse.png)](images/tutorial-dynamic-targets-experiment-browse.png#lightbox)
+1. Add a name for your experiment that complies with resource naming guidelines, and select **Next: Experiment designer**.
+[ ![A screenshot showing the experiment creation screen, with the Next button highlighted.](images/tutorial-dynamic-targets-create-exp.png)](images/tutorial-dynamic-targets-create-exp.png#lightbox)
+1. Within Step 1 and Branch 1, select **Add action**, then **Add fault**.
+[ ![A screenshot showing the experiment creation screen, with the Add Fault button highlighted.](images/tutorial-dynamic-targets-experiment-fault.png)](images/tutorial-dynamic-targets-experiment-fault.png#lightbox)
+1. Select the **VMSS Shutdown (version 2.0)** fault. Choose your desired duration and whether you want the shutdown to be abrupt, then select **Next: Target resources**.
+[ ![A screenshot showing the fault details view.](images/tutorial-dynamic-targets-fault-details.png)](images/tutorial-dynamic-targets-fault-details.png#lightbox)
+1. Choose the Virtual Machine Scale Sets resource that you want to use in the experiment, then select **Next: Scope**.
+[ ![A screenshot showing the fault details view, with the VMSS resource selected.](images/tutorial-dynamic-targets-fault-resources.png)](images/tutorial-dynamic-targets-fault-resources.png#lightbox)
+1. In the Zones dropdown, select the zone where you want Virtual Machines in the Virtual Machine Scale Sets instance to be shut down, then select **Add**.
+[ ![A screenshot showing the fault details view, with only Zone 1 selected.](images/tutorial-dynamic-targets-fault-zones.png)](images/tutorial-dynamic-targets-fault-zones.png#lightbox)
+1. Select **Review + create** and then **Create** to save the experiment.
+
+## Give experiment permission to your Virtual Machine Scale Sets
+
+When you create a chaos experiment, Chaos Studio creates a system-assigned managed identity that executes faults against your target resources. This identity must be given [appropriate permissions](chaos-studio-fault-providers.md) to the target resource for the experiment to run successfully. You can use these steps for any resource and target type by choosing the [appropriate role for that resource and target type](chaos-studio-fault-providers.md) in the role assignment step that follows.
+
+1. Navigate to your Virtual Machine Scale Sets resource and select **Access control (IAM)**, then select **Add role assignment**.
+[ ![A screenshot of the Virtual Machine Scale Sets resource page.](images/tutorial-dynamic-targets-vmss-iam.png)](images/tutorial-dynamic-targets-vmss-iam.png#lightbox)
+1. In the **Role** tab, choose **Virtual Machine Contributor** and then select **Next**.
+[ ![A screenshot of the access control overview for Virtual Machine Scale Sets.](images/tutorial-dynamic-targets-role-selection.png)](images/tutorial-dynamic-targets-role-selection.png#lightbox)
+1. Choose **Select members** and search for your experiment name. Choose your experiment and then **Select**. If there are multiple experiments in the same tenant with the same name, your experiment name is truncated with random characters added.
+[ ![A screenshot of the access control overview.](images/tutorial-dynamic-targets-role-assignment.png)](images/tutorial-dynamic-targets-role-assignment.png#lightbox)
+1. Select **Review + assign** then **Review + assign**.
+[ ![A screenshot of the access control confirmation page.](images/tutorial-dynamic-targets-role-confirmation.png)](images/tutorial-dynamic-targets-role-confirmation.png#lightbox)
+## Run your experiment
+
+You're now ready to run your experiment!
+
+1. In **Chaos Studio**, navigate to the **Experiments** view, choose your experiment, and select **Start**.
+[ ![A screenshot of the Experiments view, with the Start button highlighted.](images/tutorial-dynamic-targets-start-experiment.png)](images/tutorial-dynamic-targets-start-experiment.png#lightbox)
+1. Select **OK** to confirm that you want to start the experiment.
+1. When the **Status** changes to **Running**, select **Details** for the latest run under **History** to see details for the running experiment. If any errors occur, you can view them within **Details** by selecting a failed Action and expanding **Failed targets**.
+
+To see the impact, use a tool such as **Azure Monitor** or the **Virtual Machine Scale Sets** section of the portal to check if your Virtual Machine Scale Sets targets are shut down. If they're shut down, check to see that the services running on your Virtual Machine Scale Sets are still running as expected.
+
+In this example, the chaos experiment successfully shut down the instance in Zone 1, as expected.
+[ ![A screenshot of the Virtual Machine Scale Sets resource page showing an instance in the Stopped state.](images/tutorial-dynamic-targets-view-vmss.png)](images/tutorial-dynamic-targets-view-vmss.png#lightbox)
+
+## Next steps
+
+> [!TIP]
+> If your Virtual Machine Scale Set uses an autoscale policy, the policy will provision new VMs after this experiment shuts down existing VMs. To prevent this, add a parallel branch in your experiment that includes the **Disable Autoscale** fault against the Virtual Machine Scale Set's `microsoft.insights/autoscaleSettings` resource. Remember to onboard the autoscaleSettings resource as a Target and assign the role.
+
+Now that you've run a dynamically targeted Virtual Machine Scale Sets shutdown experiment, you're ready to:
+- [Create an experiment that uses agent-based faults](chaos-studio-tutorial-agent-based-portal.md)
+- [Manage your experiment](chaos-studio-run-experiment.md)
cognitive-services Firewalls https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/firewalls.md
Previously updated : 12/06/2021 Last updated : 04/19/2023
-# How to translate behind IP firewalls with Translator
+# Use Translator behind firewalls
-Translator can translate behind firewalls using either domain-name or IP filtering. Domain-name filtering is the preferred method. If you still require IP filtering, we suggest you to get the [IP addresses details using service tag](../../virtual-network/service-tags-overview.md#service-tags-on-premises). Translator is under the **CognitiveServicesManagement** service tag.
+Translator can translate behind firewalls using either [Domain-name](../../firewall/dns-settings.md#configure-dns-proxyazure-portal) or [IP filtering](#configure-firewall). Domain-name filtering is the preferred method.
-We **do not recommend** running Microsoft Translator from behind a specific IP filtered firewall. The setup is likely to break in the future without notice.
+If you still require IP filtering, you can get the [IP addresses details using service tag](../../virtual-network/service-tags-overview.md#discover-service-tags-by-using-downloadable-json-files). Translator is under the **CognitiveServicesManagement** service tag.
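If you script your firewall configuration, one way to pull the current ranges is to parse the downloadable service tags JSON file. The following sketch assumes you've already downloaded the file (the file name shown is an example) and that it uses the standard `values`/`properties`/`addressPrefixes` layout:

```python
# Sketch: list the address prefixes for the CognitiveServicesManagement service tag
# from a locally downloaded Azure service tags JSON file.
import json

with open('ServiceTags_Public.json') as f:          # example file name
    service_tags = json.load(f)

prefixes = next(
    tag['properties']['addressPrefixes']
    for tag in service_tags['values']
    if tag['name'] == 'CognitiveServicesManagement')
print('\n'.join(prefixes))
```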
+
+## Configure firewall
+
+ Navigate to your Translator resource in the Azure portal.
+
+1. Select **Networking** from the **Resource Management** section.
+1. Under the **Firewalls and virtual networks** tab, choose **Selected Networks and Private Endpoints**.
+
+ :::image type="content" source="media/firewall-setting-azure-portal.png" alt-text="Screenshot of the firewall setting in the Azure portal.":::
+
+ > [!NOTE]
+ >
+ > * Once you enable **Selected Networks and Private Endpoints**, you must use the **Virtual Network** endpoint to call the Translator. You can't use the standard translator endpoint (`api.cognitive.microsofttranslator.com`) and you can't authenticate with an access token.
+ > * For more information, *see* [**Virtual Network Support**](reference/v3-0-reference.md#virtual-network-support).
+
+1. To grant access to an internet IP range, enter the IP address or address range (in [CIDR format](https://tools.ietf.org/html/rfc4632)) under **Firewall** > **Address Range**. Only valid public IP (`non-reserved`) addresses are accepted.
+
+Running Microsoft Translator from behind a specific IP filtered firewall is **not recommended**. The setup is likely to break in the future without notice.
The IP addresses for Translator geographical endpoints as of September 21, 2021 are:
|United States|api-nam.cognitive.microsofttranslator.com|20.42.6.144, 20.49.96.128, 40.80.190.224, 40.64.128.192|
|Europe|api-eur.cognitive.microsofttranslator.com|20.50.1.16, 20.38.87.129|
|Asia Pacific|api-apc.cognitive.microsofttranslator.com|40.80.170.160, 20.43.132.96, 20.37.196.160, 20.43.66.16|
+
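If you do maintain an IP allowlist, it helps to check periodically whether the geographical endpoints still resolve to the published addresses. The following Python sketch (standard library only) resolves each endpoint and flags any address that isn't in your allowlist; the IP values are copied from the table above and can change without notice, which is why domain-name filtering or service tags remain the preferred approach.

```python
import socket

# Allowlist copied from the table above (September 21, 2021 snapshot).
# These values can change without notice; prefer domain-name filtering
# or the CognitiveServicesManagement service tag where possible.
ALLOWED = {
    "api-nam.cognitive.microsofttranslator.com": {
        "20.42.6.144", "20.49.96.128", "40.80.190.224", "40.64.128.192"},
    "api-eur.cognitive.microsofttranslator.com": {
        "20.50.1.16", "20.38.87.129"},
    "api-apc.cognitive.microsofttranslator.com": {
        "40.80.170.160", "20.43.132.96", "20.37.196.160", "20.43.66.16"},
}

for host, allowed_ips in ALLOWED.items():
    # getaddrinfo returns every address record currently served for the host.
    resolved = {info[4][0] for info in socket.getaddrinfo(host, 443, proto=socket.IPPROTO_TCP)}
    unexpected = sorted(resolved - allowed_ips)
    status = "OK" if not unexpected else f"NOT in allowlist: {unexpected}"
    print(f"{host}: resolved {sorted(resolved)} -> {status}")
```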
+## Next steps
+
+[**Translator virtual network support**](reference/v3-0-reference.md#virtual-network-support)
+
+[**Configure virtual networks**](../cognitive-services-virtual-networks.md#grant-access-from-an-internet-ip-range)
cognitive-services V3 0 Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/reference/v3-0-reference.md
Previously updated : 12/06/2021 Last updated : 04/20/2023
Version 3 of the Translator provides a modern JSON-based Web API. It improves us
Requests to Translator are, in most cases, handled by the datacenter that is closest to where the request originated. If there's a datacenter failure when using the global endpoint, the request may be routed outside of the geography.
-To force the request to be handled within a specific geography, use the desired geographical endpoint. All requests are processed among the datacenters within the geography.
+To force the request to be handled within a specific geography, use the desired geographical endpoint. All requests are processed among the datacenters within the geography.
|Geography|Base URL (geographical endpoint)|Datacenters|
|:--|:--|:--|
-|Global (non-regional)| api.cognitive.microsofttranslator.com|Closest available datacenter|
+|Global (`non-regional`)| api.cognitive.microsofttranslator.com|Closest available datacenter|
|Asia Pacific| api-apc.cognitive.microsofttranslator.com|Korea South, Japan East, Southeast Asia, and Australia East|
|Europe| api-eur.cognitive.microsofttranslator.com|North Europe, West Europe|
|United States| api-nam.cognitive.microsofttranslator.com|East US, South Central US, West Central US, and West US 2|
-<sup>1</sup> Customers with a resource located in Switzerland North or Switzerland West can ensure that their Text API requests are served within Switzerland. To ensure that requests are handled in Switzerland, create the Translator resource in the 'Resource region' 'Switzerland North' or 'Switzerland West', then use the resource's custom endpoint in your API requests. For example: If you create a Translator resource in Azure portal with 'Resource region' as 'Switzerland North' and your resource name is 'my-swiss-n', then your custom endpoint is "https://my-swiss-n.cognitiveservices.azure.com". And a sample request to translate is:
+<sup>`1`</sup> Customers with a resource located in Switzerland North or Switzerland West can ensure that their Text API requests are served within Switzerland. To ensure that requests are handled in Switzerland, create the Translator resource in the 'Resource region' 'Switzerland North' or 'Switzerland West', then use the resource's custom endpoint in your API requests. For example: if you create a Translator resource in the Azure portal with 'Resource region' set to 'Switzerland North' and your resource name is 'my-swiss-n', your custom endpoint is "https://my-swiss-n.cognitiveservices.azure.com". A sample translation request is:
```curl
// Pass secret key and region using headers to a custom endpoint
curl -X POST "https://my-swiss-n.cognitiveservices.azure.com/translator/text/v3.0/translate?to=fr" \
     -H "Content-Type: application/json" \
     -d "[{'Text':'Hello'}]" -v
```
-<sup>2</sup> Custom Translator isn't currently available in Switzerland.
+<sup>`2`</sup> Custom Translator isn't currently available in Switzerland.
## Authentication
There are three headers that you can use to authenticate your subscription. This
|Authorization|*Use with Cognitive Services subscription if you're passing an authentication token.*<br/>The value is the Bearer token: `Bearer <token>`.|
|Ocp-Apim-Subscription-Region|*Use with Cognitive Services multi-service and regional translator resource.*<br/>The value is the region of the multi-service or regional translator resource. This value is optional when using a global translator resource.|
-### Secret key
+### Secret key
+ The first option is to authenticate using the `Ocp-Apim-Subscription-Key` header. Add the `Ocp-Apim-Subscription-Key: <YOUR_SECRET_KEY>` header to your request. #### Authenticating with a global resource
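As a minimal illustration of secret-key authentication against the global endpoint, here's a hedged Python sketch using the third-party `requests` package; the key is a placeholder, and with a global Translator resource no region header is required.

```python
import requests

# Placeholder: replace with your own Translator key. With a global resource,
# the Ocp-Apim-Subscription-Key header is sufficient (no region header).
key = "<YOUR-SECRET-KEY>"

response = requests.post(
    "https://api.cognitive.microsofttranslator.com/translate",
    params={"api-version": "3.0", "to": "fr"},
    headers={"Ocp-Apim-Subscription-Key": key, "Content-Type": "application/json"},
    json=[{"Text": "Hello"}],
)
response.raise_for_status()
print(response.json())
```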
An authentication token is valid for 10 minutes. The token should be reused when
|:--|:-|
|Authorization| The value is an access **bearer token** generated by Azure AD.</br><ul><li> The bearer token provides proof of authentication and validates the client's authorization to use the resource.</li><li> An authentication token is valid for 10 minutes and should be reused when making multiple calls to Translator.</br></li>*See* [Sample request: 2. Get a token](../../authentication.md?tabs=powershell#sample-request)</ul>|
|Ocp-Apim-Subscription-Region| The value is the region of the **translator resource**.</br><ul><li> This value is optional if the resource is global.</li></ul>|
-|Ocp-Apim-ResourceId| The value is the Resource ID for your Translator resource instance.</br><ul><li>You'll find the Resource ID in the Azure portal at **Translator Resource → Properties**. </li><li>Resource ID format: </br>/subscriptions/<**subscriptionId**>/resourceGroups/<**resourceGroupName**>/providers/Microsoft.CognitiveServices/accounts/<**resourceName**>/</li></ul>|
+|Ocp-Apim-ResourceId| The value is the Resource ID for your Translator resource instance.</br><ul><li>You find the Resource ID in the Azure portal at **Translator Resource → Properties**. </li><li>Resource ID format: </br>/subscriptions/<**subscriptionId**>/resourceGroups/<**resourceGroupName**>/providers/Microsoft.CognitiveServices/accounts/<**resourceName**>/</li></ul>|
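To show how these headers fit together, here's an illustrative Python sketch of an Azure AD-authenticated call; the token, region, and resource ID values are placeholders you obtain as described above, and you should confirm the header set against the authentication samples linked in the table.

```python
import requests

# Placeholders: an access token obtained from Azure AD, plus the values
# described in the table above. The region header is optional for a
# global resource.
token = "<ACCESS-TOKEN-FROM-AZURE-AD>"
resource_id = "/subscriptions/<subscriptionId>/resourceGroups/<resourceGroupName>/providers/Microsoft.CognitiveServices/accounts/<resourceName>/"
region = "<YOUR-RESOURCE-REGION>"

response = requests.post(
    "https://api.cognitive.microsofttranslator.com/translate",
    params={"api-version": "3.0", "to": "de"},
    headers={
        "Authorization": f"Bearer {token}",
        "Ocp-Apim-ResourceId": resource_id,
        "Ocp-Apim-Subscription-Region": region,
        "Content-Type": "application/json",
    },
    json=[{"Text": "Hello"}],
)
print(response.status_code, response.json())
```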
##### **Translator property page - Azure portal**
Once you turn on this capability, you must use the custom endpoint to call the T
You can find the custom endpoint after you create a [translator resource](https://portal.azure.com/#create/Microsoft.CognitiveServicesTextTranslation) and allow access from selected networks and private endpoints.
+1. Navigate to your Translator resource in the Azure portal.
+1. Select **Networking** from the **Resource Management** section.
+1. Under the **Firewalls and virtual networks** tab, choose **Selected Networks and Private Endpoints**.
+
+ :::image type="content" source="../media/virtual-network-setting-azure-portal.png" alt-text="Screenshot of the virtual network setting in the Azure portal.":::
+
+1. Select **Save** to apply your changes.
+1. Select **Keys and Endpoint** from the **Resource Management** section.
+1. Select the **Virtual Network** tab.
+1. Listed there are the endpoints for Text Translation and Document Translation.
+
+ :::image type="content" source="../media/virtual-network-endpoint.png" alt-text="Screenshot of the virtual network endpoint.":::
|Headers|Description|
|:--|:-|
|Ocp-Apim-Subscription-Key| The value is the Azure secret key for your subscription to Translator.|
curl -X POST "https://<your-custom-domain>.cognitiveservices.azure.com/translato
A standard error response is a JSON object with name/value pair named `error`. The value is also a JSON object with properties:
- * `code`: A server-defined error code.
- * `message`: A string giving a human-readable representation of the error.
+* `code`: A server-defined error code.
+* `message`: A string giving a human-readable representation of the error.
For example, a customer with a free trial subscription would receive the following error once the free quota is exhausted:
cognitive-services Data Formats https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-text-analytics-for-health/concepts/data-formats.md
- Title: Custom Text Analytics for health data formats-
-description: Learn about the data formats accepted by custom text analytics for health.
------ Previously updated : 04/14/2023----
-# Accepted data formats in custom text analytics for health
-
-Use this article to learn about formatting your data to be imported into custom text analytics for health.
-
-If you are trying to [import your data](../how-to/create-project.md#import-project) into custom Text Analytics for health, it has to follow a specific format. If you don't have data to import, you can [create your project](../how-to/create-project.md) and use the Language Studio to [label your documents](../how-to/label-data.md).
-
-Your Labels file should be in the `json` format below to be used when importing your labels into a project.
-
-```json
-{
- "projectFileVersion": "{API-VERSION}",
- "stringIndexType": "Utf16CodeUnit",
- "metadata": {
- "projectName": "{PROJECT-NAME}",
- "projectKind": "CustomHealthcare",
- "description": "Trying out custom Text Analytics for health",
- "language": "{LANGUAGE-CODE}",
- "multilingual": true,
- "storageInputContainerName": "{CONTAINER-NAME}",
- "settings": {}
- },
- "assets": {
- "projectKind": "CustomHealthcare",
- "entities": [
- {
- "category": "Entity1",
- "compositionSetting": "{COMPOSITION-SETTING}",
- "list": {
- "sublists": [
- {
- "listKey": "One",
- "synonyms": [
- {
- "language": "en",
- "values": [
- "EntityNumberOne",
- "FirstEntity"
- ]
- }
- ]
- }
- ]
- }
- },
- {
- "category": "Entity2"
- },
- {
- "category": "MedicationName",
- "list": {
- "sublists": [
- {
- "listKey": "research drugs",
- "synonyms": [
- {
- "language": "en",
- "values": [
- "rdrug a",
- "rdrug b"
- ]
- }
- ]
-
- }
- ]
-                },
-            "prebuilts": "MedicationName"
- }
- ],
- "documents": [
- {
- "location": "{DOCUMENT-NAME}",
- "language": "{LANGUAGE-CODE}",
- "dataset": "{DATASET}",
- "entities": [
- {
- "regionOffset": 0,
- "regionLength": 500,
- "labels": [
- {
- "category": "Entity1",
- "offset": 25,
- "length": 10
- },
- {
- "category": "Entity2",
- "offset": 120,
- "length": 8
- }
- ]
- }
- ]
- },
- {
- "location": "{DOCUMENT-NAME}",
- "language": "{LANGUAGE-CODE}",
- "dataset": "{DATASET}",
- "entities": [
- {
- "regionOffset": 0,
- "regionLength": 100,
- "labels": [
- {
- "category": "Entity2",
- "offset": 20,
- "length": 5
- }
- ]
- }
- ]
- }
- ]
- }
-}
-
-```
-
-|Key |Placeholder |Value | Example |
-|||-|--|
-| `multilingual` | `true`| A boolean value that enables you to have documents in multiple languages in your dataset and when your model is deployed you can query the model in any supported language (not necessarily included in your training documents). See [language support](../language-support.md#) to learn more about multilingual support. | `true`|
-|`projectName`|`{PROJECT-NAME}`|Project name|`myproject`|
-| `storageInputContainerName` |`{CONTAINER-NAME}`|Container name|`mycontainer`|
-| `entities` | | Array containing all the entity types you have in the project. These are the entity types that are extracted from your documents.| |
-| `category` | | The name of the entity type, which can be user defined for new entity definitions, or predefined for prebuilt entities. For more information, see the entity naming rules below.| |
-|`compositionSetting`|`{COMPOSITION-SETTING}`|Rule that defines how to manage multiple components in your entity. Options are `combineComponents` or `separateComponents`. |`combineComponents`|
-| `list` | | Array containing all the sublists you have in the project for a specific entity. Lists can be added to prebuilt entities or new entities with learned components.| |
-|`sublists`|`[]`|Array containing sublists. Each sublist is a key and its associated values.|`[]`|
-| `listKey`| `One` | A normalized value for the list of synonyms to map back to in prediction. | `One` |
-|`synonyms`|`[]`|Array containing all the synonyms|synonym|
-| `language` | `{LANGUAGE-CODE}` | A string specifying the language code for the synonym in your sublist. If your project is a multilingual project and you want to support your list of synonyms for all the languages in your project, you have to explicitly add your synonyms to each language. See [Language support](../language-support.md) for more information about supported language codes. |`en`|
-| `values`| `"EntityNumberone"`, `"FirstEntity"` | A list of comma-separated strings that are matched exactly for extraction and mapped to the list key. | `"EntityNumberone"`, `"FirstEntity"` |
-| `prebuilts` | `MedicationName` | The name of the prebuilt component populating the prebuilt entity. [Prebuilt entities](../../text-analytics-for-health/concepts/health-entity-categories.md) are automatically loaded into your project by default but you can extend them with list components in your labels file. | `MedicationName` |
-| `documents` | | Array containing all the documents in your project and list of the entities labeled within each document. | [] |
-| `location` | `{DOCUMENT-NAME}` | The location of the documents in the storage container. Since all the documents are in the root of the container this should be the document name.|`doc1.txt`|
-| `dataset` | `{DATASET}` | The dataset to which this document is assigned when the data is split before training. Learn more about data splitting [here](../how-to/train-model.md#data-splitting). Possible values for this field are `Train` and `Test`. |`Train`|
-| `regionOffset` | | The inclusive character position of the start of the text. |`0`|
-| `regionLength` | | The length of the bounding box in terms of UTF16 characters. Training only considers the data in this region. |`500`|
-| `category` | | The type of entity associated with the span of text specified. | `Entity1`|
-| `offset` | | The start position for the entity text. | `25`|
-| `length` | | The length of the entity in terms of UTF16 characters. | `20`|
-| `language` | `{LANGUAGE-CODE}` | A string specifying the language code for the document used in your project. If your project is a multilingual project, choose the language code of the majority of the documents. See [Language support](../language-support.md) for more information about supported language codes. |`en`|
-
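As a sanity check before importing, you can assemble a minimal labels file programmatically and confirm it serializes to valid JSON. The Python sketch below uses placeholder project, container, and document names, and keeps the `{API-VERSION}` placeholder for you to fill in.

```python
import json

# Placeholders: project, container, and document names are examples only;
# keep {API-VERSION} until you know which API version you'll import with.
labels = {
    "projectFileVersion": "{API-VERSION}",
    "stringIndexType": "Utf16CodeUnit",
    "metadata": {
        "projectName": "myproject",
        "projectKind": "CustomHealthcare",
        "description": "Trying out custom Text Analytics for health",
        "language": "en",
        "multilingual": True,
        "storageInputContainerName": "mycontainer",
        "settings": {},
    },
    "assets": {
        "projectKind": "CustomHealthcare",
        "entities": [
            {"category": "Entity1"},
            # Prebuilt entity referenced only through its prebuilt component here.
            {"category": "MedicationName", "prebuilts": "MedicationName"},
        ],
        "documents": [
            {
                "location": "doc1.txt",
                "language": "en",
                "dataset": "Train",
                "entities": [
                    {
                        "regionOffset": 0,
                        "regionLength": 100,
                        "labels": [{"category": "Entity1", "offset": 20, "length": 5}],
                    }
                ],
            }
        ],
    },
}

with open("labels.json", "w", encoding="utf-8") as f:
    json.dump(labels, f, indent=2)  # fails fast if the structure isn't serializable
```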
-## Entity naming rules
-
-1. [Prebuilt entity names](../../text-analytics-for-health/concepts/health-entity-categories.md) are predefined. They must be populated with a prebuilt component and it must match the entity name.
-2. New user defined entities (entities with learned components or labeled text) can't use prebuilt entity names.
-3. New user defined entities can't be populated with prebuilt components, as prebuilt components must match their associated entity names and have no labeled data assigned to them in the documents array.
---
-## Next steps
-* You can import your labeled data into your project directly. Learn how to [import project](../how-to/create-project.md#import-project)
-* See the [how-to article](../how-to/label-data.md) more information about labeling your data.
-* When you're done labeling your data, you can [train your model](../how-to/train-model.md).
cognitive-services Entity Components https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-text-analytics-for-health/concepts/entity-components.md
- Title: Entity components in custom Text Analytics for health-
-description: Learn how custom Text Analytics for health extracts entities from text
------ Previously updated : 04/14/2023----
-# Entity components in custom text analytics for health
-
-In custom Text Analytics for health, entities are relevant pieces of information that are extracted from your unstructured input text. An entity can be extracted by different methods. They can be learned through context, matched from a list, or detected by a prebuilt recognized entity. Every entity in your project is composed of one or more of these methods, which are defined as your entity's components. When an entity is defined by more than one component, their predictions can overlap. You can determine the behavior of an entity prediction when its components overlap by using a fixed set of options in the **Entity options**.
-
-## Component types
-
-An entity component determines a way you can extract the entity. An entity can contain one component, which would determine the only method that would be used to extract the entity, or multiple components to expand the ways in which the entity is defined and extracted.
-
-The [Text Analytics for health entities](../../text-analytics-for-health/concepts/health-entity-categories.md) are automatically loaded into your project as entities with prebuilt components. You can define list components for entities with prebuilt components but you can't add learned components. Similarly, you can create new entities with learned and list components, but you can't populate them with additional prebuilt components.
-
-### Learned component
-
-The learned component uses the entity tags you label your text with to train a machine learned model. The model learns to predict where the entity is, based on the context within the text. Your labels provide examples of where the entity is expected to be present in text, based on the meaning of the words around it and the words that were labeled themselves. This component is only defined if you add labels to your data for the entity. If you don't label any data, the entity won't have a learned component.
-
-The Text Analytics for health entities, which have prebuilt components by default, can't be extended with learned components, meaning they don't require or accept further labeling to function.
--
-### List component
-
-The list component represents a fixed, closed set of related words along with their synonyms. The component performs an exact text match against the list of values you provide as synonyms. Each synonym belongs to a "list key", which can be used as the normalized, standard value for the synonym that's returned in the output if the list component is matched. List keys are **not** used for matching.
-
-In multilingual projects, you can specify a different set of synonyms for each language. While using the prediction API, you can specify the language in the input request, which will only match the synonyms associated to that language.
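To make the list-component behavior concrete, here's a simplified, illustrative Python sketch of exact synonym matching that maps a matched synonym back to its normalized list key. It only illustrates the concept; it isn't how the service implements matching.

```python
# Illustrative only: exact-match lookup from synonym to normalized list key.
# Synonyms are defined per language; list keys are returned, never matched.
sublists = {
    "One": {"en": ["EntityNumberOne", "FirstEntity"]},
}

def match_list_component(text: str, language: str):
    matches = []
    for list_key, synonyms_by_language in sublists.items():
        for synonym in synonyms_by_language.get(language, []):
            start = text.find(synonym)
            if start != -1:
                # The normalized list key is what comes back in the output.
                matches.append({"text": synonym, "offset": start, "listKey": list_key})
    return matches

print(match_list_component("Please log FirstEntity here", "en"))
# [{'text': 'FirstEntity', 'offset': 11, 'listKey': 'One'}]
```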
---
-### Prebuilt component
-
-The [Text Analytics for health entities](../../text-analytics-for-health/concepts/health-entity-categories.md) are automatically loaded into your project as entities with prebuilt components. You can define list components for entities with prebuilt components but you cannot add learned components. Similarly, you can create new entities with learned and list components, but you cannot populate them with additional prebuilt components. Entities with prebuilt components are pretrained and can extract information relating to their categories without any labels.
---
-## Entity options
-
-When multiple components are defined for an entity, their predictions may overlap. When an overlap occurs, each entity's final prediction is determined by one of the following options.
-
-### Combine components
-
-Combine components as one entity when they overlap by taking the union of all the components.
-
-Use this to combine all components when they overlap. When components are combined, you get all the extra information that's tied to a list or prebuilt component when they are present.
-
-#### Example
-
-Suppose you have an entity called Software that has a list component, which contains "Proseware OS" as an entry. In your input data, you have "I want to buy Proseware OS 9" with "Proseware OS 9" tagged as Software:
--
-By using combine components, the entity will return with the full context as "Proseware OS 9" along with the key from the list component:
--
-Suppose you had the same utterance but only "OS 9" was predicted by the learned component:
--
-With combine components, the entity will still return as "Proseware OS 9" with the key from the list component:
---
-### Don't combine components
-
-Each overlapping component will return as a separate instance of the entity. Apply your own logic after prediction with this option.
-
-#### Example
-
-Suppose you have an entity called Software that has a list component, which contains "Proseware Desktop" as an entry. In your labeled data, you have "I want to buy Proseware Desktop Pro" with "Proseware Desktop Pro" labeled as Software:
--
-When you do not combine components, the entity will return twice:
---
-## How to use components and options
-
-Components give you the flexibility to define your entity in more than one way. When you combine components, you make sure that each component is represented and you reduce the number of entities returned in your predictions.
-
-A common practice is to extend a prebuilt component with a list of values that the prebuilt might not support. For example, if you have a **Medication Name** entity, which has a `Medication.Name` prebuilt component added to it, the entity may not predict all the medication names specific to your domain. You can use a list component to extend the values of the Medication Name entity and thereby extend the prebuilt component with your own medication name values.
-
-Other times you may be interested in extracting an entity through context, such as a **medical device**. You would add labels for the learned component of the medical device entity so the model learns _where_ a medical device is mentioned, based on its position within the sentence. You may also have a list of medical devices that you already know beforehand that you'd like to always extract. Combining both components in one entity allows you to get both options for the entity.
-
-When you don't combine components, you allow every component to act as an independent entity extractor. One way of using this option is to separate the entities extracted from a list from the ones extracted through the learned or prebuilt components, to handle and treat them differently.
--
-## Next steps
-
-* [Entities with prebuilt components](../../text-analytics-for-health/concepts/health-entity-categories.md)
cognitive-services Evaluation Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-text-analytics-for-health/concepts/evaluation-metrics.md
- Title: Custom text analytics for health evaluation metrics-
-description: Learn about evaluation metrics in custom Text Analytics for health
------ Previously updated : 04/14/2023----
-# Evaluation metrics for custom Text Analytics for health models
-
-Your [dataset is split](../how-to/train-model.md#data-splitting) into two parts: a set for training, and a set for testing. The training set is used to train the model, while the testing set is used as a test for model after training to calculate the model performance and evaluation. The testing set is not introduced to the model through the training process, to make sure that the model is tested on new data.
-
-Model evaluation is triggered automatically after training is completed successfully. The evaluation process starts by using the trained model to predict user defined entities for documents in the test set, and compares them with the provided data labels (which establishes a baseline of truth). The results are returned so you can review the model's performance. User defined entities are **included** in the evaluation factoring in Learned and List components; Text Analytics for health prebuilt entities are **not** factored in the model evaluation. For evaluation, custom Text Analytics for health uses the following metrics:
-
-* **Precision**: Measures how precise/accurate your model is. It is the ratio between the correctly identified positives (true positives) and all identified positives. The precision metric reveals how many of the predicted entities are correctly labeled.
-
- `Precision = #True_Positive / (#True_Positive + #False_Positive)`
-
-* **Recall**: Measures the model's ability to predict actual positive classes. It is the ratio between the predicted true positives and what was actually tagged. The recall metric reveals how many of the labeled entities are correctly predicted.
-
- `Recall = #True_Positive / (#True_Positive + #False_Negatives)`
-
-* **F1 score**: The F1 score is a function of Precision and Recall. It's needed when you seek a balance between Precision and Recall.
-
- `F1 Score = 2 * Precision * Recall / (Precision + Recall)` <br>
-
->[!NOTE]
-> Precision, recall and F1 score are calculated for each entity separately (*entity-level* evaluation) and for the model collectively (*model-level* evaluation).
-
-## Model-level and entity-level evaluation metrics
-
-Precision, recall, and F1 score are calculated for each entity separately (entity-level evaluation) and for the model collectively (model-level evaluation).
-
-The definitions of precision, recall, and F1 score are the same for both entity-level and model-level evaluations. However, the counts for *True Positives*, *False Positives*, and *False Negatives* can differ. For example, consider the following text.
-
-### Example
-
-*The first party of this contract is John Smith, resident of 5678 Main Rd., City of Frederick, state of Nebraska. And the second party is Forrest Ray, resident of 123-345 Integer Rd., City of Corona, state of New Mexico. There is also Fannie Thomas resident of 7890 River Road, city of Colorado Springs, State of Colorado.*
-
-The model extracting entities from this text could have the following predictions:
-
-| Entity | Predicted as | Actual type |
-|--|--|--|
-| John Smith | Person | Person |
-| Frederick | Person | City |
-| Forrest | City | Person |
-| Fannie Thomas | Person | Person |
-| Colorado Springs | City | City |
-
-### Entity-level evaluation for the *person* entity
-
-The model would have the following entity-level evaluation, for the *person* entity:
-
-| Key | Count | Explanation |
-|--|--|--|
-| True Positive | 2 | *John Smith* and *Fannie Thomas* were correctly predicted as *person*. |
-| False Positive | 1 | *Frederick* was incorrectly predicted as *person* while it should have been *city*. |
-| False Negative | 1 | *Forrest* was incorrectly predicted as *city* while it should have been *person*. |
-
-* **Precision**: `#True_Positive / (#True_Positive + #False_Positive)` = `2 / (2 + 1) = 0.67`
-* **Recall**: `#True_Positive / (#True_Positive + #False_Negatives)` = `2 / (2 + 1) = 0.67`
-* **F1 Score**: `2 * Precision * Recall / (Precision + Recall)` = `(2 * 0.67 * 0.67) / (0.67 + 0.67) = 0.67`
-
-### Entity-level evaluation for the *city* entity
-
-The model would have the following entity-level evaluation, for the *city* entity:
-
-| Key | Count | Explanation |
-|--|--|--|
-| True Positive | 1 | *Colorado Springs* was correctly predicted as *city*. |
-| False Positive | 1 | *Forrest* was incorrectly predicted as *city* while it should have been *person*. |
-| False Negative | 1 | *Frederick* was incorrectly predicted as *person* while it should have been *city*. |
-
-* **Precision** = `#True_Positive / (#True_Positive + #False_Positive)` = `1 / (1 + 1) = 0.5`
-* **Recall** = `#True_Positive / (#True_Positive + #False_Negatives)` = `1 / (1 + 1) = 0.5`
-* **F1 Score** = `2 * Precision * Recall / (Precision + Recall)` = `(2 * 0.5 * 0.5) / (0.5 + 0.5) = 0.5`
-
-### Model-level evaluation for the collective model
-
-The model would have the following evaluation for the model in its entirety:
-
-| Key | Count | Explanation |
-|--|--|--|
-| True Positive | 3 | *John Smith* and *Fannie Thomas* were correctly predicted as *person*. *Colorado Springs* was correctly predicted as *city*. This is the sum of true positives for all entities. |
-| False Positive | 2 | *Forrest* was incorrectly predicted as *city* while it should have been *person*. *Frederick* was incorrectly predicted as *person* while it should have been *city*. This is the sum of false positives for all entities. |
-| False Negative | 2 | *Forrest* was incorrectly predicted as *city* while it should have been *person*. *Frederick* was incorrectly predicted as *person* while it should have been *city*. This is the sum of false negatives for all entities. |
-
-* **Precision** = `#True_Positive / (#True_Positive + #False_Positive)` = `3 / (3 + 2) = 0.6`
-* **Recall** = `#True_Positive / (#True_Positive + #False_Negatives)` = `3 / (3 + 2) = 0.6`
-* **F1 Score** = `2 * Precision * Recall / (Precision + Recall)` = `(2 * 0.6 * 0.6) / (0.6 + 0.6) = 0.6`
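The arithmetic above is easy to reproduce. This short Python sketch recomputes the entity-level and model-level scores from the true positive, false positive, and false negative counts in the tables.

```python
def scores(tp: int, fp: int, fn: int):
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return round(precision, 2), round(recall, 2), round(f1, 2)

# Counts taken from the tables above.
print("person:", scores(tp=2, fp=1, fn=1))  # (0.67, 0.67, 0.67)
print("city:  ", scores(tp=1, fp=1, fn=1))  # (0.5, 0.5, 0.5)
print("model: ", scores(tp=3, fp=2, fn=2))  # (0.6, 0.6, 0.6)
```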
-
-## Interpreting entity-level evaluation metrics
-
-So what does it actually mean to have high precision or high recall for a certain entity?
-
-| Recall | Precision | Interpretation |
-|--|--|--|
-| High | High | This entity is handled well by the model. |
-| Low | High | The model cannot always extract this entity, but when it does it is with high confidence. |
-| High | Low | The model extracts this entity well. However, other spans are sometimes incorrectly extracted as this entity type, which lowers its precision. |
-| Low | Low | This entity type is poorly handled by the model, because it is not usually extracted. When it is, it is not with high confidence. |
-
-## Guidance
-
-After you train your model, you see guidance and recommendations on how to improve the model. It's recommended to have a model covering all points in the guidance section.
-
-* Training set has enough data: When an entity type has fewer than 15 labeled instances in the training data, it can lead to lower accuracy due to the model not being adequately trained on these cases. In this case, consider adding more labeled data in the training set. You can check the *data distribution* tab for more guidance.
-
-* All entity types are present in test set: When the testing data lacks labeled instances for an entity type, the model's test performance may become less comprehensive due to untested scenarios. You can check the *test set data distribution* tab for more guidance.
-
-* Entity types are balanced within training and test sets: When sampling bias causes an inaccurate representation of an entity type's frequency, it can lead to lower accuracy due to the model expecting that entity type to occur too often or too little. You can check the *data distribution* tab for more guidance.
-
-* Entity types are evenly distributed between training and test sets: When the mix of entity types doesn't match between training and test sets, it can lead to lower testing accuracy due to the model being trained differently from how it's being tested. You can check the *data distribution* tab for more guidance.
-
-* Unclear distinction between entity types in training set: When the training data is similar for multiple entity types, it can lead to lower accuracy because the entity types may be frequently misclassified as each other. Review the following entity types and consider merging them if they're similar. Otherwise, add more examples to better distinguish them from each other. You can check the *confusion matrix* tab for more guidance.
--
-## Confusion matrix
-
-A Confusion matrix is an N x N matrix used for model performance evaluation, where N is the number of entities.
-The matrix compares the expected labels with the ones predicted by the model.
-This gives a holistic view of how well the model is performing and what kinds of errors it is making.
-
-You can use the Confusion matrix to identify entities that are too close to each other and often get mistaken (ambiguity). In this case consider merging these entity types together. If that isn't possible, consider adding more tagged examples of both entities to help the model differentiate between them.
-
-The highlighted diagonal in the image below is the correctly predicted entities, where the predicted tag is the same as the actual tag.
--
-You can calculate the entity-level and model-level evaluation metrics from the confusion matrix:
-
-* The values in the diagonal are the *True Positive* values of each entity.
-* The sum of the values in the entity rows (excluding the diagonal) is the *false positive* of the model.
-* The sum of the values in the entity columns (excluding the diagonal) is the *false Negative* of the model.
-
-Similarly,
-
-* The *true positive* of the model is the sum of *true Positives* for all entities.
-* The *false positive* of the model is the sum of *false positives* for all entities.
-* The *false Negative* of the model is the sum of *false negatives* for all entities.
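As an illustration, the following Python sketch derives the per-entity and model-level counts from a small confusion matrix laid out as described above (diagonal = true positives, rest of a row = false positives, rest of a column = false negatives); the matrix values mirror the earlier worked example.

```python
# Illustrative 2 x 2 confusion matrix mirroring the earlier example.
entities = ["person", "city"]
matrix = [
    [2, 1],  # predicted person: 2 actually person, 1 actually city
    [1, 1],  # predicted city: 1 actually person, 1 actually city
]

def entity_metrics(i: int):
    tp = matrix[i][i]
    fp = sum(matrix[i]) - tp                 # row, excluding the diagonal
    fn = sum(row[i] for row in matrix) - tp  # column, excluding the diagonal
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return tp, fp, fn, round(precision, 2), round(recall, 2), round(f1, 2)

for i, name in enumerate(entities):
    print(name, entity_metrics(i))

# Model-level counts are the sums of the entity-level counts.
tp = sum(matrix[i][i] for i in range(len(entities)))
fp = sum(sum(matrix[i]) - matrix[i][i] for i in range(len(entities)))
fn = sum(sum(row[i] for row in matrix) - matrix[i][i] for i in range(len(entities)))
print("model:", tp, fp, fn)
```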
-
-## Next steps
-
-* [Custom text analytics for health overview](../overview.md)
-* [View a model's performance in Language Studio](../how-to/view-model-evaluation.md)
-* [Train a model](../how-to/train-model.md)
cognitive-services Call Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-text-analytics-for-health/how-to/call-api.md
- Title: Send a custom Text Analytics for health request to your custom model
-description: Learn how to send a request for custom text analytics for health.
------- Previously updated : 04/14/2023----
-# Send queries to your custom Text Analytics for health model
-
-After the deployment is added successfully, you can query the deployment to extract entities from your text based on the model you assigned to the deployment.
-You can query the deployment programmatically using the [Prediction API](https://aka.ms/ct-runtime-api).
-
-## Test deployed model
-
-You can use Language Studio to submit the custom Text Analytics for health task and visualize the results.
--
-## Send a custom text analytics for health request to your model
-
-# [Language Studio](#tab/language-studio)
--
-# [REST API](#tab/rest-api)
-
-First you will need to get your resource key and endpoint:
--
-### Submit a custom Text Analytics for health task
--
-### Get task results
-----
-## Next steps
-
-* [Custom text analytics for health](../overview.md)
cognitive-services Create Project https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-text-analytics-for-health/how-to/create-project.md
- Title: Using Azure resources in custom Text Analytics for health-
-description: Learn about the steps for using Azure resources with custom text analytics for health.
------ Previously updated : 04/14/2023----
-# How to create custom Text Analytics for health project
-
-Use this article to learn how to set up the requirements for starting with custom text analytics for health and create a project.
-
-## Prerequisites
-
-Before you start using custom text analytics for health, you need:
-
-* An Azure subscription - [Create one for free](https://azure.microsoft.com/free/cognitive-services).
-
-## Create a Language resource
-
-Before you start using custom text analytics for health, you'll need an Azure Language resource. It's recommended to create your Language resource and connect a storage account to it in the Azure portal. Creating a resource in the Azure portal lets you create an Azure storage account at the same time, with all of the required permissions preconfigured. You can also read further in the article to learn how to use a pre-existing resource, and configure it to work with custom text analytics for health.
-
-You'll also need an Azure storage account where you'll upload the `.txt` documents that will be used to train a model to extract entities.
-
-> [!NOTE]
-> * You need to have an **owner** role assigned on the resource group to create a Language resource.
-> * If you connect a pre-existing storage account, you should have an owner role assigned to it.
-
-## Create Language resource and connect storage account
-
-You can create a resource in the following ways:
-
-* The Azure portal
-* Language Studio
-* PowerShell
-
-> [!Note]
-> You shouldn't move the storage account to a different resource group or subscription once it's linked with the Language resource.
-----
-> [!NOTE]
-> * The process of connecting a storage account to your Language resource is irreversible; it cannot be disconnected later.
-> * You can only connect your language resource to one storage account.
-
-## Using a pre-existing Language resource
--
-## Create a custom Text Analytics for health project
-
-Once your resource and storage container are configured, create a new custom text analytics for health project. A project is a work area for building your custom AI models based on your data. Your project can only be accessed by you and others who have access to the Azure resource being used. If you have labeled data, you can use it to get started by [importing a project](#import-project).
-
-### [Language Studio](#tab/language-studio)
--
-### [REST APIs](#tab/rest-api)
----
-## Import project
-
-If you have already labeled data, you can use it to get started with the service. Make sure that your labeled data follows the [accepted data formats](../concepts/data-formats.md).
-
-### [Language Studio](#tab/language-studio)
--
-### [REST APIs](#tab/rest-api)
----
-## Get project details
-
-### [Language Studio](#tab/language-studio)
--
-### [REST APIs](#tab/rest-api)
----
-## Delete project
-
-### [Language Studio](#tab/language-studio)
--
-### [REST APIs](#tab/rest-api)
----
-## Next steps
-
-* You should have an idea of the [project schema](design-schema.md) you will use to label your data.
-
-* After you define your schema, you can start [labeling your data](label-data.md), which will be used for model training, evaluation, and finally making predictions.
cognitive-services Deploy Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-text-analytics-for-health/how-to/deploy-model.md
- Title: Deploy a custom Text Analytics for health model-
-description: Learn about deploying a model for custom Text Analytics for health.
------ Previously updated : 04/14/2023----
-# Deploy a custom text analytics for health model
-
-Once you're satisfied with how your model performs, it's ready to be deployed and used to recognize entities in text. Deploying a model makes it available for use through the [prediction API](https://aka.ms/ct-runtime-swagger).
-
-## Prerequisites
-
-* A successfully [created project](create-project.md) with a configured Azure storage account.
-* Text data that has [been uploaded](design-schema.md#data-preparation) to your storage account.
-* [Labeled data](label-data.md) and a successfully [trained model](train-model.md).
-* Reviewed the [model evaluation details](view-model-evaluation.md) to determine how your model is performing.
-
-For more information, see [project development lifecycle](../overview.md#project-development-lifecycle).
-
-## Deploy model
-
-After you've reviewed your model's performance and decided it can be used in your environment, you need to assign it to a deployment. Assigning the model to a deployment makes it available for use through the [prediction API](https://aka.ms/ct-runtime-swagger). It is recommended to create a deployment named *production* to which you assign the best model you have built so far and use it in your system. You can create another deployment called *staging* to which you can assign the model you're currently working on to be able to test it. You can have a maximum of 10 deployments in your project.
-
-# [Language Studio](#tab/language-studio)
-
-
-# [REST APIs](#tab/rest-api)
-
-### Submit deployment job
--
-### Get deployment job status
----
-## Swap deployments
-
-After you're done testing a model assigned to one deployment and you want to assign this model to another deployment, you can swap these two deployments. Swapping deployments involves taking the model assigned to the first deployment and assigning it to the second deployment, then taking the model assigned to the second deployment and assigning it to the first deployment. You can use this process to swap your *production* and *staging* deployments when you want to take the model assigned to *staging* and assign it to *production*.
-
-# [Language Studio](#tab/language-studio)
--
-# [REST APIs](#tab/rest-api)
-----
-## Delete deployment
-
-# [Language Studio](#tab/language-studio)
--
-# [REST APIs](#tab/rest-api)
----
-## Assign deployment resources
-
-You can [deploy your project to multiple regions](../../concepts/custom-features/multi-region-deployment.md) by assigning different Language resources that exist in different regions.
-
-# [Language Studio](#tab/language-studio)
--
-# [REST APIs](#tab/rest-api)
----
-## Unassign deployment resources
-
-When unassigning or removing a deployment resource from a project, you will also delete all the deployments that have been deployed to that resource's region.
-
-# [Language Studio](#tab/language-studio)
--
-# [REST APIs](#tab/rest-api)
----
-## Next steps
-
-After you have a deployment, you can use it to [extract entities](call-api.md) from text.
cognitive-services Design Schema https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-text-analytics-for-health/how-to/design-schema.md
- Title: Preparing data and designing a schema for custom Text Analytics for health-
-description: Learn about how to select and prepare data, to be successful in creating custom TA4H projects.
------ Previously updated : 04/14/2023----
-# How to prepare data and define a schema for custom Text Analytics for health
-
-In order to create a custom TA4H model, you need quality data to train it. This article covers how you should select and prepare your data, along with defining a schema. Defining the schema is the first step in the [project development lifecycle](../overview.md#project-development-lifecycle), and it entails defining the entity types or categories that you need your model to extract from the text at runtime.
-
-## Schema design
-
-Custom Text Analytics for health allows you to extend and customize the Text Analytics for health entity map. The first step of the process is building your schema, which allows you to define the new entity types or categories that you need your model to extract from text in addition to the Text Analytics for health existing entities at runtime.
-
-* Review documents in your dataset to be familiar with their format and structure.
-
-* Identify the entities you want to extract from the data.
-
- For example, if you are extracting entities from support emails, you might need to extract "Customer name", "Product name", "Request date", and "Contact information".
-
-* Avoid ambiguity between entity types.
-
- **Ambiguity** happens when the entity types you select are similar to each other. The more ambiguous your schema, the more labeled data you need to differentiate between different entity types.
-
- For example, if you are extracting data from a legal contract, to extract "Name of first party" and "Name of second party", you need to add more examples to overcome ambiguity since the names of both parties look similar. Avoiding ambiguity saves time and effort, and yields better results.
-
-* Avoid complex entities. Complex entities can be difficult to pick out precisely from text; consider breaking them down into multiple entities.
-
- For example, extracting "Address" would be challenging if it's not broken down to smaller entities. There are so many variations of how addresses appear that it would take a large number of labeled entities to teach the model to extract an address, as a whole, without breaking it down. However, if you replace "Address" with "Street Name", "PO Box", "City", "State" and "Zip", the model will require fewer labels per entity.
--
-## Add entities
-
-To add entities to your project:
-
-1. Move to **Entities** pivot from the top of the page.
-
-2. [Text Analytics for health entities](../../text-analytics-for-health/concepts/health-entity-categories.md) are automatically loaded into your project. To add additional entity categories, select **Add** from the top menu. You're prompted to type in a name before you finish creating the entity.
-
-3. After creating an entity, you'll be routed to the entity details page where you can define the composition settings for this entity.
-
-4. Entities are defined by [entity components](../concepts/entity-components.md): learned, list or prebuilt. Text Analytics for health entities are by default populated with the prebuilt component and cannot have learned components. Your newly defined entities can be populated with the learned component once you add labels for them in your data but cannot be populated with the prebuilt component.
-
-5. You can add a [list](../concepts/entity-components.md#list-component) component to any of your entities.
-
-
-### Add list component
-
-To add a **list** component, select **Add new list**. You can add multiple lists to each entity.
-
-1. To create a new list, enter a value in the *Enter value* text box. This is the normalized value that's returned when any of the synonym values is extracted.
-
-2. For multilingual projects, from the *language* drop-down menu, select the language of the synonyms list and start typing in your synonyms and hit enter after each one. It is recommended to have synonyms lists in multiple languages.
-
- <!--:::image type="content" source="../media/add-list-component.png" alt-text="A screenshot showing a list component in Language Studio." lightbox="../media/add-list-component.png":::-->
-
-### Define entity options
-
-Change to the **Entity options** pivot in the entity details page. When multiple components are defined for an entity, their predictions may overlap. When an overlap occurs, each entity's final prediction is determined based on the [entity option](../concepts/entity-components.md#entity-options) you select in this step. Select the one that you want to apply to this entity and click on the **Save** button at the top.
-
- <!--:::image type="content" source="../media/entity-options.png" alt-text="A screenshot showing an entity option in Language Studio." lightbox="../media/entity-options.png":::-->
--
-After you create your entities, you can come back and edit them. You can **Edit entity components** or **delete** them by selecting this option from the top menu.
--
-## Data selection
-
-The quality of data you train your model with affects model performance greatly.
-
-* Use real-life data that reflects your domain's problem space to effectively train your model. You can use synthetic data to accelerate the initial model training process, but it will likely differ from your real-life data and make your model less effective when used.
-
-* Balance your data distribution as much as possible without deviating far from the distribution in real-life. For example, if you are training your model to extract entities from legal documents that may come in many different formats and languages, you should provide examples that exemplify the diversity as you would expect to see in real life.
-
-* Use diverse data whenever possible to avoid overfitting your model. Less diversity in training data may lead to your model learning spurious correlations that may not exist in real-life data.
-
-* Avoid duplicate documents in your data. Duplicate data has a negative effect on the training process, model metrics, and model performance.
-
-* Consider where your data comes from. If you are collecting data from one person, department, or part of your scenario, you are likely missing diversity that may be important for your model to learn about.
-
-> [!NOTE]
-> If your documents are in multiple languages, select the **enable multi-lingual** option during [project creation](../quickstart.md) and set the **language** option to the language of the majority of your documents.
-
-## Data preparation
-
-As a prerequisite for creating a project, your training data needs to be uploaded to a blob container in your storage account. You can create and upload training documents from Azure directly, or by using the Azure Storage Explorer tool. Using the Azure Storage Explorer tool allows you to upload more data quickly.
-
-* [Create and upload documents from Azure](../../../../storage/blobs/storage-quickstart-blobs-portal.md#create-a-container)
-* [Create and upload documents using Azure Storage Explorer](../../../../vs-azure-tools-storage-explorer-blobs.md)
-
-You can only use `.txt` documents. If your data is in another format, you can use the [CLUtils parse command](https://github.com/microsoft/CognitiveServicesLanguageUtilities/blob/main/CustomTextAnalytics.CLUtils/Solution/CogSLanguageUtilities.ViewLayer.CliCommands/Commands/ParseCommand/README.md) to change your document format.
-
-You can upload an annotated dataset, or you can upload an unannotated one and [label your data](../how-to/label-data.md) in Language studio.
-
-## Test set
-
-When defining the testing set, make sure to include example documents that are not present in the training set. Defining the testing set is an important step to calculate the [model performance](view-model-evaluation.md#model-details). Also, make sure that the testing set includes documents that represent all entities used in your project.
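One practical way to verify this coverage is to count labels per dataset split in your labels file. Here's a small Python sketch, assuming a labels file in the [accepted data format](../concepts/data-formats.md) and a placeholder `labels.json` filename:

```python
import json
from collections import Counter

# Placeholder filename: your exported labels file in the accepted data format.
with open("labels.json", encoding="utf-8") as f:
    assets = json.load(f)["assets"]

counts = {}
for document in assets["documents"]:
    split = document.get("dataset", "Train")
    counter = counts.setdefault(split, Counter())
    for region in document.get("entities", []):
        for label in region.get("labels", []):
            counter[label["category"]] += 1

for split, counter in counts.items():
    print(split, dict(counter))

# Flag entity types that were labeled for training but never appear in the test set.
missing = set(counts.get("Train", {})) - set(counts.get("Test", {}))
if missing:
    print("No test examples for:", sorted(missing))
```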
-
-## Next steps
-
-If you haven't already, create a custom Text Analytics for health project. If it's your first time using custom Text Analytics for health, consider following the [quickstart](../quickstart.md) to create an example project. You can also see the [how-to article](../how-to/create-project.md) for more details on what you need to create a project.
cognitive-services Fail Over https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-text-analytics-for-health/how-to/fail-over.md
- Title: Back up and recover your custom Text Analytics for health models-
-description: Learn how to save and recover your custom Text Analytics for health models.
------ Previously updated : 04/14/2023----
-# Back up and recover your custom Text Analytics for health models
-
-When you create a Language resource, you specify a region for it to be created in. From then on, your resource and all of the operations related to it take place in the specified Azure server region. It's rare, but not impossible, to encounter a network issue that affects an entire region. If your solution needs to always be available, then you should design it to fail over into another region. This requires two Azure Language resources in different regions and synchronizing custom models across them.
-
-If your app or business depends on the use of a custom Text Analytics for health model, we recommend that you create a replica of your project in an additional supported region. If a regional outage occurs, you can then access your model in the other fail-over region where you replicated your project.
-
-Replicating a project means that you export your project metadata and assets, and import them into a new project. This only makes a copy of your project settings and tagged data. You still need to [train](./train-model.md) and [deploy](./deploy-model.md) the models to be available for use with [prediction APIs](https://aka.ms/ct-runtime-swagger).
-
-In this article, you learn how to use the export and import APIs to replicate your project from one resource to another in a different supported geographical region, along with guidance on keeping your projects in sync and the changes needed to your runtime consumption.
-
-## Prerequisites
-
-* Two Azure Language resources in different Azure regions. [Create your resources](./create-project.md#create-a-language-resource) and connect them to an Azure storage account. It's recommended that you connect each of your Language resources to different storage accounts. Each storage account should be located in the same respective regions that your separate Language resources are in. You can follow the [quickstart](../quickstart.md?pivots=rest-api#create-a-new-azure-language-resource-and-azure-storage-account) to create an additional Language resource and storage account.
--
-## Get your resource keys endpoint
-
-Use the following steps to get the keys and endpoint of your primary and secondary resources. These will be used in the following steps.
--
-> [!TIP]
-> Keep a note of keys and endpoints for both primary and secondary resources as well as the primary and secondary container names. Use these values to replace the following placeholders:
-`{PRIMARY-ENDPOINT}`, `{PRIMARY-RESOURCE-KEY}`, `{PRIMARY-CONTAINER-NAME}`, `{SECONDARY-ENDPOINT}`, `{SECONDARY-RESOURCE-KEY}`, and `{SECONDARY-CONTAINER-NAME}`.
-> Also take note of your project name, your model name and your deployment name. Use these values to replace the following placeholders: `{PROJECT-NAME}`, `{MODEL-NAME}` and `{DEPLOYMENT-NAME}`.
-
-## Export your primary project assets
-
-Start by exporting the project assets from the project in your primary resource.
-
-### Submit export job
-
-Replace the placeholders in the following request with your `{PRIMARY-ENDPOINT}` and `{PRIMARY-RESOURCE-KEY}` that you obtained in the first step.
--
-### Get export job status
-
-Replace the placeholders in the following request with your `{PRIMARY-ENDPOINT}` and `{PRIMARY-RESOURCE-KEY}` that you obtained in the first step.
---
-Copy the response body as you will use it as the body for the next import job.
-
-## Import to a new project
-
-Now go ahead and import the exported project assets in your new project in the secondary region so you can replicate it.
-
-### Submit import job
-
-Replace the placeholders in the following request with your `{SECONDARY-ENDPOINT}`, `{SECONDARY-RESOURCE-KEY}`, and `{SECONDARY-CONTAINER-NAME}` that you obtained in the first step.
--
-### Get import job status
-
-Replace the placeholders in the following request with your `{SECONDARY-ENDPOINT}` and `{SECONDARY-RESOURCE-KEY}` that you obtained in the first step.
---
-## Train your model
-
-After importing your project, you have only copied the project's metadata and assets. You still need to train your model, which incurs usage on your account.
-
-### Submit training job
-
-Replace the placeholders in the following request with your `{SECONDARY-ENDPOINT}` and `{SECONDARY-RESOURCE-KEY}` that you obtained in the first step.
---
-### Get training status
-
-Replace the placeholders in the following request with your `{SECONDARY-ENDPOINT}` and `{SECONDARY-RESOURCE-KEY}` that you obtained in the first step.
--
-## Deploy your model
-
-This is the step where you make your trained model available for consumption via the [runtime prediction API](https://aka.ms/ct-runtime-swagger).
-
-> [!TIP]
-> Use the same deployment name as your primary project for easier maintenance and minimal changes to your system to handle redirecting your traffic.
-
-### Submit deployment job
-
-Replace the placeholders in the following request with your `{SECONDARY-ENDPOINT}` and `{SECONDARY-RESOURCE-KEY}` that you obtained in the first step.
--
-### Get the deployment status
-
-Replace the placeholders in the following request with your `{SECONDARY-ENDPOINT}` and `{SECONDARY-RESOURCE-KEY}` that you obtained in the first step.
--
-## Changes in calling the runtime
-
-Within your system, at the step where you call the [runtime prediction API](https://aka.ms/ct-runtime-swagger), check the response code returned from the submit task API. If you observe a **consistent** failure in submitting the request, this could indicate an outage in your primary region. A single failure doesn't mean an outage; it may be a transient issue. Retry submitting the job through the secondary resource you have created. For the second request, use your `{SECONDARY-ENDPOINT}` and `{SECONDARY-RESOURCE-KEY}`; if you have followed the steps above, `{PROJECT-NAME}` and `{DEPLOYMENT-NAME}` would be the same, so no changes are required to the request body.
-
-If you revert to using your secondary resource, you will observe a slight increase in latency because of the difference in the region where your model is deployed.
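The redirection logic can be a simple retry wrapper. The Python sketch below is only an illustration: `submit_job` is a hypothetical callable you own that posts the task to the runtime prediction API of the given resource and returns the HTTP status code; only the endpoint and key change between the primary and secondary resources.

```python
import time

# Placeholders: fill in with your own resource values.
PRIMARY = {"endpoint": "<PRIMARY-ENDPOINT>", "key": "<PRIMARY-RESOURCE-KEY>"}
SECONDARY = {"endpoint": "<SECONDARY-ENDPOINT>", "key": "<SECONDARY-RESOURCE-KEY>"}

def submit_with_failover(submit_job, retries: int = 3, delay_seconds: int = 5) -> str:
    """submit_job is a hypothetical callable you own: it posts the task to the
    runtime prediction API of the given resource and returns the HTTP status code."""
    for _ in range(retries):
        if submit_job(PRIMARY["endpoint"], PRIMARY["key"]) < 400:
            return "primary"
        time.sleep(delay_seconds)  # a single failure may only be transient
    # Consistent failures may indicate a regional outage; switch resources.
    # The project and deployment names stay the same, so only the endpoint
    # and key in the request change.
    if submit_job(SECONDARY["endpoint"], SECONDARY["key"]) < 400:
        return "secondary"
    return "failed"
```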
-
-## Check if your projects are out of sync
-
-Maintaining the freshness of both projects is an important part of the process. You need to frequently check if any updates were made to your primary project so that you move them over to your secondary project. This way if your primary region fails and you move into the secondary region you should expect similar model performance since it already contains the latest updates. Setting the frequency of checking if your projects are in sync is an important choice. We recommend that you do this check daily in order to guarantee the freshness of data in your secondary model.
-
-### Get project details
-
-Use the following URL to get your project details. One of the keys returned in the response body indicates the last modified date of the project.
-Repeat the following step twice, once for your primary project and once for your secondary project, and compare the timestamps returned for both to check whether they are out of sync.
-
- [!INCLUDE [get project details](../includes/rest-api/get-project-details.md)]
--
-Repeat the same steps for your replicated project using `{SECONDARY-ENDPOINT}` and `{SECONDARY-RESOURCE-KEY}`. Compare the returned `lastModifiedDateTime` from both projects. If your primary project was modified more recently than your secondary one, you need to repeat the steps of [exporting](#export-your-primary-project-assets), [importing](#import-to-a-new-project), [training](#train-your-model) and [deploying](#deploy-your-model).
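For illustration, the daily sync check could be scripted roughly as follows. This is a sketch in Python; the project-details route is illustrative and the placeholders mirror the ones used above.

```python
# Minimal sketch: fetch project details from both resources and compare
# lastModifiedDateTime to decide whether the projects are out of sync.
import requests

def last_modified(endpoint, key, project_name):
    # Illustrative route; use the project details request shown above.
    url = f"{endpoint}/language/authoring/analyze-text/projects/{project_name}?api-version=2022-10-01-preview"
    response = requests.get(url, headers={"Ocp-Apim-Subscription-Key": key})
    response.raise_for_status()
    return response.json()["lastModifiedDateTime"]

primary = last_modified("https://<primary-resource-name>.cognitiveservices.azure.com", "<PRIMARY-RESOURCE-KEY>", "<PROJECT-NAME>")
secondary = last_modified("https://<secondary-resource-name>.cognitiveservices.azure.com", "<SECONDARY-RESOURCE-KEY>", "<PROJECT-NAME>")

# ISO 8601 timestamps in the same format and time zone compare correctly as strings.
if primary > secondary:
    print("Out of sync: export, import, train, and deploy the primary project again.")
```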
--
-## Next steps
-
-In this article, you have learned how to use the export and import APIs to replicate your project to a secondary Language resource in another region. Next, explore the API reference docs to see what else you can do with the authoring APIs.
-
-* [Authoring REST API reference](https://aka.ms/ct-authoring-swagger)
-
-* [Runtime prediction REST API reference](https://aka.ms/ct-runtime-swagger)
cognitive-services Label Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-text-analytics-for-health/how-to/label-data.md
- Title: How to label your data for custom Text Analytics for health-
-description: Learn how to label your data for use with custom Text Analytics for health.
------ Previously updated : 04/14/2023----
-# Label your data using the Language Studio
-
-Data labeling is a crucial step in the development lifecycle. In this step, you label your documents with the new entities you defined in your schema to populate their learned components. This data is used in the next step when training your model, so that your model can learn from the labeled data which entities to extract. If you already have labeled data, you can directly [import](create-project.md#import-project) it into your project, but you need to make sure that your data follows the [accepted data format](../concepts/data-formats.md). See [create project](create-project.md#import-project) to learn more about importing labeled data into your project. If your data isn't labeled already, you can label it in the [Language Studio](https://aka.ms/languageStudio).
-
-## Prerequisites
-
-Before you can label your data, you need:
-
-* A successfully [created project](create-project.md) with a configured Azure blob storage account
-* Text data that [has been uploaded](design-schema.md#data-preparation) to your storage account.
-
-See the [project development lifecycle](../overview.md#project-development-lifecycle) for more information.
-
-## Data labeling guidelines
-
-After preparing your data, designing your schema and creating your project, you will need to label your data. Labeling your data is important so your model knows which words will be associated with the entity types you need to extract. When you label your data in [Language Studio](https://aka.ms/languageStudio) (or import labeled data), these labels are stored in the JSON document in your storage container that you have connected to this project.
-
-As you label your data, keep in mind:
-
-* You can't add labels for Text Analytics for health entities as they're pretrained prebuilt entities. You can only add labels to new entity categories that you defined during schema definition.
-
-If you want to improve the recall for a prebuilt entity, you can extend it by adding a list component while you are [defining your schema](design-schema.md).
-
-* In general, more labeled data leads to better results, provided the data is labeled accurately.
-
-* The precision, consistency and completeness of your labeled data are key factors to determining model performance.
-
- * **Label precisely**: Label each entity to its right type always. Only include what you want extracted, avoid unnecessary data in your labels.
- * **Label consistently**: The same entity should have the same label across all the documents.
- * **Label completely**: Label all the instances of the entity in all your documents.
-
- > [!NOTE]
- > There is no fixed number of labels that can guarantee your model will perform the best. Model performance is dependent on possible ambiguity in your schema, and the quality of your labeled data. Nevertheless, we recommend having around 50 labeled instances per entity type.
-
-## Label your data
-
-Use the following steps to label your data:
-
-1. Go to your project page in [Language Studio](https://aka.ms/languageStudio).
-
-2. From the left side menu, select **Data labeling**. You can find a list of all documents in your storage container.
-
- <!--:::image type="content" source="../media/tagging-files-view.png" alt-text="A screenshot showing the Language Studio screen for labeling data." lightbox="../media/tagging-files-view.png":::-->
-
- >[!TIP]
- > You can use the filters in top menu to view the unlabeled documents so that you can start labeling them.
- > You can also use the filters to view the documents that are labeled with a specific entity type.
-
-3. Change to a single document view from the left side in the top menu or select a specific document to start labeling. You can find a list of all `.txt` documents available in your project to the left. You can use the **Back** and **Next** button from the bottom of the page to navigate through your documents.
-
- > [!NOTE]
- > If you enabled multiple languages for your project, you will find a **Language** dropdown in the top menu, which lets you select the language of each document. Hebrew is not supported with multi-lingual projects.
-
-4. In the right side pane, you can use the **Add entity type** button to add additional entities to your project that you missed during schema definition.
-
- <!--:::image type="content" source="../media/tag-1.png" alt-text="A screenshot showing complete data labeling." lightbox="../media/tag-1.png":::-->
-
-5. You have two options to label your document:
-
- |Option |Description |
- |--|--|
- |Label using a brush | Select the brush icon next to an entity type in the right pane, then highlight the text in the document you want to annotate with this entity type. |
- |Label using a menu | Highlight the word you want to label as an entity, and a menu will appear. Select the entity type you want to assign for this entity. |
-
- The below screenshot shows labeling using a brush.
-
- :::image type="content" source="../media/tag-options.png" alt-text="A screenshot showing the labeling options offered in Custom NER." lightbox="../media/tag-options.png":::
-
-6. In the right side pane under the **Labels** pivot you can find all the entity types in your project and the count of labeled instances per each. The prebuilt entities will be shown for reference but you will not be able to label for these prebuilt entities as they are pretrained.
-
-7. In the bottom section of the right side pane you can add the current document you are viewing to the training set or the testing set. By default all the documents are added to your training set. See [training and testing sets](train-model.md#data-splitting) for information on how they are used for model training and evaluation.
-
- > [!TIP]
- > If you are planning on using **Automatic** data splitting, use the default option of assigning all the documents into your training set.
-
-7. Under the **Distribution** pivot you can view the distribution across training and testing sets. You have two options for viewing:
- * *Total instances* where you can view count of all labeled instances of a specific entity type.
- * *Documents with at least one label* where each document is counted if it contains at least one labeled instance of this entity.
-
-7. When you're labeling, your changes are synced periodically. If they haven't been saved yet, you'll see a warning at the top of your page. If you want to save manually, select the **Save labels** button at the bottom of the page.
-
-## Remove labels
-
-To remove a label:
-
-1. Select the entity you want to remove a label from.
-2. Scroll through the menu that appears, and select **Remove label**.
-
-## Delete entities
-
-You cannot delete any of the Text Analytics for health pretrained entities because they have a prebuilt component. You are only permitted to delete newly defined entity categories. To delete an entity, select the delete icon next to the entity you want to remove. Deleting an entity removes all its labeled instances from your dataset.
-
-## Next steps
-
-After you've labeled your data, you can begin [training a model](train-model.md) that will learn based on your data.
cognitive-services Train Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-text-analytics-for-health/how-to/train-model.md
- Title: How to train your custom Text Analytics for health model-
-description: Learn about how to train your model for custom Text Analytics for health.
------ Previously updated : 04/14/2023----
-# Train your custom Text Analytics for health model
-
-Training is the process where the model learns from your [labeled data](label-data.md). After training is completed, you'll be able to view the [model's performance](view-model-evaluation.md) to determine if you need to improve your model.
-
-To train a model, you start a training job and only successfully completed jobs create a model. Training jobs expire after seven days, which means you won't be able to retrieve the job details after this time. If your training job completed successfully and a model was created, the model won't be affected. You can only have one training job running at a time, and you can't start other jobs in the same project.
-
-The training times can be anywhere from a few minutes when dealing with a few documents, up to several hours depending on the dataset size and the complexity of your schema.
--
-## Prerequisites
-
-* A successfully [created project](create-project.md) with a configured Azure blob storage account
-* Text data that [has been uploaded](design-schema.md#data-preparation) to your storage account.
-* [Labeled data](label-data.md)
-
-See the [project development lifecycle](../overview.md#project-development-lifecycle) for more information.
-
-## Data splitting
-
-Before you start the training process, labeled documents in your project are divided into a training set and a testing set. Each one of them serves a different function.
-The **training set** is used in training the model. This is the set from which the model learns the labeled entities and what spans of text are to be extracted as entities.
-The **testing set** is a blind set that is not introduced to the model during training but only during evaluation.
-After model training completes successfully, the model is used to make predictions on the documents in the testing set, and [evaluation metrics](../concepts/evaluation-metrics.md) are calculated based on these predictions. Model training and evaluation are only for newly defined entities with learned components; therefore, Text Analytics for health entities are excluded from model training and evaluation because they are entities with prebuilt components. It's recommended to make sure that all your labeled entities are adequately represented in both the training and testing sets.
-
-Custom Text Analytics for health supports two methods for data splitting:
-
-* **Automatically splitting the testing set from training data**: The system splits your labeled data between the training and testing sets, according to the percentages you choose. The recommended percentage split is 80% for training and 20% for testing.
-
- > [!NOTE]
- > If you choose the **Automatically splitting the testing set from training data** option, only the data assigned to training set will be split according to the percentages provided.
-
-* **Use a manual split of training and testing data**: This method enables users to define which labeled documents should belong to which set. This step is only enabled if you have added documents to your testing set during [data labeling](label-data.md).
-
-## Train model
-
-# [Language studio](#tab/Language-studio)
--
-# [REST APIs](#tab/REST-APIs)
-
-### Start training job
--
-### Get training job status
-
-Training could take some time depending on the size of your training data and the complexity of your schema. You can use the following request to keep polling the status of the training job until it's successfully completed.
-
- [!INCLUDE [get training model status](../includes/rest-api/get-training-status.md)]
---
-### Cancel training job
-
-# [Language Studio](#tab/language-studio)
--
-# [REST APIs](#tab/rest-api)
----
-## Next steps
-
-After training is completed, you'll be able to view the [model's performance](view-model-evaluation.md) to optionally improve your model if needed. Once you're satisfied with your model, you can deploy it, making it available to use for [extracting entities](call-api.md) from text.
cognitive-services View Model Evaluation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-text-analytics-for-health/how-to/view-model-evaluation.md
- Title: Evaluate a Custom Text Analytics for health model-
-description: Learn how to evaluate and score your Custom Text Analytics for health model
------ Previously updated : 04/14/2023-----
-# View a custom text analytics for health model's evaluation and details
-
-After your model has finished training, you can view the model performance and see the extracted entities for the documents in the test set.
-
-> [!NOTE]
-> Using the **Automatically split the testing set from training data** option may result in different model evaluation result every time you train a new model, as the test set is selected randomly from the data. To make sure that the evaluation is calculated on the same test set every time you train a model, make sure to use the **Use a manual split of training and testing data** option when starting a training job and define your **Test** documents when [labeling data](label-data.md).
-
-## Prerequisites
-
-Before viewing model evaluation, you need:
-
-* A successfully [created project](create-project.md) with a configured Azure blob storage account.
-* Text data that [has been uploaded](design-schema.md#data-preparation) to your storage account.
-* [Labeled data](label-data.md)
-* A [successfully trained model](train-model.md)
--
-## Model details
-
-There are several metrics you can use to evaluate your model. See the [performance metrics](../concepts/evaluation-metrics.md) article for more information on the model details described in this article.
-
-### [Language studio](#tab/language-studio)
--
-### [REST APIs](#tab/rest-api)
----
-## Load or export model data
-
-### [Language studio](#tab/Language-studio)
---
-### [REST APIs](#tab/REST-APIs)
----
-## Delete model
-
-### [Language studio](#tab/language-studio)
--
-### [REST APIs](#tab/rest-api)
----
-## Next steps
-
-* [Deploy your model](deploy-model.md)
-* Learn about the [metrics used in evaluation](../concepts/evaluation-metrics.md).
cognitive-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-text-analytics-for-health/language-support.md
- Title: Language and region support for custom Text Analytics for health-
-description: Learn about the languages and regions supported by custom Text Analytics for health
------ Previously updated : 04/14/2023----
-# Language support for custom text analytics for health
-
-Use this article to learn about the languages currently supported by custom Text Analytics for health.
-
-## Multilingual option
-
-With custom Text Analytics for health, you can train a model in one language and use it to extract entities from documents in other languages. This feature saves you the trouble of building separate projects for each language by letting you combine your datasets in a single project, making it easy to scale your projects to multiple languages. You can train your project entirely with English documents, and query it in French, German, Italian, and other languages. You can enable the multilingual option as part of the project creation process or later through the project settings.
-
-You aren't expected to add the same number of documents for every language. You should build the majority of your project in one language, and only add a few documents in languages you observe aren't performing well. If you create a project that is primarily in English and start testing it in French, German, and Spanish, you might observe that German doesn't perform as well as the other two languages. In that case, consider adding 5% of your original English documents in German, then train a new model and test in German again. In the [data labeling](how-to/label-data.md) page in Language Studio, you can select the language of the document you're adding. You should see better results for German queries. The more labeled documents you add, the more likely the results are to improve. When you add data in another language, you shouldn't expect it to negatively affect other languages.
-
-Hebrew is not supported in multilingual projects. If the primary language of the project is Hebrew, you will not be able to add training data in other languages, or query the model with other languages. Similarly, if the primary language of the project is not Hebrew, you will not be able to add training data in Hebrew, or query the model in Hebrew.
-
-## Language support
-
-Custom Text Analytics for health supports `.txt` files in the following languages:
-
-| Language | Language code |
-|--|--|
-| English | `en` |
-| French | `fr` |
-| German | `de` |
-| Spanish | `es` |
-| Italian | `it` |
-| Portuguese (Portugal) | `pt-pt` |
-| Hebrew | `he` |
--
-## Next steps
-
-* [Custom Text Analytics for health overview](overview.md)
-* [Service limits](reference/service-limits.md)
cognitive-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-text-analytics-for-health/overview.md
- Title: Custom Text Analytics for health - Azure Cognitive Services-
-description: Customize an AI model to label and extract healthcare information from documents using Azure Cognitive Services.
------ Previously updated : 04/14/2023----
-# What is custom Text Analytics for health?
-
-Custom Text Analytics for health is one of the custom features offered by [Azure Cognitive Service for Language](../overview.md). It is a cloud-based API service that applies machine-learning intelligence to enable you to build custom models on top of [Text Analytics for health](../text-analytics-for-health/overview.md) for custom healthcare entity recognition tasks.
-
-Custom Text Analytics for health enables users to build custom AI models to extract healthcare specific entities from unstructured text, such as clinical notes and reports. By creating a custom Text Analytics for health project, developers can iteratively define new vocabulary, label data, train, evaluate, and improve model performance before making it available for consumption. The quality of the labeled data greatly impacts model performance. To simplify building and customizing your model, the service offers a web portal that can be accessed through the [Language studio](https://aka.ms/languageStudio). You can easily get started with the service by following the steps in this [quickstart](quickstart.md).
-
-This documentation contains the following article types:
-
-* [Quickstarts](quickstart.md) are getting-started instructions to guide you through making requests to the service.
-* [Concepts](concepts/evaluation-metrics.md) provide explanations of the service functionality and features.
-* [How-to guides](how-to/label-data.md) contain instructions for using the service in more specific or customized ways.
-
-## Example usage scenarios
-
-Similarly to Text Analytics for health, custom Text Analytics for health can be used in multiple [scenarios](../text-analytics-for-health/overview.md#example-use-cases) across a variety of healthcare industries. However, the main usage of this feature is to provide a layer of customization on top of Text Analytics for health to extend its existing entity map.
--
-## Project development lifecycle
-
-Using custom Text Analytics for health typically involves several different steps.
--
-* **Define your schema**: Know your data and define the new entities you want extracted on top of the existing Text Analytics for health entity map. Avoid ambiguity.
-
-* **Label your data**: Labeling data is a key factor in determining model performance. Label precisely, consistently and completely.
- * **Label precisely**: Label each entity to its right type always. Only include what you want extracted, avoid unnecessary data in your labels.
- * **Label consistently**: The same entity should have the same label across all the files.
- * **Label completely**: Label all the instances of the entity in all your files.
-
-* **Train the model**: Your model starts learning from your labeled data.
-
-* **View the model's performance**: After training is completed, view the model's evaluation details, its performance and guidance on how to improve it.
-
-* **Deploy the model**: Deploying a model makes it available for use via an API.
-
-* **Extract entities**: Use your custom models for entity extraction tasks.
-
-## Reference documentation and code samples
-
-As you use custom Text Analytics for health, see the following reference documentation for Azure Cognitive Services for Language:
-
-|APIs| Reference documentation|
-|--|--|
-|REST APIs (Authoring) | [REST API documentation](/rest/api/language/2022-10-01-preview/text-analysis-authoring) |
-|REST APIs (Runtime) | [REST API documentation](/rest/api/language/2022-10-01-preview/text-analysis-runtime/submit-job) |
--
-## Responsible AI
-
-An AI system includes not only the technology, but also the people who will use it, the people who will be affected by it, and the environment in which it is deployed. Read the [transparency note for Text Analytics for health](/legal/cognitive-services/language-service/transparency-note-health?context=/azure/cognitive-services/language-service/context/context) to learn about responsible AI use and deployment in your systems. You can also see the following articles for more information:
---
-## Next steps
-
-* Use the [quickstart article](quickstart.md) to start using custom Text Analytics for health.
-
-* As you go through the project development lifecycle, review the glossary to learn more about the terms used throughout the documentation for this feature.
-
-* Remember to view the [service limits](reference/service-limits.md) for information such as [regional availability](reference/service-limits.md#regional-availability).
cognitive-services Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-text-analytics-for-health/quickstart.md
- Title: Quickstart - Custom Text Analytics for health (Custom TA4H)-
-description: Quickly start building an AI model to categorize and extract information from healthcare unstructured text.
------ Previously updated : 04/14/2023--
-zone_pivot_groups: usage-custom-language-features
--
-# Quickstart: custom Text Analytics for health
-
-Use this article to get started with creating a custom Text Analytics for health project where you can train custom models on top of Text Analytics for health for custom entity recognition. A model is artificial intelligence software that's trained to do a certain task. For this system, the models extract healthcare related named entities and are trained by learning from labeled data.
-
-In this article, we use Language Studio to demonstrate key concepts of custom Text Analytics for health. As an example, we'll build a custom Text Analytics for health model to extract the Facility or treatment location from short discharge notes.
-------
-## Next steps
-
-* [Text analytics for health overview](./overview.md)
-
-After you've created an entity extraction model, you can:
-
-* [Use the runtime API to extract entities](how-to/call-api.md)
-
-When you start to create your own custom Text Analytics for health projects, use the how-to articles to learn more about data labeling, training and consuming your model in greater detail:
-
-* [Data selection and schema design](how-to/design-schema.md)
-* [Tag data](how-to/label-data.md)
-* [Train a model](how-to/train-model.md)
-* [Model evaluation](how-to/view-model-evaluation.md)
-
cognitive-services Glossary https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-text-analytics-for-health/reference/glossary.md
- Title: Definitions used in custom Text Analytics for health-
-description: Learn about definitions used in custom Text Analytics for health
------ Previously updated : 04/14/2023----
-# Terms and definitions used in custom Text Analytics for health
-
-Use this article to learn about some of the definitions and terms you may encounter when using Custom Text Analytics for health
-
-## Entity
-Entities are words in input data that describe information relating to a specific category or concept. If your entity is complex and you would like your model to identify specific parts, you can break your entity into subentities. For example, you might want your model to predict an address, but also the subentities of street, city, state, and zipcode.
-
-## F1 score
-The F1 score is a function of Precision and Recall. It's needed when you seek a balance between [precision](#precision) and [recall](#recall).
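Concretely, the F1 score is the harmonic mean of the two metrics:

$$
F_1 = 2 \cdot \frac{\text{precision} \cdot \text{recall}}{\text{precision} + \text{recall}}
$$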
-
-## Prebuilt entity component
-
-Prebuilt entity components represent pretrained entity components that belong to the [Text Analytics for health entity map](../../text-analytics-for-health/concepts/health-entity-categories.md). These entities are automatically loaded into your project as entities with prebuilt components. You can define list components for entities with prebuilt components but you cannot add learned components. Similarly, you can create new entities with learned and list components, but you cannot populate them with additional prebuilt components.
--
-## Learned entity component
-
-The learned entity component uses the entity tags you label your text with to train a machine learned model. The model learns to predict where the entity is, based on the context within the text. Your labels provide examples of where the entity is expected to be present in text, based on the meaning of the surrounding words and the words that were labeled. This component is only defined if you add labels by labeling your data for the entity. If you don't label any data with the entity, it won't have a learned component. Learned components can't be added to entities with prebuilt components.
-
-## List entity component
-A list entity component represents a fixed, closed set of related words along with their synonyms. List entities are exact matches, unlike machine-learned entities.
-
-The entity will be predicted if a word in the input matches an entry in the list. For example, if you have a list entity called "clinics" and you have the words "clinic a, clinic b, clinic c" in the list, then the clinics entity will be predicted for all instances of the input data where "clinic a", "clinic b", or "clinic c" are used, regardless of the context. List components can be added to all entities regardless of whether they are prebuilt or newly defined.
-
-## Model
-A model is an object that's trained to do a certain task. In this case, custom Text Analytics for health models perform all the features of Text Analytics for health in addition to custom entity extraction for the user's defined entities. Models are trained by providing labeled data to learn from so they can later be used to understand context from the input text.
-
-* **Model evaluation** is the process that happens right after training to determine how well your model performs.
-* **Deployment** is the process of assigning your model to a deployment to make it available for use via the [prediction API](https://aka.ms/ct-runtime-swagger).
-
-## Overfitting
-
-Overfitting happens when the model is fixated on the specific examples and is not able to generalize well.
-
-## Precision
-Measures how precise/accurate your model is. It's the ratio between the correctly identified positives (true positives) and all identified positives. The precision metric reveals how many of the predicted entities are correctly labeled.
-
-## Project
-A project is a work area for building your custom ML models based on your data. Your project can only be accessed by you and others who have access to the Azure resource being used.
-
-## Recall
-Measures the model's ability to predict actual positive entities. It's the ratio between the predicted true positives and what was actually labeled. The recall metric reveals how many of the labeled entities are predicted correctly.
--
-## Schema
-Schema is defined as the combination of entities within your project. Schema design is a crucial part of your project's success. When creating a schema, you want to think about which new entities you should add to your project to extend the existing [Text Analytics for health entity map](../../text-analytics-for-health/concepts/health-entity-categories.md) and which new vocabulary you should add to the prebuilt entities using list components to enhance their recall. For example, you might add a new entity for patient name or extend the prebuilt entity "Medication Name" with a new research drug (for example, research drug A).
-
-## Training data
-Training data is the set of information that is needed to train a model.
--
-## Next steps
-
-* [Data and service limits](service-limits.md).
-
cognitive-services Service Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-text-analytics-for-health/reference/service-limits.md
- Title: Custom Text Analytics for health service limits-
-description: Learn about the data and service limits when using Custom Text Analytics for health.
------ Previously updated : 04/14/2023----
-# Custom Text Analytics for health service limits
-
-Use this article to learn about the data and service limits when using custom Text Analytics for health.
-
-## Language resource limits
-
-* Your Language resource has to be created in one of the [supported regions](#regional-availability).
-
-* Your resource must be one of the supported pricing tiers:
-
- |Tier|Description|Limit|
- |--|--|--|
- |S |Paid tier|You can have unlimited Language S tier resources per subscription. |
-
-
-* You can only connect one storage account per resource. This process is irreversible. If you connect a storage account to your resource, you cannot unlink it later. Learn more about [connecting a storage account](../how-to/create-project.md#create-language-resource-and-connect-storage-account)
-
-* You can have up to 500 projects per resource.
-
-* Project names have to be unique within the same resource across all custom features.
-
-## Regional availability
-
-Custom Text Analytics for health is only available in some Azure regions since it is a preview service. Some regions may be available for **both authoring and prediction**, while other regions may be for **prediction only**. Language resources in authoring regions allow you to create, edit, train, and deploy your projects. Language resources in prediction regions allow you to get predictions from a deployment.
-
-| Region | Authoring | Prediction |
-|--|--|-|
-| East US | ✔️ | ✔️ |
-| UK South | ✔️ | ✔️ |
-| North Europe | ✔️ | ✔️ |
-
-## API limits
-
-|Item|Request type| Maximum limit|
-|:-|:-|:-|
-|Authoring API|POST|10 per minute|
-|Authoring API|GET|100 per minute|
-|Prediction API|GET/POST|1,000 per minute|
-|Document size|--|125,000 characters. You can send up to 20 documents as long as they collectively do not exceed 125,000 characters|
-
-> [!TIP]
-> If you need to send larger files than the limit allows, you can break the text into smaller chunks of text before sending them to the API. You can use the [chunk command from CLUtils](https://github.com/microsoft/CognitiveServicesLanguageUtilities/blob/main/CustomTextAnalytics.CLUtils/Solution/CogSLanguageUtilities.ViewLayer.CliCommands/Commands/ChunkCommand/README.md) for this process, or split the text in your own code as sketched below.
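For illustration only, here's a minimal client-side chunking sketch in Python, under the assumption of plain-text input and the per-request character limit shown above; CLUtils remains the documented tool for this.

```python
# Minimal sketch: split a long document into chunks that stay under the
# 125,000-character per-request limit, breaking on whitespace to keep words intact.
MAX_CHARS = 125_000

def chunk_text(text, max_chars=MAX_CHARS):
    chunks, start = [], 0
    while start < len(text):
        end = min(start + max_chars, len(text))
        if end < len(text):
            # Back up to the last whitespace so a word isn't cut in half.
            split = text.rfind(" ", start, end)
            if split > start:
                end = split
        chunks.append(text[start:end])
        start = end
    return chunks

chunks = chunk_text(open("long-document.txt", encoding="utf-8").read())
```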
-
-## Quota limits
-
-|Pricing tier |Item |Limit |
-|--|--|--|
-|S|Training time| Unlimited, free |
-|S|Prediction Calls| 5,000 text records for free per language resource|
-
-## Document limits
-
-* You can only use `.txt` files. If your data is in another format, you can use the [CLUtils parse command](https://github.com/microsoft/CognitiveServicesLanguageUtilities/blob/main/CustomTextAnalytics.CLUtils/Solution/CogSLanguageUtilities.ViewLayer.CliCommands/Commands/ParseCommand/README.md) to open your document and extract the text.
-
-* All files uploaded in your container must contain data. Empty files are not allowed for training.
-
-* All files should be available at the root of your container.
-
-## Data limits
-
-The following limits are observed for authoring.
-
-|Item|Lower Limit| Upper Limit |
-|--|--|--|
-|Documents count | 10 | 100,000 |
-|Document length in characters | 1 | 128,000 characters; approximately 28,000 words or 56 pages. |
-|Count of entity types | 1 | 200 |
-|Entity length in characters | 1 | 500 |
-|Count of trained models per project| 0 | 10 |
-|Count of deployments per project| 0 | 10 |
-
-## Naming limits
-
-| Item | Limits |
-|--|--|
-| Project name | You can only use letters `(a-z, A-Z)`, and numbers `(0-9)` , symbols `_ . -`, with no spaces. Maximum allowed length is 50 characters. |
-| Model name | You can only use letters `(a-z, A-Z)`, numbers `(0-9)` and symbols `_ . -`. Maximum allowed length is 50 characters. |
-| Deployment name | You can only use letters `(a-z, A-Z)`, numbers `(0-9)` and symbols `_ . -`. Maximum allowed length is 50 characters. |
-| Entity name| You can only use letters `(a-z, A-Z)`, numbers `(0-9)` and all symbols except ":", `$ & % * ( ) + ~ # / ?`. Maximum allowed length is 50 characters. See the supported [data format](../concepts/data-formats.md#entity-naming-rules) for more information on entity names when importing a labels file. |
-| Document name | You can only use letters `(a-z, A-Z)`, and numbers `(0-9)` with no spaces. |
--
-## Next steps
-
-* [Custom text analytics for health overview](../overview.md)
cognitive-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/overview.md
The Language service also provides several new features as well, which can eithe
:::column-end::: :::row-end:::
-### Custom text analytics for health
-
- :::column span="":::
- :::image type="content" source="text-analytics-for-health/media/call-api/health-named-entity-recognition.png" alt-text="A screenshot of a custom text analytics for health example." lightbox="text-analytics-for-health/media/call-api/health-named-entity-recognition.png":::
- :::column-end:::
- :::column span="":::
- [Custom text analytics for health](./custom-text-analytics-for-health/overview.md) is a custom feature that extracts healthcare-specific entities from unstructured text, using a model you create.
- :::column-end:::
## Which Language service feature should I use?
This section will help you decide which Language service feature you should use
| Disambiguate entities and get links to Wikipedia. | Unstructured text | [Entity linking](./entity-linking/overview.md) | | | Classify documents into one or more categories. | Unstructured text | [Custom text classification](./custom-text-classification/overview.md) | Γ£ô| | Extract medical information from clinical/medical documents, without building a model. | Unstructured text | [Text analytics for health](./text-analytics-for-health/overview.md) | |
-| Extract medical information from clinical/medical documents using a model that's trained on your data. | Unstructured text | [Custom text analytics for health](./custom-text-analytics-for-health/overview.md) | |
| Build a conversational application that responds to user inputs. | Unstructured user inputs | [Question answering](./question-answering/overview.md) | Γ£ô | | Detect the language that a text was written in. | Unstructured text | [Language detection](./language-detection/overview.md) | | | Predict the intention of user inputs and extract information from them. | Unstructured user inputs | [Conversational language understanding](./conversational-language-understanding/overview.md) | Γ£ô |
cognitive-services Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/whats-new.md
Azure Cognitive Service for Language is updated on an ongoing basis. To stay up-
## April 2023
-* [Custom Text analytics for health](./custom-text-analytics-for-health/overview.md) is available in public preview, which enables you to build custom AI models to extract healthcare specific entities from unstructured text
* You can now use Azure OpenAI to automatically label or generate data during authoring. Learn more with the links below. * Auto-label your documents in [Custom text classification](./custom-text-classification/how-to/use-autolabeling.md) or [Custom named entity recognition](./custom-named-entity-recognition/how-to/use-autolabeling.md). * Generate suggested utterances in [Conversational language understanding](./conversational-language-understanding/how-to/tag-utterances.md#suggest-utterances-with-azure-openai).
cognitive-services Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/openai/concepts/models.md
These models can be used with Completion API requests. `gpt-35-turbo` is the onl
| text-davinci-002 | East US, South Central US, West Europe | N/A | 4,097 | Jun 2021 | | text-davinci-003 | East US, West Europe | N/A | 4,097 | Jun 2021 | | text-davinci-fine-tune-002<sup>1</sup> | N/A | Currently unavailable | | |
-| gpt-35-turbo<sup>3</sup> (ChatGPT) (preview) | East US, South Central US | N/A | 4,096 | Sep 2021 |
+| gpt-35-turbo<sup>3</sup> (ChatGPT) (preview) | East US, South Central US, West Europe | N/A | 4,096 | Sep 2021 |
<sup>1</sup> The model is available by request only. Currently we aren't accepting new requests to use the model. <br><sup>2</sup> East US was previously available, but due to high demand this region is currently unavailable for new customers to use for fine-tuning. Please use the South Central US, and West Europe regions for fine-tuning.
cognitive-services Encrypt Data At Rest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/openai/encrypt-data-at-rest.md
az keyvault key delete \
### Delete fine-tuned models and deployments
-The Fine-tunes API allows customers to create their own fine-tuned version of the OpenAI models based on the training data that you've uploaded to the service via the Files APIs. The trained fine-tuned models are stored in Azure Storage in the same region, encrypted at rest and logically isolated with their Azure subscription and API credentials. Fine-tuned models and deployments can be deleted by the user by calling the [DELETE API operation](./how-to/fine-tuning.md?pivots=programming-language-python#delete-your-model-deployment).
+The Fine-tunes API allows customers to create their own fine-tuned version of the OpenAI models based on the training data that you've uploaded to the service via the Files APIs. The trained fine-tuned models are stored in Azure Storage in the same region, encrypted at rest (either with Microsoft-managed keys or customer-managed keys) and logically isolated with their Azure subscription and API credentials. Fine-tuned models and deployments can be deleted by the user by calling the [DELETE API operation](./how-to/fine-tuning.md?pivots=programming-language-python#delete-your-model-deployment).
## Disable customer-managed keys
When you previously enabled customer managed keys this also enabled a system ass
## Next steps * [Language service Customer-Managed Key Request Form](https://aka.ms/cogsvc-cmk)
-* [Learn more about Azure Key Vault](../../key-vault/general/overview.md)
+* [Learn more about Azure Key Vault](../../key-vault/general/overview.md)
cognitive-services Embeddings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/openai/how-to/embeddings.md
response = openai.Embedding.create(
    engine="YOUR_DEPLOYMENT_NAME"
)
embeddings = response['data'][0]['embedding']
+print(embeddings)
```
cognitive-services Quotas Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/openai/quotas-limits.md
The following sections provide you with a quick guide to the quotas and limits t
| Limit Name | Limit Value | |--|--|
-| OpenAI resources per region | 2 |
+| OpenAI resources per region within Azure subscription | 2 |
| Requests per minute per model* | Davinci-models (002 and later): 120 <br> ChatGPT model (preview): 300 <br> GPT-4 models (preview): 18 <br> All other models: 300 | | Tokens per minute per model* | Davinci-models (002 and later): 40,000 <br> ChatGPT model: 120,000 <br> All other models: 120,000 | | Max fine-tuned model deployments* | 2 |
communication-services Capabilities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/interop/guest/capabilities.md
When Teams external users leave the meeting, or the meeting ends, they can no lo
*Azure Communication Services provides developers tools to integrate Microsoft Teams Data Loss Prevention that is compatible with Microsoft Teams. For more information, go to [how to implement Data Loss Prevention (DLP)](../../../how-tos/chat-sdk/data-loss-prevention.md)
-**Inline image support is currently in public preview and is available in the Chat SDK for JavaScript only. Preview APIs and SDKs are provided without a service-level agreement. We recommend that you don't use them for production workloads. Some features might not be supported, or they might have constrained capabilities. For more information, review [Supplemental Terms of Use for Microsoft Azure Previews.](https://azure.microsoft.com/support/legal/preview-supplemental-terms/)
+**Inline images are images that are copied and pasted directly into the send box of the Teams client. Images that were uploaded via the "Upload from this device" menu or via drag-and-drop (such as dragging images directly into the send box) in Teams aren't supported at this moment. To copy an image, the Teams user can either use their operating system's context menu to copy the image file and then paste it into the send box of their Teams client, or use keyboard shortcuts instead.
-**If the Teams external user sends a message with images uploaded via "Upload from this device" menu or via drag-and-drop (such as dragging images directly to the send box) in the Teams, then these scenarios would be covered under the file sharing capability, which is currently not supported.
+**Inline image support is currently in public preview and is available in the Chat SDK for JavaScript only. Preview APIs and SDKs are provided without a service-level agreement. We recommend that you don't use them for production workloads. Some features might not be supported, or they might have constrained capabilities. For more information, review [Supplemental Terms of Use for Microsoft Azure Previews.](https://azure.microsoft.com/support/legal/preview-supplemental-terms/)
## Server capabilities
communication-services Spotlight https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/how-tos/calling-sdk/spotlight.md
Last updated 03/01/2023 - # Spotlight states
In this article, you'll learn how to implement Microsoft Teams spotlight capabil
Since the video stream resolution of a participant is increased when spotlighted, it should be noted that the settings done on [Video Constraints](../../concepts/voice-video-calling/video-constraints.md) also apply to spotlight. - ## Prerequisites - An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
Communication Services or Microsoft 365 users can call the spotlight APIs based
| stopAllSpotlight | ✔️ | ✔️ | | | getSpotlightedParticipants | ✔️ | ✔️ | ✔️ | ## Next steps - [Learn how to manage calls](./manage-calls.md)
communication-services Meeting Interop Features Inline Image https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/tutorials/chat-interop/meeting-interop-features-inline-image.md
## Add inline image support The Chat SDK is designed to work with Microsoft Teams seamlessly. Specifically, Chat SDK provides a solution to receive inline images sent by users from Microsoft Teams. Currently this feature is only available in the Chat SDK for JavaScript.
+The Chat SDK for JavaScript provides `previewUrl` and `url` for each inline image. Note that some GIF images fetched from `previewUrl` might not be animated, and a static preview image is returned instead. Developers should use `url` if the intention is to fetch animated images only.
+ [!INCLUDE [Public Preview Notice](../../includes/public-preview-include.md)] [!INCLUDE [Teams Inline Image Interop with JavaScript SDK](./includes/meeting-interop-features-inline-image-javascript.md)]
communication-services Proxy Calling Support Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/tutorials/proxy-calling-support-tutorial.md
+
+ Title: Tutorial - Proxy your ACS calling traffic across your own servers
+
+description: Learn how to have your media and signaling traffic be proxied to servers that you can control.
++++ Last updated : 04/20/2023+++++
+# How to force calling traffic to be proxied across your own server
+
+In certain situations, it might be useful to have all your client traffic proxied to a server that you can control. When the SDK is initializing, you can provide the details of the servers that you would like the traffic to route to. Once enabled, all the media traffic (audio/video/screen sharing) travels through the provided TURN servers instead of the Azure Communication Services defaults. This tutorial guides you through proxying WebJS SDK calling traffic to servers that you control.
+
+>[!IMPORTANT]
+> The proxy feature is available starting in the public preview version [1.13.0-beta.4](https://www.npmjs.com/package/@azure/communication-calling/v/1.13.0-beta.4) of the Calling SDK. Please ensure that you use this or a newer SDK when trying to use this feature. This Quickstart uses the Azure Communication Services Calling SDK version greater than `1.13.0`.
+
+## Proxy calling media traffic
+
+## What is a TURN server?
+Many times, establishing a network connection between two peers isn't straightforward. A direct connection might not work for many reasons: firewalls with strict rules, peers sitting behind a private network, or computers running in a NAT environment. To solve these network connection issues, you can use a TURN server. The term stands for Traversal Using Relays around NAT, and it's a protocol for relaying network traffic. STUN and TURN servers are the relay servers here. Learn more about how ACS [mitigates](../concepts/network-traversal.md) network challenges by utilizing STUN and TURN.
+
+### Provide your TURN servers details to the SDK
+To provide the details of your TURN servers, pass the TURN server details as part of `CallClientOptions` while initializing the `CallClient`. For more information on how to set up a call, see the [Azure Communication Services Web SDK](../quickstarts/voice-video-calling/get-started-with-video-calling.md?pivots=platform-web) Quickstart on how to set up voice and video.
+
+```js
+import { CallClient } from '@azure/communication-calling';
+
+const myTurn1 = {
+ urls: [
+ 'turn:turn.azure.com:3478?transport=udp',
+ 'turn:turn1.azure.com:3478?transport=udp',
+ ],
+ username: 'turnserver1username',
+ credential: 'turnserver1credentialorpass'
+};
+
+const myTurn2 = {
+ urls: [
+ 'turn:20.202.255.255:3478',
+ 'turn:20.202.255.255:3478?transport=tcp',
+ ],
+ username: 'turnserver2username',
+ credential: 'turnserver2credentialorpass'
+};
+
+// While you are creating an instance of the CallClient (the entry point of the SDK):
+const callClient = new CallClient({
+ networkConfiguration: {
+ turn: {
+ iceServers: [
+ myTurn1,
+ myTurn2
+ ]
+ }
+ }
+});
++++
+// ...continue normally with your SDK setup and usage.
+```
+
+> [!IMPORTANT]
+> Note that if you have provided your TURN server details while initializing the `CallClient`, all the media traffic will <i>exclusively</i> flow through these TURN servers. Any other ICE candidates that are normally generated when creating a call won't be considered while trying to establish connectivity between peers; that is, only 'relay' candidates will be considered. More information about the different types of ICE candidates can be found [here](https://developer.mozilla.org/en-US/docs/Web/API/RTCIceCandidate/type).
+
+> [!NOTE]
+> If the '?transport' query parameter isn't present as part of the TURN URL, or isn't one of these values - 'udp', 'tcp', 'tls' - the default behavior will be UDP.
+
+> [!NOTE]
+> If any of the URLs provided are invalid or don't have one of these schemas - 'turn:', 'turns:', 'stun:', the `CallClient` initialization will fail and will throw errors accordingly. The error messages thrown should help you troubleshoot if you run into issues.
+
+The API reference for the `CallClientOptions` object, and the `networkConfiguration` property within it can be found here - [CallClientOptions](/javascript/api/azure-communication-services/@azure/communication-calling/callclientoptions?view=azure-communication-services-js&preserve-view=true).
+
+### Set up a TURN server in Azure
+You can create a Linux virtual machine in the Azure portal using this [guide](/azure/virtual-machines/linux/quick-create-portal?tabs=ubuntu), and deploy a TURN server using [coturn](https://github.com/coturn/coturn), a free and open source implementation of a TURN and STUN server for VoIP and WebRTC.
+
+Once you have set up a TURN server, you can test it using the WebRTC Trickle ICE page - [Trickle ICE](https://webrtc.github.io/samples/src/content/peerconnection/trickle-ice/).
+
+## Proxy signaling traffic
+
+To provide the URL of a proxy server, pass it in as part of `CallClientOptions` while initializing the `CallClient`. For more details on how to set up a call, see the [Azure Communication Services Web SDK](../quickstarts/voice-video-calling/get-started-with-video-calling.md?pivots=platform-web) Quickstart on how to set up voice and video.
+
+```js
+import { CallClient } from '@azure/communication-calling';
+
+// While you are creating an instance of the CallClient (the entry point of the SDK):
+const callClient = new CallClient({
+ networkConfiguration: {
+ proxy: {
+ url: 'https://myproxyserver.com'
+ }
+ }
+});
+
+// ...continue normally with your SDK setup and usage.
+```
+
+> [!NOTE]
+> If the proxy URL provided is an invalid URL, the `CallClient` initialization will fail and will throw errors accordingly. The error messages thrown will help you troubleshoot if you run into issues.
+
+The API reference for the `CallClientOptions` object, and the `networkConfiguration` property within it can be found here - [CallClientOptions](/javascript/api/azure-communication-services/@azure/communication-calling/callclientoptions?view=azure-communication-services-js&preserve-view=true).
+
+### Setting up a signaling proxy middleware in express js
+
+You can also create a proxy middleware in your express js server setup to have all the URLs redirected through it, using the [http-proxy-middleware](https://www.npmjs.com/package/http-proxy-middleware) npm package.
+The `createProxyMiddleware` function from that package should cover what you need for a simple redirect proxy setup. Here's an example usage of it with some option settings that the SDK needs to have all of our URLs working as expected:
+
+```js
+const proxyRouter = (req) => {
+  // Your router function if you don't intend to set up a direct target
+
+ // An example:
+ if (!req.originalUrl && !req.url) {
+ return '';
+ }
+
+ const incomingUrl = req.originalUrl || req.url;
+ if (incomingUrl.includes('/proxy')) {
+ return 'https://microsoft.com/forwarder/';
+ }
+
+ return incomingUrl;
+}
+
+const myProxyMiddleware = createProxyMiddleware({
+  target: 'https://microsoft.com', // This will be ignored if a router function is provided, but createProxyMiddleware still requires this to be passed in (see its official docs on the npm page for the most recent changes)
+ router: proxyRouter,
+ changeOrigin: true,
+ secure: false, // If you have proper SSL setup, set this accordingly
+ followRedirects: true,
+ ignorePath: true,
+ ws: true,
+ logLevel: 'debug'
+});
+
+// And finally pass in your proxy middleware to your express app depending on your URL/host setup
+app.use('/proxy', myProxyMiddleware);
+```
+
+> [!Tip]
+> If you are having SSL issues, check out the [cors](https://www.npmjs.com/package/cors) package.
+
+### Setting up a signaling proxy server on Azure
+You can create a Linux virtual machine in the Azure portal and deploy an NGINX server on it using this guide - [Quickstart: Create a Linux virtual machine in the Azure portal](/azure/virtual-machines/linux/quick-create-portal?tabs=ubuntu).
+
+Here's an NGINX config that you could make use of for a quick spin up:
+```
+events {
+ multi_accept on;
+ worker_connections 65535;
+}
+
+http {
+ map $http_upgrade $connection_upgrade {
+ default upgrade;
+ '' close;
+ }
+
+ server {
+ listen <port_you_want_listen_on> ssl;
+ ssl_certificate <path_to_your_ssl_cert>;
+ ssl_certificate_key <path_to_your_ssl_key>;
+ location ~* ^/(.*\.(com)(?::[\d]+)?)/(.*)$ {
+ resolver 8.8.8.8;
+ set $ups_host $1;
+ set $r_uri $3;
+ rewrite ^/.*$ /$r_uri break;
+ proxy_set_header Host $ups_host;
+ proxy_ssl_server_name on;
+ proxy_ssl_protocols TLSv1.2;
+ proxy_ssl_ciphers DEFAULT;
+ proxy_set_header X-Real-IP $remote_addr;
+ proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
+ proxy_pass_header Authorization;
+ proxy_http_version 1.1;
+ proxy_set_header Upgrade $http_upgrade;
+ proxy_set_header Connection $connection_upgrade;
+ proxy_set_header Proxy "";
+ proxy_pass https://$ups_host;
+ proxy_redirect https://$ups_host https://$host/$ups_host;
+ proxy_intercept_errors on;
+            error_page 301 302 307 = @handle_redirect;
+            error_page 400 405 = @handle_error_response;
+ if ($request_method = 'OPTIONS') {
+ add_header Access-Control-Allow-Origin * always;
+ }
+ }
+ location @handle_redirect {
+ set $saved_redirect_location '$upstream_http_location';
+ resolver 8.8.8.8;
+ proxy_pass $saved_redirect_location;
+ add_header X-DBUG-MSG "301" always;
+ }
+ location @handle_error_response {
+ add_header Access-Control-Allow-Origin * always;
+ }
+ }
+}
+```
++
container-instances Confidential Containers Attestation Concepts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-instances/confidential-containers-attestation-concepts.md
+
+ Title: Attestation in Confidential containers on Azure Containers Instances
+description: full attestation of container groups in confidential containers on Azure Container Instances
+++++ Last updated : 04/20/2023++
+# What is attestation?
+
+Attestation is an essential part of confidential computing and appears in the definition by the Confidential Computing Consortium: "Confidential Computing is the protection of data in use by performing computation in a hardware-based, attested Trusted Execution Environment."
+
+According to the [Remote ATtestation procedureS (RATS) Architecture](https://www.ietf.org/rfc/rfc9334.html), in remote attestation "one peer (the "Attester") produces believable information about itself ("Evidence") to enable a remote peer (the "Relying Party") to decide whether to consider that Attester a trustworthy peer. Remote attestation procedures are facilitated by an additional vital party (the "Verifier")." In simpler terms, attestation is a way of proving that a computer system is trustworthy.
+
+In Confidential Containers on ACI, you can use an attestation token to verify that the container group:
+
+- Is running on confidential computing hardware (in this case, AMD SEV-SNP).
+- Is running on an Azure compliant utility VM.
+- Is enforcing the expected confidential computing enforcement (CCE) policy that was generated using [tooling](https://github.com/Azure/azure-cli-extensions/blob/main/src/confcom/azext_confcom/README.md).
+
+## Full attestation in confidential containers on Azure Container Instances
+
+Expanding on this concept, full attestation captures all the components of the Trusted Execution Environment that are remotely verifiable. To achieve full attestation in Confidential Containers, we introduced the notion of a CCE policy, which defines a set of rules that is enforced in the utility VM. The security policy is encoded in the attestation report as a SHA-256 digest stored in the HostData attribute, which the host operating system provides to the PSP during VM boot-up. This means that the security policy enforced by the utility VM is immutable throughout the lifetime of the utility VM.
+
+The exhaustive list of attributes that are part of the SEV-SNP attestation can be found [here](https://www.amd.com/system/files/TechDocs/SEV-SNP%20PSP%20API%20Specification.pdf).
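+
+As an illustration, you can compare the `HostData` value in the report with a locally computed digest of your CCE policy. The bash sketch below assumes the `confcom` Azure CLI extension from the linked tooling, a placeholder `template.json` ARM template, and that `HostData` is the SHA-256 of the decoded policy text; flag names and output formats may differ across extension versions, so treat this as a sketch rather than a reference.
+
+```bash
+# Install the confcom extension and emit the base64-encoded CCE policy
+# for the container group described in template.json (placeholder name).
+az extension add --name confcom
+az confcom acipolicygen -a template.json --print-policy > policy.base64
+
+# Compute the SHA-256 digest of the decoded policy; under the assumptions
+# above, this should match the x-ms-sevsnpvm-hostdata claim in the token.
+base64 -d policy.base64 | sha256sum
+```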
+
+Here are some important fields to consider in an attestation token returned by [Microsoft Azure Attestation (MAA)](../attestation/overview.md):
+
+| Claim | Sample value | Description |
+||-|-|
+| x-ms-attestation-type | sevsnpvm | String value that describes the attestation type. In this scenario, it indicates SEV-SNP hardware. |
+| x-ms-compliance-status | azure-compliant-uvm | Compliance status of the utility VM that runs the container group. |
+| x-ms-sevsnpvm-hostdata | 670fff86714a650a49b58fadc1e90fedae0eb32dd51e34931c1e7a1839c08f6f | Hash of the CCE policy that was generated during deployment. |
+| x-ms-sevsnpvm-is-debuggable | false | Flag to indicate whether the underlying hardware is running in debug mode |
+
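+For example, here's a minimal bash sketch that decodes the token payload and pulls out the claims above. It assumes the token was already saved to a file named `maa-token.jwt` by your own tooling and that `jq` is installed; it only inspects claims and doesn't validate the token signature, which you'd still verify against the signing keys referenced by the `jku` header.
+
+```bash
+# Extract the JWT payload (second dot-separated segment) and convert it
+# from base64url to base64.
+PAYLOAD=$(cut -d '.' -f2 maa-token.jwt | tr '_-' '/+')
+
+# Pad to a multiple of 4 characters so base64 -d accepts it.
+while [ $(( ${#PAYLOAD} % 4 )) -ne 0 ]; do PAYLOAD="${PAYLOAD}="; done
+
+# Pull out the claims discussed in the table above.
+echo "$PAYLOAD" | base64 -d | jq '{
+  attestationType: ."x-ms-attestation-type",
+  complianceStatus: ."x-ms-compliance-status",
+  hostData: ."x-ms-sevsnpvm-hostdata",
+  isDebuggable: ."x-ms-sevsnpvm-is-debuggable"
+}'
+```
+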
+## Sample attestation token generated by MAA
+
+```json
+{
+ "header": {
+ "alg": "RS256",
+ "jku": "https://sharedeus2.eus2.test.attest.azure.net/certs",
+ "kid": "3bdCYJabzfhISFtb3J8yuEESZwufV7hhh08N3ZflAuE=",
+ "typ": "JWT"
+ },
+ "payload": {
+ "exp": 1680259997,
+ "iat": 1680231197,
+ "iss": "https://sharedeus2.eus2.test.attest.azure.net",
+ "jti": "d288fef5880b1501ea70be1b9366840fd56f74e666a23224d6de113133cbd8d5",
+ "nbf": 1680231197,
+ "nonce": "3413764049005270139",
+ "x-ms-attestation-type": "sevsnpvm",
+ "x-ms-compliance-status": "azure-compliant-uvm",
+ "x-ms-policy-hash": "9NY0VnTQ-IiBriBplVUpFbczcDaEBUwsiFYAzHu_gco",
+ "x-ms-runtime": {
+ "keys": [
+ {
+ "e": "AQAB",
+ "key_ops": [
+ "encrypt"
+ ],
+ "kid": "Nvhfuq2cCIOAB8XR4Xi9Pr0NP_9CeMzWQGtW_HALz_w",
+ "kty": "RSA",
+ "n": "v965SRmyp8zbG5eNFuDCmmiSeaHpujG2bC_keLSuzvDMLO1WyrUJveaa5bzMoO0pA46pXkmbqHisozVzpiNDLCo6d3z4TrGMeFPf2APIMu-RSrzN56qvHVyIr5caWfHWk-FMRDwAefyNYRHkdYYkgmFK44hhUdtlCAKEv5UQpFZjvh4iI9jVBdGYMyBaKQLhjI5WIh-QG6Za5sSuOCFMnmuyuvN5DflpLFz595Ss-EoBIY-Nil6lCtvcGgR-IbjUYHAOs5ajamTzgeO8kx3VCE9HcyKmyUZsiyiF6IDRp2Bpy3NHTjIz7tmkpTHx7tHnRtlfE2FUv0B6i_QYl_ZA5Q"
+ }
+ ]
+ },
+ "x-ms-sevsnpvm-authorkeydigest": "000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000",
+ "x-ms-sevsnpvm-bootloader-svn": 3,
+ "x-ms-sevsnpvm-familyId": "01000000000000000000000000000000",
+ "x-ms-sevsnpvm-guestsvn": 2,
+ "x-ms-sevsnpvm-hostdata": "670fff86714a650a49b58fadc1e90fedae0eb32dd51e34931c1e7a1839c08f6f",
+ "x-ms-sevsnpvm-idkeydigest": "cf7e12541981e6cafd150b5236785f4364850e2c4963825f9ab1d8091040aea0964bb9a8835f966bdc174d9ad53b4582",
+ "x-ms-sevsnpvm-imageId": "02000000000000000000000000000000",
+ "x-ms-sevsnpvm-is-debuggable": false,
+ "x-ms-sevsnpvm-launchmeasurement": "a1e1a4b64e8de5c664ceee069010441f74cf039065b5b847e82b9d1a7629aaf33d5591c6b18cee48a4dde481aa88d0fb",
+ "x-ms-sevsnpvm-microcode-svn": 115,
+ "x-ms-sevsnpvm-migration-allowed": false,
+ "x-ms-sevsnpvm-reportdata": "7ab000a323b3c873f5b81bbe584e7c1a26bcf40dc27e00f8e0d144b1ed2d14f10000000000000000000000000000000000000000000000000000000000000000",
+ "x-ms-sevsnpvm-reportid": "a489c8578fb2f54d895fc8d000a85b2ff4855c015e4fb7216495c4dba4598345",
+ "x-ms-sevsnpvm-smt-allowed": true,
+ "x-ms-sevsnpvm-snpfw-svn": 8,
+ "x-ms-sevsnpvm-tee-svn": 0,
+ "x-ms-sevsnpvm-uvm-endorsement": {
+ "x-ms-sevsnpvm-guestsvn": "100",
+ "x-ms-sevsnpvm-launchmeasurement": "a1e1a4b64e8de5c664ceee069010441f74cf039065b5b847e82b9d1a7629aaf33d5591c6b18cee48a4dde481aa88d0fb"
+ },
+ "x-ms-sevsnpvm-vmpl": 0,
+ "x-ms-ver": "1.0"
+ }
+}
+```
+## Generating an attestation token
+
+We have open-sourced sidecar container implementations that provide a simple REST interface to get a raw SNP (Secure Nested Paging) report produced by the hardware or an MAA token. The sidecar is available in this [repository](https://github.com/microsoft/confidential-sidecar-containers) and can be deployed with your container group.
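+
+For illustration, once the attestation sidecar runs in your container group, other containers in the same group can call it over localhost. In the bash sketch below, the port, path, and request body fields are placeholders rather than the sidecar's documented interface; consult the repository above for the actual API before relying on it.
+
+```bash
+# Hypothetical request from another container in the same container group.
+# Port 8080, the /attest/maa path, and the body fields are assumptions.
+curl -s -X POST "http://localhost:8080/attest/maa" \
+  -H "Content-Type: application/json" \
+  -d '{"maa_endpoint": "sharedeus2.eus2.attest.azure.net", "runtime_data": "ZXhhbXBsZQ=="}' | jq .
+```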
+
+## Next steps
+
+- [Learn how to use attestation to release a secret to your container group](../confidential-computing/skr-flow-confidential-containers-azure-container-instance.md)
+- [Deploy a confidential container group with Azure Resource Manager](./container-instances-tutorial-deploy-confidential-containers-cce-arm.md)
container-registry Container Registry Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-authentication.md
TOKEN=$(az acr login --name <acrName> --expose-token --output tsv --query access
Then, run `docker login`, passing `00000000-0000-0000-0000-000000000000` as the username and using the access token as the password: ```console
-docker login myregistry.azurecr.io --username 00000000-0000-0000-0000-000000000000 --password $TOKEN
+docker login myregistry.azurecr.io --username 00000000-0000-0000-0000-000000000000 --password-stdin <<< $TOKEN
``` Likewise, you can use the token returned by `az acr login` with the `helm registry login` command to authenticate with the registry:
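
For example, a sketch of that Helm flow (with a Helm version that supports OCI registries) using the same placeholder registry name and token:

```bash
# Authenticate Helm's OCI registry client with the ACR access token.
helm registry login myregistry.azurecr.io \
  --username 00000000-0000-0000-0000-000000000000 \
  --password $TOKEN
```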
container-registry Container Registry Delete https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-delete.md
As mentioned in the [Manifest digest](container-registry-concepts.md#manifest-di
```output [ {
- "digest": "sha256:7ca0e0ae50c95155dbb0e380f37d7471e98d2232ed9e31eece9f9fb9078f2728",
- "tags": [
- "latest"
- ],
- "timestamp": "2018-07-11T21:38:35.9170967Z"
- },
- {
- "digest": "sha256:d2bdc0c22d78cde155f53b4092111d7e13fe28ebf87a945f94b19c248000ceec",
- "tags": [],
- "timestamp": "2018-07-11T21:32:21.1400513Z"
+ "architecture": "amd64",
+ "changeableAttributes": {
+ "deleteEnabled": true,
+ "listEnabled": true,
+ "quarantineDetails": "{\"state\":\"Scan Passed\",\"link\":\"https://aka.ms/test\",\"scanner\":\"Azure Security Monitoring-Qualys Scanner\",\"result\":{\"version\":\"2020-05-13T00:23:31.954Z\",\"summary\":[{\"severity\":\"High\",\"count\":2},{\"severity\":\"Medium\",\"count\":0},{\"severity\":\"Low\",\"count\":0}]}}",
+ "quarantineState": "Passed",
+ "readEnabled": true,
+ "writeEnabled": true
+ },
+ "configMediaType": "application/vnd.docker.container.image.v1+json",
+ "createdTime": "2020-05-16T04:25:14.3112885Z",
+ "digest": "sha256:eef2ef471f9f9d01fd2ed81bd2492ddcbc0f281b0a6e4edb700fbf9025448388",
+ "imageSize": 22906605,
+ "lastUpdateTime": "2020-05-16T04:25:14.3112885Z",
+ "mediaType": "application/vnd.docker.distribution.manifest.v2+json",
+ "os": "linux",
+ "timestamp": "2020-05-16T04:25:14.3112885Z"
} ] ```
-As you can see in the output of the last step in the sequence, there is now an orphaned manifest whose `"tags"` property is an empty list. This manifest still exists within the registry, along with any unique layer data that it references. **To delete such orphaned images and their layer data, you must delete by manifest digest**.
+The tags array is removed from the metadata when an image is **untagged**. This manifest still exists within the registry, along with any unique layer data that it references. **To delete such orphaned images and their layer data, you must delete by manifest digest**.
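+
+For example, a sketch that deletes an untagged manifest by digest, using placeholder registry and repository names and the digest from the sample output above; this also removes any layer data unique to that manifest:
+
+```bash
+az acr repository delete \
+  --name myregistry \
+  --image myrepository@sha256:eef2ef471f9f9d01fd2ed81bd2492ddcbc0f281b0a6e4edb700fbf9025448388 \
+  --yes
+```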
## Automatically purge tags and manifests
cosmos-db Analytical Store Change Data Capture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/analytical-store-change-data-capture.md
In addition to providing incremental data feed from analytical store to diverse
- There's no limitation around the fixed data retention period for which changes are available > [!IMPORTANT]
-> Please note that "from the beginning" means that all data and all transactions since the container creation are availble for CDC, including deletes and updates. To ingest and process deletes and updates, you have to use specific settings in your CDC processes in Azure Synapse or Azure Data Factory. These settings are turned off by default. For more information, click [here](get-started-change-data-capture.md)
+> Please note that if the "Start from beginning" option is selected, the initial load includes a full snapshot of container data in the first run, and changed or incremental data is captured in subsequent runs. Similarly, when the "Start from timestamp" option is selected, the initial load processes the data from the given timestamp, and incremental or changed data is captured in subsequent runs. The `Capture intermediate updates`, `Capture Deletes` and `Capture Transactional store TTLs`, which are found under the [source options](get-started-change-data-capture.md) tab, determine if intermediate updates and deletes are captured in sinks.
## Features
cosmos-db Concepts Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/concepts-limits.md
Previously updated : 02/27/2023 Last updated : 04/15/2023
Depending on the current RU/s provisioned and resource settings, each resource c
¹ Serverless containers up to 1 TB are currently in preview with Azure Cosmos DB. To try the new feature, register the *"Azure Cosmos DB Serverless 1 TB Container Preview"* [preview feature in your Azure subscription](../azure-resource-manager/management/preview-features.md).
-## Control plane operations
+## Control plane
-You can [create and manage your Azure Cosmos DB account](how-to-manage-database-account.md) using the Azure portal, Azure PowerShell, Azure CLI, and Azure Resource Manager templates. The following table lists the limits per subscription, account, and number of operations.
+Azure Cosmos DB maintains a resource provider that offers a management layer to create, update, and delete resources in your Azure Cosmos DB account. The resource provider interfaces with the overall Azure Resource Manager layer, which is the deployment and management service for Azure. You can [create and manage Azure Cosmos DB resources](how-to-manage-database-account.md) using the Azure portal, Azure PowerShell, Azure CLI, Azure Resource Manager and Bicep templates, the REST API, and Azure Management SDKs, as well as third-party tools such as Terraform and Pulumi.
+
+This management layer can also be accessed from the Azure Cosmos DB data plane SDKs used in your applications to create and manage resources within an account. Data plane SDKs also make control plane requests during initial connection to the service to do things like enumerating databases and containers, as well as requesting account keys for authentication.
+
+Each Azure Cosmos DB account has a `master partition`, which contains all of the metadata for the account. It also has a small amount of throughput to support control plane operations. Control plane requests that create, read, update, or delete this metadata consume this throughput. When the amount of throughput consumed by control plane operations exceeds this amount, operations are rate-limited, just as data plane operations within Azure Cosmos DB are. However, unlike throughput for data operations, throughput for the master partition cannot be increased.
+
+Some control plane operations do not consume master partition throughput, such as Get or List Keys. However, unlike requests on data within your Azure Cosmos DB account, resource providers within Azure are not designed for high request volumes. **Control plane operations that exceed the limits documented here at sustained levels over consecutive 5-minute periods may experience request throttling as well as failed or incomplete operations on Azure Cosmos DB resources.**
+
+Control plane operations can be monitored by navigating to the Insights tab for an Azure Cosmos DB account. To learn more, see [Monitor Control Plane Requests](use-metrics.md#monitor-control-plane-requests). You can also use Azure Monitor to create a custom workbook over [Metadata Requests](monitor-reference.md#request-metrics) and set alerts on them.
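+
+For example, a minimal Azure CLI sketch (the resource ID is a placeholder) that pulls the MetadataRequests metric at 5-minute granularity:
+
+```bash
+az monitor metrics list \
+  --resource "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.DocumentDB/databaseAccounts/<account-name>" \
+  --metric MetadataRequests \
+  --interval PT5M \
+  --aggregation Count
+```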
+
+### Resource limits
+
+The following table lists resource limits per subscription or account.
| Resource | Limit | | | | | Maximum number of accounts per subscription | 50 by default. ¹ |
-| Maximum number of regional failovers | 10/hour by default. ┬╣ ┬▓ |
+| Maximum number of databases & containers per account | 500 ┬▓ |
+| Maximum throughput supported by an account for metadata operations | 240 RU/s |
¹ You can increase these limits up to a maximum of 1,000 by creating an [Azure Support request](create-support-request-quota-increase.md).
+² This limit cannot be increased. It is the total count of databases and containers within an account (for example, 1 database and 499 containers, or 250 databases and 250 containers).
+
+### Request limits
+
+The following table lists request limits per 5-minute interval, per account, unless otherwise specified.
+
+| Operation | Limit |
+| | |
+| Maximum List or Get Keys | 500 ¹ |
+| Maximum Create database & container | 500 |
+| Maximum Get or List database & container | 500 ¹ |
+| Maximum Update provisioned throughput | 25 |
+| Maximum regional failover | 10 (per hour) ² |
+| Maximum number of all operations (PUT, POST, PATCH, DELETE, GET) not defined above | 500 |
-┬▓ Regional failovers only apply to single region writes accounts. Multi-region write accounts don't require or have any limits on changing the write region.
+¹ Users should use a [singleton client](nosql/best-practice-dotnet.md#checklist) for SDK instances and cache keys and database and container references between requests for the lifetime of that instance.
+² Regional failovers only apply to single-region write accounts. Multi-region write accounts don't require or allow changing the write region.
Azure Cosmos DB automatically takes backups of your data at regular intervals. For details on backup retention intervals and windows, see [Online backup and on-demand data restore in Azure Cosmos DB](online-backup-and-restore.md).
Here's a list of limits per account.
| Resource | Limit | | | |
-| Maximum number of databases per account | 500 |
-| Maximum number of containers per database with shared throughput |25 |
-| Maximum number of containers per account | 500 |
+| Maximum number of databases and containers per account | 500 |
+| Maximum number of containers per database with shared throughput | 25 |
| Maximum number of regions | No limit (All Azure regions) | ### Serverless | Resource | Limit | | | |
-| Maximum number of databases per account | 100 |
-| Maximum number of containers per account | 100 |
+| Maximum number of databases and containers per account | 100 |
| Maximum number of regions | 1 (Any Azure region) | ## Per-container limits
Azure Cosmos DB supports [CRUD and query operations](/rest/api/cosmos-db/) again
| Maximum response size (for example, paginated query) | 4 MB | | Maximum number of operations in a transactional batch | 100 |
+Azure Cosmos DB supports execution of triggers during writes. The service supports a maximum of one pre-trigger and one post-trigger per write operation.
+ Once an operation like query reaches the execution timeout or response size limit, it returns a page of results and a continuation token to the client to resume execution. There's no practical limit on the duration a single query can run across pages/continuations. Azure Cosmos DB uses HMAC for authorization. You can use either a primary key, or a [resource token](secure-access-to-data.md) for fine-grained access control to resources. These resources can include containers, partition keys, or items. The following table lists limits for authorization tokens in Azure Cosmos DB.
Azure Cosmos DB uses HMAC for authorization. You can use either a primary key, o
¹ You can increase it by [filing an Azure support ticket](create-support-request-quota-increase.md)
-Azure Cosmos DB supports execution of triggers during writes. The service supports a maximum of one pre-trigger and one post-trigger per write operation.
-
-## Metadata request limits
-
-Azure Cosmos DB maintains system metadata for each account. This metadata allows you to enumerate collections, databases, other Azure Cosmos DB resources, and their configurations for free of charge.
-
-| Resource | Limit |
-| | |
-|Maximum collection create rate per minute| 100|
-|Maximum Database create rate per minute| 100|
-|Maximum provisioned throughput update rate per minute| 5|
-|Maximum throughput supported by an account for metadata operations | 240 RU/s |
- ## Limits for autoscale provisioned throughput See the [Autoscale](provision-throughput-autoscale.md#autoscale-limits) article and [FAQ](autoscale-faq.yml#lowering-the-max-ru-s) for more detailed explanation of the throughput and storage limits with autoscale.
The following table lists the limits specific to MongoDB feature support. Other
| Resource | Limit | | | |
+| Maximum size of a document | 16 MB (UTF-8 length of JSON representation) ¹ |
| Maximum MongoDB query memory size (This limitation is only for 3.2 server version) | 40 MB | | Maximum execution time for MongoDB operations (for 3.2 server version)| 15 seconds| | Maximum execution time for MongoDB operations (for 3.6 and 4.0 server version)| 60 seconds| | Maximum level of nesting for embedded objects / arrays on index definitions | 6 |
-| Idle connection timeout for server side connection closure ┬╣ | 30 minutes |
+| Idle connection timeout for server-side connection closure ² | 30 minutes |
+
+¹ Large document sizes up to 16 MB require enabling the feature in the Azure portal. Read the [feature documentation](../cosmos-db/mongodb/feature-support-42.md#data-types) to learn more.
-┬╣ We recommend that client applications set the idle connection timeout in the driver settings to 2-3 minutes because the [default timeout for Azure LoadBalancer is 4 minutes](../load-balancer/load-balancer-tcp-idle-timeout.md). This timeout ensures that an intermediate load balancer idle doesn't close connections between the client machine and Azure Cosmos DB.
+² We recommend that client applications set the idle connection timeout in the driver settings to 2-3 minutes because the [default timeout for Azure Load Balancer is 4 minutes](../load-balancer/load-balancer-tcp-idle-timeout.md). This setting ensures that an intermediate load balancer doesn't close idle connections between the client machine and Azure Cosmos DB.
## Try Azure Cosmos DB Free limits
cosmos-db Configure Synapse Link https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/configure-synapse-link.md
except exceptions.CosmosResourceExistsError:
print('A container with already exists') ```
-## Optional - Disable analytical store in a SQL API container
+## Optional - Disable analytical store
-Analytical store can be disabled in SQL API containers using Azure CLI or PowerShell, by setting `analytical TTL` to `0`.
+Analytical store can be disabled in SQL API containers or in MongoDB API collections by using Azure CLI or PowerShell, by setting `analytical TTL` to `0`.
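+
+For example, a minimal Azure CLI sketch for a SQL API container, with placeholder names; for MongoDB API collections, the equivalent `az cosmosdb mongodb collection update` command should expose the same setting:
+
+```bash
+az cosmosdb sql container update \
+  --resource-group <resource-group> \
+  --account-name <account-name> \
+  --database-name <database-name> \
+  --name <container-name> \
+  --analytical-storage-ttl 0
+```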
> [!NOTE] > Please note that currently this action can't be undone. If analytical store is disabled in a container, it can never be re-enabled. > [!NOTE]
-> Please note that disabling analytical store is not available for MongoDB API collections.
-
+> Please note that currently it is not possible to disable Synapse Link from a database account.
## <a id="connect-to-cosmos-database"></a> Connect to a Synapse workspace
cosmos-db How To Manage Database Account https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/how-to-manage-database-account.md
Previously updated : 03/08/2023 Last updated : 04/14/2023
[!INCLUDE[NoSQL, MongoDB, Cassandra, Gremlin, Table](includes/appliesto-nosql-mongodb-cassandra-gremlin-table.md)]
-This article describes how to manage various tasks on an Azure Cosmos DB account by using the Azure portal.
+This article describes how to manage various tasks on an Azure Cosmos DB account by using the Azure portal. Azure Cosmos DB can also be managed with other Azure management clients including [Azure PowerShell](manage-with-powershell.md), [Azure CLI](nosql/manage-with-cli.md), [Azure Resource Manager templates](./manage-with-templates.md), [Bicep](nosql/manage-with-bicep.md), and [Terraform](nosql/samples-terraform.md).
> [!TIP]
-> Azure Cosmos DB can also be managed with other Azure management clients including [Azure PowerShell](manage-with-powershell.md), [Azure CLI](sql/manage-with-cli.md), [Azure Resource Manager templates](./manage-with-templates.md), and [Bicep](sql/manage-with-bicep.md).
+> The management API for Azure Cosmos DB, or *control plane*, is not designed for high request volumes like the rest of the service. To learn more, see [Control Plane Service Limits](concepts-limits.md#control-plane).
## Create an account
After an Azure Cosmos DB account is configured for service-managed failover, the
:::image type="content" source="./media/how-to-manage-database-account/manual-failover.png" alt-text="Screenshot of the manual failover portal menu."::: + ## Next steps For more information and examples on how to manage the Azure Cosmos DB account as well as databases and containers, read the following articles:
cosmos-db How To Setup Rbac https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/how-to-setup-rbac.md
Title: Configure role-based access control with Azure AD
description: Learn how to configure role-based access control with Azure Active Directory for your Azure Cosmos DB account ++ Last updated : 04/14/2023 -- Previously updated : 02/27/2023
The Azure Cosmos DB data plane role-based access control is built on concepts th
## <a id="permission-model"></a> Permission model > [!IMPORTANT]
-> This permission model covers only database operations that involve reading and writing data. It does *not* cover any kind of management operations on management resources, for example:
->
+> This permission model covers only database operations that involve reading and writing data. It **does not** cover any kind of management operations on management resources, including:
> - Create/Replace/Delete Database > - Create/Replace/Delete Container
-> - Replace Container Throughput
+> - Read/Replace Container Throughput
> - Create/Replace/Delete/Read Stored Procedures > - Create/Replace/Delete/Read Triggers > - Create/Replace/Delete/Read User Defined Functions
The Azure Cosmos DB SDKs issue read-only metadata requests during initialization
- The partition key of your containers or their indexing policy. - The list of physical partitions that make a container and their addresses.
-They don't* fetch any of the data that you've stored in your account.
+They **do not** fetch any of the data that you've stored in your account.
To ensure the best transparency of our permission model, these metadata requests are explicitly covered by the `Microsoft.DocumentDB/databaseAccounts/readMetadata` action. This action should be allowed in every situation where your Azure Cosmos DB account is accessed through one of the Azure Cosmos DB SDKs. It can be assigned (through a role assignment) at any level in the Azure Cosmos DB hierarchy (that is, account, database, or container).
The actual metadata requests allowed by the `Microsoft.DocumentDB/databaseAccoun
| Database | &bull; Reading database metadata <br /> &bull; Listing the containers under the database <br /> &bull; For each container under the database, the allowed actions at the container scope | | Container | &bull; Reading container metadata <br /> &bull; Listing physical partitions under the container <br /> &bull; Resolving the address of each physical partition |
+> [!IMPORTANT]
+> Throughput is not included in the metadata for this action.
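+
+For illustration, here's a sketch of a custom role definition that grants the `readMetadata` action alongside item point reads, created with the Azure CLI; the account, resource group, and role name are placeholders:
+
+```bash
+# Write a custom role definition that allows metadata reads and item reads.
+cat > role-definition.json <<'EOF'
+{
+  "RoleName": "MyReadMetadataRole",
+  "Type": "CustomRole",
+  "AssignableScopes": ["/"],
+  "Permissions": [
+    {
+      "DataActions": [
+        "Microsoft.DocumentDB/databaseAccounts/readMetadata",
+        "Microsoft.DocumentDB/databaseAccounts/sqlDatabases/containers/items/read"
+      ]
+    }
+  ]
+}
+EOF
+
+# Create the role definition in the target account.
+az cosmosdb sql role definition create \
+  --account-name <account-name> \
+  --resource-group <resource-group> \
+  --body @role-definition.json
+```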
+ ## Built-in role definitions Azure Cosmos DB exposes two built-in role definitions:
cosmos-db Pre Migration Steps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/pre-migration-steps.md
Previously updated : 02/27/2023 Last updated : 04/20/2023
-# Pre-migration steps for data migrations from MongoDB to Azure Cosmos DB for MongoDB
+# Premigration steps for data migrations from MongoDB to Azure Cosmos DB for MongoDB
[!INCLUDE[MongoDB](../includes/appliesto-mongodb.md)]
Your goal in pre-migration is to:
Follow these steps to perform a thorough pre-migration
-1. [Discover your existing MongoDB resources and create a data estate spreadsheet to track them](#pre-migration-discovery)
-2. [Assess the readiness of your existing MongoDB resources for data migration](#pre-migration-assessment)
-3. [Map your existing MongoDB resources to new Azure Cosmos DB resources](#pre-migration-mapping)
-4. [Plan the logistics of migration process end-to-end, before you kick off the full-scale data migration](#execution-logistics)
+1. [Discover your existing MongoDB resources and assess their readiness for data migration](#pre-migration-assessment)
+2. [Map your existing MongoDB resources to new Azure Cosmos DB resources](#pre-migration-mapping)
+3. [Plan the logistics of migration process end-to-end, before you kick off the full-scale data migration](#execution-logistics)
Then, execute your migration in accordance with your pre-migration plan.
All of the above steps are critical for ensuring a successful migration.
When you plan a migration, we recommend that whenever possible you plan at the per-resource level.
-The [Database Migration Assistant](https://github.com/AzureCosmosDB/Cosmos-DB-Migration-Assistant-for-API-for-MongoDB)(DMA) assists you with the [Discovery](#programmatic-discovery-using-the-database-migration-assistant) and [Assessment](#programmatic-assessment-using-the-database-migration-assistant) stages of the planning.
-
-## Pre-migration discovery
-
-The first pre-migration step is resource discovery. In this step, you need to create a **data estate migration spreadsheet**.
+## Pre-migration assessment
-* This sheet contains a comprehensive list of the existing resources (databases or collections) in your MongoDB data estate.
-* The purpose of this spreadsheet is to enhance your productivity and help you to plan migration from end-to-end.
-* You're recommended to extend this document and use it as a tracking document throughout the migration process.
+The first pre-migration step is to discover your existing MongoDB resources and assess the readiness of your resources for migration.
-### Programmatic discovery using the Database Migration Assistant
+Discovery involves creating a comprehensive list of the existing resources (databases or collections) in your MongoDB data estate.
-You may use the [Database Migration Assistant](https://github.com/AzureCosmosDB/Cosmos-DB-Migration-Assistant-for-API-for-MongoDB) (DMA) to assist you with the discovery stage and create the data estate migration sheet programmatically.
+Assessment involves finding out whether you're using the [features and syntax that are supported](./feature-support-42.md). It also includes making sure you're adhering to the [limits and quotas](../concepts-limits.md#per-account-limits). The aim of this stage is to create a list of incompatibilities and warnings, if any. After you have the assessment results, you can try to address the findings during rest of the migration planning.
-It's easy to [setup and run DMA](https://github.com/AzureCosmosDB/Cosmos-DB-Migration-Assistant-for-API-for-MongoDB#how-to-run-the-dma) through an Azure Data Studio client. It can be run from any machine connected to your source MongoDB environment.
+There are three ways to complete the pre-migration assessment. We recommend that you use the [Azure Cosmos DB Migration for MongoDB extension](#azure-cosmos-db-migration-for-mongodb-extension).
-You can use either one of the following DMA output files as the data estate migration spreadsheet:
+### Azure Cosmos DB Migration for MongoDB extension
-* `workload_database_details.csv` - Gives a database-level view of the source workload. Columns in file are: Database Name, Collection count, Document Count, Average Document Size, Data Size, Index Count and Index Size.
-* `workload_collection_details.csv` - Gives a collection-level view of the source workload. Columns in file are: Database Name, Collection Name, Doc Count, Average Document Size, Data size, Index Count, Index Size and Index definitions.
+The [Azure Cosmos DB Migration for MongoDB extension](/sql/azure-data-studio/extensions/database-migration-for-mongo-extension) in Azure Data Studio helps you assess a MongoDB workload for migrating to Azure Cosmos DB for MongoDB. You can use this extension to run an end-to-end assessment on your workload and find out the actions that you may need to take to seamlessly migrate your workloads to Azure Cosmos DB. During the assessment of a MongoDB endpoint, the extension reports all the discovered resources.
-Here's a sample database-level migration spreadsheet created by DMA:
-| DB Name | Collection Count | Doc Count | Avg Doc Size | Data Size | Index Count | Index Size |
-| | | | | | | |
-| `bookstoretest` | 2 | 192200 | 4144 | 796572532 | 7 | 260636672 |
-| `cosmosbookstore` | 1 | 96604 | 4145 | 400497620 | 1 | 1814528 |
-| `geo` | 2 | 25554 | 252 | 6446542 | 2 | 266240 |
-| `kagglemeta` | 2 | 87934912 | 190 | 16725184704 | 2 | 891363328 |
-| `pe_orig` | 2 | 57703820 | 668 | 38561434711 | 2 | 861605888 |
-| `portugeseelection` | 2 | 30230038 | 687 | 20782985862 | 1 | 450932736 |
-| `sample_mflix` | 5 | 75583 | 691 | 52300763 | 5 | 798720 |
-| `test` | 1 | 22 | 545 | 12003 | 0 | 0 |
-| `testcol` | 26 | 46 | 88 | 4082 | 32 | 589824 |
-| `testhav` | 3 | 2 | 528 | 1057 | 3 | 36864 |
-| **TOTAL:** | **46** | **176258781** | | **72.01 GB** | | **2.3 GB** |
+> [!NOTE]
+> The Azure Cosmos DB Migration for MongoDB extension does not perform an end-to-end assessment. We recommend that you go through [the supported features and syntax](./feature-support-42.md) and [Azure Cosmos DB limits and quotas](../concepts-limits.md#per-account-limits) in detail, and perform a proof of concept prior to the actual migration.
-### Manual discovery
+### Manual discovery (legacy)
-Alternately, you may refer to the sample spreadsheet in this guide and create a similar document yourself.
+Alternatively, you could create a **data estate migration spreadsheet**. The purpose of this spreadsheet is to enhance your productivity, help you plan the migration from end to end, and serve as a tracking document throughout the migration process.
+* This sheet contains a comprehensive list of the existing resources (databases or collections) in your MongoDB data estate.
* The spreadsheet should be structured as a record of your data estate resources, in list form. * Each row corresponds to a resource (database or collection). * Each column corresponds to a property of the resource; start with at least *name* and *data size (GB)* as columns.
Here are some tools you can use for discovering resources:
* [MongoDB Shell](https://www.mongodb.com/try/download/shell) * [MongoDB Compass](https://www.mongodb.com/try/download/compass)
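
For example, here's a bash sketch that uses the MongoDB Shell (`mongosh`) to enumerate databases and collections with rough size and index counts so you can seed the spreadsheet. The connection string is a placeholder, and the `stats()` fields shown may vary by server version:

```bash
mongosh "mongodb://<host>:<port>" --quiet --eval '
  // Print one CSV row per collection: db, collection, docs, data size, index count, index size.
  db.adminCommand({ listDatabases: 1 }).databases.forEach(function (d) {
    db.getSiblingDB(d.name).getCollectionNames().forEach(function (c) {
      var s = db.getSiblingDB(d.name).getCollection(c).stats();
      print([d.name, c, s.count, s.size, s.nindexes, s.totalIndexSize].join(","));
    });
  });
'
```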
-## Pre-migration assessment
-
-Second, as a prelude to planning your migration, assess the readiness of resources in your data estate for migration.
-
-Assessment involves finding out whether you're using the [features and syntax that are supported](./feature-support-42.md). It also includes making sure you're adhering to the [limits and quotas](../concepts-limits.md#per-account-limits). The aim of this stage is to create a list of incompatibilities and warnings, if any. After you have the assessment results, you can try to address the findings during rest of the migration planning.
-
-### Programmatic assessment using the Database Migration Assistant
+Go through the spreadsheet and verify each collection against the [supported features and syntax](./feature-support-42.md), and [Azure Cosmos DB limits and quotas](../concepts-limits.md#per-account-limits) in detail.
-[Database Migration Assistant](https://github.com/AzureCosmosDB/Cosmos-DB-Migration-Assistant-for-API-for-MongoDB) (DMA) also assists you with the assessment stage of pre-migration planning.
-
-Refer to the section [Programmatic discovery using the Database Migration Assistant](#programmatic-discovery-using-the-database-migration-assistant) to know how to setup and run DMA.
-
-The DMA notebook runs a few assessment rules against the resource list it gathers from source MongoDB. The assessment result lists the required and recommended changes needed to proceed with the migration.
-
-The results are printed as an output in the DMA notebook and saved to a CSV file - `assessment_result.csv`.
+### Database Migration Assistant utility (legacy)
> [!NOTE]
-> Database Migration Assistant is a preliminary utility meant to assist you with the pre-migration steps. It does not perform an end-to-end assessment.
-> In addition to running the DMA, we also recommend you to go through [the supported features and syntax](./feature-support-42.md), [Azure Cosmos DB limits and quotas](../concepts-limits.md#per-account-limits) in detail, as well as perform a proof-of-concept prior to the actual migration.
+> Database Migration Assistant is a legacy utility meant to assist you with the pre-migration steps. We recommend that you use the [Azure Cosmos DB Migration for MongoDB extension](#azure-cosmos-db-migration-for-mongodb-extension) for all pre-migration steps.
+
+You may use the [Database Migration Assistant](programmatic-database-migration-assistant-legacy.md) (DMA) utility to assist you with pre-migration steps.
## Pre-migration mapping
cosmos-db Programmatic Database Migration Assistant Legacy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/programmatic-database-migration-assistant-legacy.md
+
+ Title: Database Migration Assistant utility
+
+description: This doc provides an overview of the Database Migration Assistant legacy utility.
++++++ Last updated : 04/20/2023++
+# Database Migration Assistant utility (legacy)
++
+> [!IMPORTANT]
+> Database Migration Assistant is a legacy utility meant to assist you with the pre-migration steps. Microsoft recommends that you use the [Azure Cosmos DB Migration for MongoDB extension](/sql/azure-data-studio/extensions/database-migration-for-mongo-extension) for all pre-migration steps.
+
+### Programmatic discovery using the Database Migration Assistant
+
+You may use the [Database Migration Assistant](https://github.com/AzureCosmosDB/Cosmos-DB-Migration-Assistant-for-API-for-MongoDB) (DMA) to assist you with the discovery stage and create the data estate migration sheet programmatically.
+
+It's easy to [set up and run DMA](https://github.com/AzureCosmosDB/Cosmos-DB-Migration-Assistant-for-API-for-MongoDB#how-to-run-the-dma) through an Azure Data Studio client. It can be run from any machine connected to your source MongoDB environment.
+
+You can use either one of the following DMA output files as the data estate migration spreadsheet:
+
+* `workload_database_details.csv` - Gives a database-level view of the source workload. Columns in the file are: Database Name, Collection Count, Document Count, Average Document Size, Data Size, Index Count, and Index Size.
+* `workload_collection_details.csv` - Gives a collection-level view of the source workload. Columns in the file are: Database Name, Collection Name, Doc Count, Average Document Size, Data Size, Index Count, Index Size, and Index Definitions.
+
+Here's a sample database-level migration spreadsheet created by DMA:
+
+| DB Name | Collection Count | Doc Count | Avg Doc Size | Data Size | Index Count | Index Size |
+| | | | | | | |
+| `bookstoretest` | 2 | 192200 | 4144 | 796572532 | 7 | 260636672 |
+| `cosmosbookstore` | 1 | 96604 | 4145 | 400497620 | 1 | 1814528 |
+| `geo` | 2 | 25554 | 252 | 6446542 | 2 | 266240 |
+| `kagglemeta` | 2 | 87934912 | 190 | 16725184704 | 2 | 891363328 |
+| `pe_orig` | 2 | 57703820 | 668 | 38561434711 | 2 | 861605888 |
+| `portugeseelection` | 2 | 30230038 | 687 | 20782985862 | 1 | 450932736 |
+| `sample_mflix` | 5 | 75583 | 691 | 52300763 | 5 | 798720 |
+| `test` | 1 | 22 | 545 | 12003 | 0 | 0 |
+| `testcol` | 26 | 46 | 88 | 4082 | 32 | 589824 |
+| `testhav` | 3 | 2 | 528 | 1057 | 3 | 36864 |
+| **TOTAL:** | **46** | **176258781** | | **72.01 GB** | | **2.3 GB** |
+
+### Programmatic assessment using the Database Migration Assistant
+
+[Database Migration Assistant](https://github.com/AzureCosmosDB/Cosmos-DB-Migration-Assistant-for-API-for-MongoDB) (DMA) also assists you with the assessment stage of pre-migration planning.
+
+Refer to the section [Programmatic discovery using the Database Migration Assistant](#programmatic-discovery-using-the-database-migration-assistant) to learn how to set up and run DMA.
+
+The DMA notebook runs a few assessment rules against the resource list it gathers from source MongoDB. The assessment result lists the required and recommended changes needed to proceed with the migration.
+
+The results are printed as an output in the DMA notebook and saved to a CSV file - `assessment_result.csv`.
+
+> [!NOTE]
+> Database Migration Assistant does not perform an end-to-end assessment. It is a preliminary utility meant to assist you with the pre-migration steps.
+
cosmos-db Monitor Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/monitor-reference.md
Previously updated : 12/07/2020 Last updated : 04/14/2023
All the metrics corresponding to Azure Cosmos DB are stored in the namespace **A
|Metric (Metric Display Name)|Unit (Aggregation Type) |Description|Dimensions| Time granularities| Legacy metric mapping | Usage | ||||| | | | | TotalRequests (Total Requests) | Count (Count) | Number of requests made| DatabaseName, CollectionName, Region, StatusCode| All | TotalRequests, Http 2xx, Http 3xx, Http 400, Http 401, Internal Server error, Service Unavailable, Throttled Requests, Average Requests per Second | Used to monitor requests per status code, container at a minute granularity. To get average requests per second, use Count aggregation at minute and divide by 60. |
-| MetadataRequests (Metadata Requests) |Count (Count) | Count of metadata requests. Azure Cosmos DB maintains system metadata container for each account, that allows you to enumerate collections, databases, etc., and their configurations, free of charge. | DatabaseName, CollectionName, Region, StatusCode| All| |Used to monitor throttles due to metadata requests.|
+| MetadataRequests (Metadata Requests) |Count (Count) | Count of Azure Resource Manager metadata requests. Metadata has request limits. See [Control Plane Limits](concepts-limits.md#control-plane) for more information. | DatabaseName, CollectionName, Region, StatusCode| All | | Used to monitor metadata requests in scenarios where requests are being throttled. See [Monitor Control Plane Requests](use-metrics.md#monitor-control-plane-requests) for more information. |
| MongoRequests (Mongo Requests) | Count (Count) | Number of Mongo Requests Made | DatabaseName, CollectionName, Region, CommandName, ErrorCode| All |Mongo Query Request Rate, Mongo Update Request Rate, Mongo Delete Request Rate, Mongo Insert Request Rate, Mongo Count Request Rate| Used to monitor Mongo request errors, usages per command type. | ### Request Unit metrics
The following table lists the properties of resource logs in Azure Cosmos DB. Th
| **duration** | **duration_d** | The duration of the operation, in milliseconds. | | **requestLength** | **requestLength_s** | The length of the request, in bytes. | | **responseLength** | **responseLength_s** | The length of the response, in bytes.|
-| **resourceTokenPermissionId** | **resourceTokenPermissionId_s** | This property indicates the resource token permission Id that you have specified. To learn more about permissions, see the [Secure access to your data](./secure-access-to-data.md#permissions) article. |
+| **resourceTokenPermissionId** | **resourceTokenPermissionId_s** | This property indicates the resource token permission ID that you have specified. To learn more about permissions, see the [Secure access to your data](./secure-access-to-data.md#permissions) article. |
| **resourceTokenPermissionMode** | **resourceTokenPermissionMode_s** | This property indicates the permission mode that you have set when creating the resource token. The permission mode can have values such as "all" or "read". To learn more about permissions, see the [Secure access to your data](./secure-access-to-data.md#permissions) article. | | **resourceTokenUserRid** | **resourceTokenUserRid_s** | This value is non-empty when [resource tokens](./secure-access-to-data.md#resource-tokens) are used for authentication. The value points to the resource ID of the user. | | **responseLength** | **responseLength_s** | The length of the response, in bytes.|
cosmos-db Sdk Connection Modes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/sdk-connection-modes.md
As detailed in the [introduction](#available-connectivity-modes), Direct mode cl
### Routing
-When an Azure Cosmos DB SDK on Direct mode is performing an operation, it needs to resolve which backend replica to connect to. The first step is knowing which physical partition should the operation go to, and for that, the SDK obtains the container information that includes the [partition key definition](../partitioning-overview.md#choose-partitionkey) from a Gateway node. It also needs the routing information that contains the replicas' TCP addresses. The routing information is available also from Gateway nodes and both are considered [metadata](../concepts-limits.md#metadata-request-limits). Once the SDK obtains the routing information, it can proceed to open the TCP connections to the replicas belonging to the target physical partition and execute the operations.
+When an Azure Cosmos DB SDK on Direct mode is performing an operation, it needs to resolve which backend replica to connect to. The first step is knowing which physical partition the operation should go to, and for that, the SDK obtains the container information that includes the [partition key definition](../partitioning-overview.md#choose-partitionkey) from a Gateway node. It also needs the routing information that contains the replicas' TCP addresses. The routing information is also available from Gateway nodes, and both are considered [Control Plane metadata](../concepts-limits.md#control-plane). Once the SDK obtains the routing information, it can proceed to open the TCP connections to the replicas belonging to the target physical partition and execute the operations.
Each replica set contains one primary replica and three secondaries. Write operations are always routed to primary replica nodes while read operations can be served from primary or secondary nodes.
There are two factors that dictate the number of TCP connections the SDK will op
Each established connection can serve a configurable number of concurrent operations. If the volume of concurrent operations exceeds this threshold, new connections will be open to serve them, and it's possible that for a physical partition, the number of open connections exceeds the steady state number. This behavior is expected for workloads that might have spikes in their operational volume. For the .NET SDK this configuration is set by [CosmosClientOptions.MaxRequestsPerTcpConnection](/dotnet/api/microsoft.azure.cosmos.cosmosclientoptions.maxrequestspertcpconnection), and for the Java SDK you can customize using [DirectConnectionConfig.setMaxRequestsPerConnection](/java/api/com.azure.cosmos.directconnectionconfig.setmaxrequestsperconnection).
-By default, connections are permanently maintained to benefit the performance of future operations (opening a connection has computational overhead). There might be some scenarios where you might want to close connections that are unused for some time understanding that this might affect future operations slightly. For the .NET SDK this configuration is set by [CosmosClientOptions.IdleTcpConnectionTimeout](/dotnet/api/microsoft.azure.cosmos.cosmosclientoptions.idletcpconnectiontimeout), and for the Java SDK you can customize using [DirectConnectionConfig.setIdleConnectionTimeout](/java/api/com.azure.cosmos.directconnectionconfig.setidleconnectiontimeout). It isn't recommended to set these configurations to low values as it might cause connections to be frequently closed and effect overall performance.
+By default, connections are permanently maintained to benefit the performance of future operations (opening a connection has computational overhead). There might be scenarios where you want to close connections that have been unused for some time, understanding that this might affect future operations slightly. For the .NET SDK this configuration is set by [CosmosClientOptions.IdleTcpConnectionTimeout](/dotnet/api/microsoft.azure.cosmos.cosmosclientoptions.idletcpconnectiontimeout), and for the Java SDK you can customize it using [DirectConnectionConfig.setIdleConnectionTimeout](/java/api/com.azure.cosmos.directconnectionconfig.setidleconnectiontimeout). It isn't recommended to set these configurations to low values, as that might cause connections to be frequently closed and affect overall performance.
### Language specific implementation details
cosmos-db Troubleshoot Request Rate Too Large https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/troubleshoot-request-rate-too-large.md
Follow the guidance in [Step 1](#step-1-check-the-metrics-to-determine-the-perce
Another common question that arises is, **Why is normalized RU consumption 100%, but autoscale didn't scale to the max RU/s?** This typically occurs for workloads that have temporary or intermittent spikes of usage. When you use autoscale, Azure Cosmos DB only scales the RU/s to the maximum throughput when the normalized RU consumption is 100% for a sustained, continuous period of time in a 5-second interval. This is done to ensure the scaling logic is cost-friendly to the user, as it ensures that single, momentary spikes don't lead to unnecessary scaling and higher cost. When there are momentary spikes, the system typically scales up to a value higher than the previously scaled-to RU/s, but lower than the max RU/s. Learn more about how to [interpret the normalized RU consumption metric with autoscale](../monitor-normalized-request-units.md#normalized-ru-consumption-and-autoscale).+ ## Rate limiting on metadata requests Metadata rate limiting can occur when you're performing a high volume of metadata operations on databases and/or containers. Metadata operations include:
Metadata rate limiting can occur when you're performing a high volume of metadat
- List databases or containers in an Azure Cosmos DB account - Query for offers to see the current provisioned throughput
-There's a system-reserved RU limit for these operations, so increasing the provisioned RU/s of the database or container will have no impact and isn't recommended. See [limits on metadata operations](../concepts-limits.md#metadata-request-limits).
+There's a system-reserved RU limit for these operations, so increasing the provisioned RU/s of the database or container will have no impact and isn't recommended. See [Control Plane Service Limits](../concepts-limits.md#control-plane).
#### How to investigate Navigate to **Insights** > **System** > **Metadata Requests By Status Code**. Filter to a specific database and container if desired.
cosmos-db Use Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/use-metrics.md
Previously updated : 03/13/2023 Last updated : 04/14/2023
IReadOnlyDictionary<string, QueryMetrics> metrics = result.QueryMetrics;
*QueryMetrics* provides details on how long each component of the query took to execute. The most common root cause for long running queries is scans, meaning the query was unable to apply the indexes. This problem can be resolved with a better filter condition.
+## Monitor control plane requests
+
+Azure Cosmos DB applies limits on the number of metadata requests that can be made over consecutive 5-minute intervals. Control plane requests that go over these limits may experience throttling. Metadata requests may, in some cases, consume throughput against a `master partition` within an account that contains all of an account's metadata. Control plane requests that go over the throughput amount will experience rate limiting (429s).
+
+To get started, head to the [Azure portal](https://portal.azure.com) and navigate to the **Insights** pane. From this pane, open the **System** tab. The System tab shows two charts: one that shows all metadata requests for an account, and a second that shows metadata request throughput consumption from the account's `master partition`, which stores the account's metadata.
+++
+The Metadata Requests by Status Code graph above aggregates requests at increasingly larger granularity as you increase the Time Range. The largest Time Range you can use for a 5-minute time bin is 4 hours. To monitor metadata requests over a greater time range with specific granularity, use Azure Metrics. Create a new chart and select the Metadata Requests metric. In the upper-right corner, select 5 minutes for Time granularity, as seen below. Metrics also allow users to [Create Alerts](create-alerts.md) on them, which makes them more useful than Insights.
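+
+For example, a minimal Azure CLI sketch that creates a metric alert on metadata requests; the names, scope, threshold, and aggregation are placeholders to adjust for your own workload:
+
+```bash
+az monitor metrics alert create \
+  --name metadata-requests-alert \
+  --resource-group <resource-group> \
+  --scopes "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.DocumentDB/databaseAccounts/<account-name>" \
+  --condition "total MetadataRequests > 400" \
+  --window-size 5m \
+  --evaluation-frequency 5m \
+  --description "Control plane metadata requests approaching documented limits"
+```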
+++ ## Next steps You might want to learn more about improving database performance by reading the following articles:
cost-management-billing Reservation Discount Application https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/reservations/reservation-discount-application.md
Read the following articles that apply to you to learn how discounts apply to a
- [Azure SQL Edge](discount-sql-edge.md) - [Database for MariaDB](understand-reservation-charges-mariadb.md) - [Database for MySQL](understand-reservation-charges-mysql.md)-- [Database for PostgreSQL](understand-reservation-charges-postgresql.md)
+- [Database for PostgreSQL](../../postgresql/single-server/concept-reserved-pricing.md)
- [Databricks](reservation-discount-databricks.md) - [Data Explorer](understand-azure-data-explorer-reservation-charges.md) - [Dedicated Hosts](billing-understand-dedicated-hosts-reservation-charges.md)
cost-management-billing Understand Reservation Charges Postgresql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/reservations/understand-reservation-charges-postgresql.md
- Title: Understand reservation discount - Azure Database for PostgreSQL Single server
-description: Learn how a reservation discount is applied to Azure Database for PostgreSQL Single servers.
----- Previously updated : 12/06/2022--
-# How a reservation discount is applied to Azure Database for PostgreSQL Single server
-
-After you buy an Azure Database for PostgreSQL Single server reserved capacity, the reservation discount is automatically applied to PostgreSQL Single servers databases that match the attributes and quantity of the reservation. A reservation covers only the compute costs of your Azure Database for PostgreSQL Single server. You're charged for storage and networking at the normal rates.
-
-## How reservation discount is applied
-
-A reservation discount is ***use-it-or-lose-it***. So, if you don't have matching resources for any hour, then you lose a reservation quantity for that hour. You can't carry forward unused reserved hours.
-
-When you shut down a resource, the reservation discount automatically applies to another matching resource in the specified scope. If no matching resources are found in the specified scope, then the reserved hours are lost.
-
-Stopped resources are billed and continue to use reservation hours. Deallocate or delete resources or scale-in other resources to use your available reservation hours with other workloads.
-
-## Discount applied to Azure Database for PostgreSQL Single server
-
-The Azure Database for PostgreSQL Single server reserved capacity discount is applied to running your PostgreSQL Single server on an hourly basis. The reservation that you buy is matched to the compute usage emitted by the running Azure Database for PostgreSQL Single server. For PostgreSQL Single servers that don't run the full hour, the reservation is automatically applied to other Azure Database for PostgreSQL Single server matching the reservation attributes. The discount can apply to Azure Database for PostgreSQL Single servers that are running concurrently. If you don't have a PostgreSQL Single server that run for the full hour that matches the reservation attributes, you don't get the full benefit of the reservation discount for that hour.
-
-The following examples show how the Azure Database for PostgreSQL Single server reserved capacity discount applies depending on the number of cores you bought, and when they're running.
-
-* **Example 1**: You buy an Azure Database for PostgreSQL Single server reserved capacity for an 8 vCore. If you are running a 16 vCore Azure Database for PostgreSQL Single server that matches the rest of the attributes of the reservation, you're charged the pay-as-you-go price for 8 vCore of your PostgreSQL Single server compute usage and you get the reservation discount for one hour of 8 vCore PostgreSQL Single server compute usage.</br>
-
-For the rest of these examples, assume that the Azure Database for PostgreSQL Single server reserved capacity you buy is for a 16 vCore Azure Database for PostgreSQL Single server and the rest of the reservation attributes match the running PostgreSQL Single servers.
-
-* **Example 2**: You run two Azure Database for PostgreSQL Single servers with 8 vCore each for an hour. The 16 vCore reservation discount is applied to compute usage for both the 8 vCore Azure Database for PostgreSQL Single server.
-
-* **Example 3**: You run one 16 vCore Azure Database for PostgreSQL Single server from 1 pm to 1:30 pm. You run another 16 vCore Azure Database for PostgreSQL Single server from 1:30 to 2 pm. Both are covered by the reservation discount.
-
-* **Example 4**: You run one 16 vCore Azure Database for PostgreSQL Single server from 1 pm to 1:45 pm. You run another 16 vCore Azure Database for PostgreSQL Single server from 1:30 to 2 pm. You're charged the pay-as-you-go price for the 15-minute overlap. The reservation discount applies to the compute usage for the rest of the time.
-
-To understand and view the application of your Azure Reservations in billing usage reports, see [Understand Azure reservation usage](./understand-reserved-instance-usage-ea.md).
-
-## Next steps
-
-If you have questions or need help, [create a support request](https://go.microsoft.com/fwlink/?linkid=2083458).
cost-management-billing Create Sql License Assignments https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/scope-level/create-sql-license-assignments.md
Title: Create SQL Server license assignments for Azure Hybrid Benefit
description: This article explains how to create SQL Server license assignments for Azure Hybrid Benefit. Previously updated : 04/06/2023 Last updated : 04/20/2023
# Create SQL Server license assignments for Azure Hybrid Benefit
-The new centralized Azure Hybrid Benefit (preview) experience in the Azure portal supports SQL Server license assignments at the account level or at a particular subscription level. When the assignment is created at the account level, Azure Hybrid Benefit discounts are automatically applied to SQL resources in all subscriptions in the account up to the license count specified in the assignment.
-
-> [!IMPORTANT]
-> Centrally-managed Azure Hybrid Benefit is currently in PREVIEW.
-> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+The centralized Azure Hybrid Benefit experience in the Azure portal supports SQL Server license assignments at the account level or at a particular subscription level. When the assignment is created at the account level, Azure Hybrid Benefit discounts are automatically applied to SQL resources in all the account's subscriptions, up to the license quantity specified in the assignment.
For each license assignment, a scope is selected and then licenses are assigned to the scope. Each scope can have multiple license entries.
+Here's a video demonstrating how [centralized Azure Hybrid Benefit works](https://www.youtube.com/watch?v=LvtUXO4wcjs).
+ ## Prerequisites The following prerequisites must be met to create SQL Server license assignments.
The following prerequisites must be met to create SQL Server license assignments
- Your organization has a supported agreement type and supported offer. - You're a member of a role that has permissions to assign SQL licenses. - Your organization has SQL Server core licenses with Software Assurance or core subscription licenses available to assign to Azure.-- Your organization is enrolled to automatic registration of the Azure SQL VMs with the IaaS extension. To learn more, see [Automatic registration with SQL IaaS Agent extension](/azure/azure-sql/virtual-machines/windows/sql-agent-extension-automatic-registration-all-vms).
+- Your organization is already enrolled to automatic registration of the Azure SQL VMs with the IaaS extension. To learn more, see [SQL IaaS extension registration options for Cost Management administrators](sql-iaas-extension-registration.md).
> [!IMPORTANT] > Failure to meet this prerequisite will cause Azure to produce incomplete data about your current Azure Hybrid Benefit usage. This situation could lead to incorrect license assignments and might result in unnecessary pay-as-you-go charges for SQL Server licenses.
In the following procedure, you navigate from **Cost Management + Billing** to *
:::image type="content" source="./media/create-sql-license-assignments/select-billing-profile.png" alt-text="Screenshot showing billing profile selection." lightbox="./media/create-sql-license-assignments/select-billing-profile.png" ::: 1. In the left menu, select **Reservations + Hybrid Benefit**. :::image type="content" source="./media/create-sql-license-assignments/select-reservations.png" alt-text="Screenshot showing Reservations + Hybrid Benefit selection." :::
-1. Select **Add** and then in the list, select **Azure Hybrid Benefit (Preview)**.
+1. Select **Add** and then in the list, select **Azure Hybrid Benefit**.
:::image type="content" source="./media/create-sql-license-assignments/select-azure-hybrid-benefit.png" alt-text="Screenshot showing Azure Hybrid Benefit selection." lightbox="./media/create-sql-license-assignments/select-azure-hybrid-benefit.png" :::
-1. On the next screen, select **Begin to assign licenses**.
- :::image type="content" source="./media/create-sql-license-assignments/get-started-centralized.png" alt-text="Screenshot showing Add SQL hybrid benefit selection" lightbox="./media/create-sql-license-assignments/get-started-centralized.png" :::
+1. On the next screen, select **Assign licenses**.
+ :::image type="content" source="./media/create-sql-license-assignments/get-started-centralized.png" alt-text="Screenshot showing Get started with Centrally Managed Azure Hybrid Benefit selection." lightbox="./media/create-sql-license-assignments/get-started-centralized.png" :::
If you don't see the page, and instead see the message `You are not the Billing Admin on the selected billing scope` then you don't have the required permission to assign a license. If so, you need to get the required permission. For more information, see [Prerequisites](#prerequisites).
-1. Choose a scope and then enter the license count to use for each SQL Server edition. If you don't have any licenses to assign for a specific edition, enter zero.
- > [!NOTE]
- > You are accountable to determine that the entries that you make in the scope-level managed license experience are accurate and will satisfy your licensing obligations. The license usage information is shown to assist you as you make your license assignments. However, the information shown could be incomplete or inaccurate due to various factors.
- >
- > If the number of licenses that you enter is less than what you are currently using, you'll see a warning message stating _You've entered fewer licenses than you're currently using for Azure Hybrid Benefit in this scope. Your bill for this scope will increase._
+1. Choose the scope and coverage option for the number of qualifying licenses that you want to assign.
+1. Select the date that you want to review the license assignment. For example, you might set it to the agreement renewal or anniversary date. Or you might set it to the subscription renewal date for the source of the licenses.
:::image type="content" source="./media/create-sql-license-assignments/select-assignment-scope-edition.png" alt-text="Screenshot showing scope selection and number of licenses." lightbox="./media/create-sql-license-assignments/select-assignment-scope-edition.png" :::
-1. Optionally, select the **Usage details** tab to view your current Azure Hybrid Benefit usage enabled at the resource scope.
- :::image type="content" source="./media/create-sql-license-assignments/select-assignment-scope-edition-usage.png" alt-text="Screenshot showing Usage tab details." lightbox="./media/create-sql-license-assignments/select-assignment-scope-edition-usage.png" :::
+1. Optionally, select **See usage details** to view your current Azure Hybrid Benefit usage enabled at the resource scope.
+ :::image type="content" source="./media/create-sql-license-assignments/select-assignment-scope-edition-usage.png" alt-text="Screenshot showing the Usage details tab." lightbox="./media/create-sql-license-assignments/select-assignment-scope-edition-usage.png" :::
1. Select **Add**.
-1. Optionally, change the default license assignment name. The review date is automatically set to a year ahead and can't be changed. Its purpose is to remind you to periodically review your license assignments.
+1. Optionally, change the default license assignment name. The review date is automatically set to a year ahead, but you can change it. Its purpose is to remind you to periodically review your license assignments.
:::image type="content" source="./media/create-sql-license-assignments/license-assignment-commit.png" alt-text="Screenshot showing default license assignment name." lightbox="./media/create-sql-license-assignments/license-assignment-commit.png" ::: 1. After you review your choices, select **Next: Review + apply**.
-1. Select the **By selecting &quot;Apply&quot;** attestation option to confirm that you have authority to apply Azure Hybrid Benefit, enough SQL Server licenses, and that you'll maintain the licenses as long as they're assigned.
+1. Select the **By selecting &quot;Apply&quot;** attestation option to confirm that you have the authority to apply Azure Hybrid Benefit, that you have enough SQL Server licenses, and that you maintain the licenses as long as they're assigned.
:::image type="content" source="./media/create-sql-license-assignments/confirm-apply-attestation.png" alt-text="Screenshot showing the attestation option." lightbox="./media/create-sql-license-assignments/confirm-apply-attestation.png" :::
-1. Select **Apply** and then select **Yes.**
+1. At the bottom of the page, select **Apply** and then select **Yes.**
1. The list of assigned licenses is shown. :::image type="content" source="./media/create-sql-license-assignments/assigned-licenses.png" alt-text="Screenshot showing the list of assigned licenses." lightbox="./media/create-sql-license-assignments/assigned-licenses.png" :::
Under **Last Day Utilization** or **7-day Utilization**, select a percentage, wh
:::image type="content" source="./media/create-sql-license-assignments/assignment-utilization-view.png" alt-text="Screenshot showing assignment usage details." lightbox="./media/create-sql-license-assignments/assignment-utilization-view.png" :::
-If a license assignment's usage is 100%, then it's likely some resources within the scope are incurring pay-as-you-go charges for SQL Server. We recommend that you use the license assignment experience again to review how much usage is being covered or not by assigned licenses. Afterward, go through the same process as before, including consulting your procurement or software asset management department, confirming that more licenses are available, and assigning the licenses.
+If a license assignment's usage is 100%, then it's likely some resources within the scope are incurring pay-as-you-go charges for SQL Server. We recommend that you use the license assignment experience again to review how much usage is being covered or not by assigned licenses. Afterward, go through the same process as before. That includes consulting your procurement or software asset management department, confirming that more licenses are available, and assigning the licenses.
## Changes after license assignment After you create SQL license assignments, your experience with Azure Hybrid Benefit changes in the Azure portal. -- Any existing Azure Hybrid Benefit elections configured for individual SQL resources no longer apply. They're replaced by the SQL license assignment created at the subscription or account level.
+- Any existing Azure Hybrid Benefit elections configured for individual SQL resources no longer apply. The SQL license assignment created at the subscription or account level replaces them.
- The hybrid benefit option isn't shown as in your SQL resource configuration. - Applications or scripts that configure the hybrid benefit programmatically continue to work, but the setting doesn't have any effect.-- SQL software discounts are applied to the SQL resources in the scope. The scope is based on the number of licenses in the license assignments that are created for the subscription for the account where the resource was created.-- A specific resource configured for hybrid benefit might not get the discount if other resources consume all of the licenses. However, the maximum discount is applied to the scope, based on number of license counts. For more information about how the discounts are applied, see [What is centrally managed Azure Hybrid Benefit?](overview-azure-hybrid-benefit-scope.md).
+- SQL software discounts are applied to the SQL resources in the scope. The scope is based on the quantity in the license assignments created for the subscription in the account where the resource was created.
+- A specific resource configured for hybrid benefit might not get the discount if other resources consume all of the licenses. However, the maximum discount is applied to the scope, based on number of license counts. For more information about how the discounts are applied, see [What is centrally managed Azure Hybrid Benefit for SQL Server?](overview-azure-hybrid-benefit-scope.md)
## Cancel a license assignment
Review your license situation before you cancel your license assignments. When y
## Next steps - Review the [Centrally managed Azure Hybrid Benefit FAQ](faq-azure-hybrid-benefit-scope.yml).-- Learn about how discounts are applied at [What is centrally managed Azure Hybrid Benefit?](overview-azure-hybrid-benefit-scope.md).
+- Learn about how discounts are applied at [What is centrally managed Azure Hybrid Benefit for SQL Server?](overview-azure-hybrid-benefit-scope.md)
cost-management-billing Manage Licenses Centrally https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/scope-level/manage-licenses-centrally.md
description: This article provides a detailed explanation about how Azure applie
keywords: Previously updated : 12/06/2022 Last updated : 04/20/2023
# How Azure applies centrally assigned SQL licenses to hourly usage
-This article provides details about how centrally managing Azure Hybrid Benefit for SQL Server at a scope-level works. The process starts with an administrator assigning licenses to subscription or a billing account scope.
+This article provides details about how centrally managing Azure Hybrid Benefit for SQL Server works. The process starts with an administrator assigning licenses to a subscription or billing account scope.
-Each resource reports its usage once an hour using the appropriate full price or pay-as-you-go meters. Internally in Azure, the Usage Application engine evaluates the available NCLs and applies them for that hour. For a given hour of vCore resource consumption, the pay-as-you-go meters are switched to the corresponding Azure Hybrid Benefit meter with a zero ($0) price if there's enough unutilized NCLs in the selected scope.
+Each resource reports its usage once an hour using the appropriate full price or pay-as-you-go meters. Internally in Azure, the Usage Application engine evaluates the available normalized cores (NCs) and applies them for that hour. For a given hour of vCore resource consumption, the pay-as-you-go meters are switched to the corresponding Azure Hybrid Benefit meter with a zero ($0) price if there are enough unutilized NCs in the selected scope.
## License discount
-The following diagram shows the discounting process when there's enough unutilized NCLs to discount the entire vCore consumption by all the SQL resources for the hour.
+The following diagram shows the discounting process when there's enough unutilized NCs to discount the entire vCore consumption by all the SQL resources for the hour.
-Prices shown in the following image are for example purposes only.
+Prices shown in the following image are only examples.
:::image type="content" source="./media/manage-licenses-centrally/fully-discounted-consumption.svg" alt-text="Diagram showing fully discounted vCore consumption." border="false" lightbox="./media/manage-licenses-centrally/fully-discounted-consumption.svg":::
-When the vCore consumption by the SQL resources in the scope exceeds the number of unutilized NCLs, the excess vCore consumption is billed using the appropriate pay-as-you-go meter. The following diagram shows the discounting process when the vCore consumption exceeds the number of unutilized NCLs.
+When the vCore consumption by the SQL resources in the scope exceeds the number of unutilized NCs, the excess vCore consumption is billed using the appropriate pay-as-you-go meter. The following diagram shows the discounting process when the vCore consumption exceeds the number of unutilized NCs.
-Prices shown in the following image are for example purposes only.
+Prices shown in the following image are only examples.
:::image type="content" source="./media/manage-licenses-centrally/partially-discounted-consumption.svg" alt-text="Diagram showing partially discounted consumption." border="false" lightbox="./media/manage-licenses-centrally/partially-discounted-consumption.svg":::
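For illustration, here's a minimal Python sketch of the hourly rule described above: vCore usage up to the unutilized NCs in the scope is switched to the $0 Azure Hybrid Benefit meter, and any excess is billed pay-as-you-go. This isn't the actual Usage Application engine; the prices and usage are made up, and it assumes one NC covers one vCore, which holds only for some service tiers.

```python
# Minimal sketch of the hourly discounting rule, not the real Azure engine.
# Assumes 1 NC covers 1 vCore (true only for some service tiers) and made-up prices.
assigned_ncs = 16            # NCs assigned to the scope (assumed)
payg_price_per_vcore = 0.5   # example hourly pay-as-you-go price (assumed)

hourly_vcore_usage = [8, 6, 6]   # vCores reported by SQL resources this hour (assumed)

total_vcores = sum(hourly_vcore_usage)
discounted_vcores = min(total_vcores, assigned_ncs)   # switched to the $0 AHB meter
payg_vcores = total_vcores - discounted_vcores        # billed at pay-as-you-go

print(f"Discounted vCores: {discounted_vcores}")       # 16
print(f"Pay-as-you-go vCores: {payg_vcores} "
      f"(${payg_vcores * payg_price_per_vcore:.2f})")  # 4 ($2.00)
```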
The Azure SQL resources covered by the assigned Core licenses can vary from hour
The following diagram shows how the assigned SQL Server licenses apply over time to get the maximum Azure Hybrid Benefit discount. ## Next steps
cost-management-billing Overview Azure Hybrid Benefit Scope https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/scope-level/overview-azure-hybrid-benefit-scope.md
Title: What is centrally managed Azure Hybrid Benefit?
+ Title: What is centrally managed Azure Hybrid Benefit for SQL Server?
description: Azure Hybrid Benefit is a licensing benefit that lets you bring your on-premises core-based Windows Server and SQL Server licenses with active Software Assurance (or subscription) to Azure. keywords: Previously updated : 03/21/2023 Last updated : 04/20/2023
-# What is centrally managed Azure Hybrid Benefit?
+# What is centrally managed Azure Hybrid Benefit for SQL Server?
Azure Hybrid Benefit is a licensing benefit that helps you to significantly reduce the costs of running your workloads in the cloud. It works by letting you use your on-premises Software Assurance or subscription-enabled Windows Server and SQL Server licenses on Azure. For more information, see [Azure Hybrid Benefit](https://azure.microsoft.com/pricing/hybrid-benefit/).
To use centrally managed licenses, you must have a specific role assigned to you
- Billing profile contributor If you don't have one of the roles, your organization must assign one to you. For more information about how to become a member of the roles, see [Manage billing roles](../manage/understand-mca-roles.md#manage-billing-roles-in-the-azure-portal).
-At a high level, here's how it works:
+At a high level, here's how centrally managed Azure Hybrid Benefit works:
1. First, confirm that all your SQL Server VMs are visible to you and Azure by enabling automatic registration of the self-installed SQL server images with the IaaS extension. For more information, see [Register multiple SQL VMs in Azure with the SQL IaaS Agent extension](/azure/azure-sql/virtual-machines/windows/sql-agent-extension-manually-register-vms-bulk).
-1. Under **Cost Management + Billing** in the Azure portal, you (the billing administrator) choose the scope and the number of qualifying licenses that you want to assign to cover the resources in the scope.
- :::image type="content" source="./media/overview-azure-hybrid-benefit-scope/set-scope-assign-licenses.png" alt-text="Screenshot showing setting a scope and assigning licenses." lightbox="./media/overview-azure-hybrid-benefit-scope/set-scope-assign-licenses.png" :::
+1. Under **Cost Management + Billing** in the Azure portal, you (the billing administrator) choose the scope and coverage option for the number of qualifying licenses that you want to assign.
+1. Select the date that you want to review the license assignment. For example, you might set it to the agreement renewal or anniversary date, or the subscription renewal date, for the source of the licenses.
-In the previous example, detected usage for 108 normalized core licenses is needed to cover all eligible Azure SQL resources. Detected usage for individual resources was 56 normalized core licenses. For the example, we showed 60 standard core licenses plus 12 Enterprise core licenses (12 * 4 = 48). So 60 + 48 = 108. Normalized core license values are covered in more detail in the following section, [How licenses apply to Azure resources](#how-licenses-apply-to-azure-resources).
-- Each hour as resources in the scope run, Azure automatically assigns the licenses to them and discounts the costs correctly. Different resources can be covered each hour.
+Let's break down the previous example.
+
+- Detected usage shows that 8 SQL Server standard core licenses and 8 enterprise licenses (equaling 40 normalized cores) need to be assigned to keep the existing level of Azure Hybrid Benefit coverage.
+- To expand coverage to all eligible Azure SQL resources, you need to assign 10 standard and 10 enterprise core licenses (equaling 50 normalized cores).
+ - Normalized cores needed = 1 x (SQL Server standard core license count) + 4 x (enterprise core license count).
+ - From the example again: 1 x (10 standard) + 4 x (10 enterprise) = 50 normalized cores.
+
+Normalized core values are covered in more detail in the following section, [How licenses apply to Azure resources](#how-licenses-apply-to-azure-resources).
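As a quick check of the formula above, here's a minimal Python sketch (illustrative only):

```python
# Normalized core (NC) rule from the formula above:
# 1 standard core license = 1 NC, 1 enterprise core license = 4 NCs.
def normalized_cores(standard_core_licenses: int, enterprise_core_licenses: int) -> int:
    return 1 * standard_core_licenses + 4 * enterprise_core_licenses

print(normalized_cores(10, 10))  # 50, matching the expanded-coverage example above
print(normalized_cores(8, 8))    # 40, the existing coverage in the example
```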
+
+Here's a brief summary of how centralized Azure Hybrid Benefit management works:
+
+- Each hour as resources in the scope run, Azure automatically applies the licenses to them and discounts the costs correctly. Different resources can be covered each hour.
- Any usage above the number of assigned licenses is billed at normal, pay-as-you-go prices. - When you choose to manage the benefit by assigning licenses at a scope level, you can't manage individual resources in the scope any longer. The original resource-level way to enable Azure Hybrid Benefit is still available for SQL Server and is currently the only option for Windows Server. It involves a DevOps role selecting the benefit for each individual resource (like a SQL Database or Windows Server VM) when you create or manage it. Doing so results in the hourly cost of that resource being discounted. For more information, see [Azure Hybrid Benefit for Windows Server](/azure/azure-sql/azure-hybrid-benefit).
-Enabling centralized management of Azure Hybrid Benefit of for SQL Server at a subscription or account scope level is currently in preview. It's available to enterprise customers and to customers that buy directly from Azure.com with a Microsoft Customer Agreement. We hope to extend the capability to Windows Server and more customers.
+You can enable centralized management of Azure Hybrid Benefit for SQL Server at a subscription or account scope level. It's available to enterprise customers and to customers that buy directly from Azure.com with a Microsoft Customer Agreement. It's not currently available for Windows Server customers or to customers who work with a Cloud Solution Provider (CSP) partner that manages Azure for them.
## Qualifying SQL Server licenses
Resource-level Azure Hybrid Benefit management can cover all of those points, to
You get the following benefits: - **A simpler, more scalable approach with better control** - The billing administrator directly assigns available licenses to one or more Azure scopes. The original approach, at a large scale, involves coordinating Azure Hybrid Benefit usage across many resources and DevOps owners.-- **An easy-to-use way to optimize costs** - An Administrator can monitor Azure Hybrid Benefit utilization and directly adjust licenses assigned to Azure. For example, an administrator might see an opportunity to save more money by assigning more licenses to Azure. Then they speak with their procurement department to confirm license availability. Finally, they can easily assign the licenses to Azure and start saving.-- **A better method to manage costs during usage spikes** - You can easily scale up the same resource or add more resources during temporary spikes. You don't need to assign more SQL Server licenses (for example, closing periods or increased holiday shopping). For short-lived workload spikes, pay-as-you-go charges for the extra capacity might cost less than acquiring more licenses to use Azure Hybrid Benefit for the capacity. Managing the benefit at a scope, rather than at a resource-level, helps you to decide based on aggregate usage.
+- **An easy-to-use way to optimize costs** - An Administrator can monitor Azure Hybrid Benefit utilization and directly adjust licenses assigned to Azure. Track SQL Server license utilization and optimize costs to proactively identify other licenses. It helps to maximize savings and receive notifications when license agreements need to be refreshed. For example, an administrator might see an opportunity to save more money by assigning more licenses to Azure. Then they speak with their procurement department to confirm license availability. Finally, they can easily assign the licenses to Azure and start saving.
+- **A better method to manage costs during usage spikes** - You can easily scale up the same resource or add more resources during temporary spikes. You don't need to assign more SQL Server licenses (for example, closing periods or increased holiday shopping). For short-lived workload spikes, pay-as-you-go charges for the extra capacity might cost less than acquiring more licenses to use Azure Hybrid Benefit for the capacity. Managing the benefit at a scope, rather than at the resource level, helps you decide based on aggregate usage.
- **Clear separation of duties to sustain compliance** - In the resource-level Azure Hybrid Benefit model, resource owners might select Azure Hybrid Benefit when there are no licenses available. Or, they might *not* select the benefit when there *are* licenses available. Scope-level management of Azure Hybrid Benefit solves this situation. The billing admins that manage the benefit centrally are positioned to confirm with procurement and software asset management departments how many licenses are available to assign to Azure. The following diagram illustrates the point. :::image type="content" source="./media/overview-azure-hybrid-benefit-scope/duty-separation.svg" alt-text="Diagram showing the separation of duties." border="false" lightbox="./media/overview-azure-hybrid-benefit-scope/duty-separation.svg":::
Both SQL Server Enterprise (core) and SQL Server Standard (core) licenses with S
One rule to understand: One SQL Server Enterprise Edition license has the same coverage as _four_ SQL Server Standard Edition licenses, across all qualified Azure SQL resource types.
-To explain how it works further, the term _normalized core license_ or NCL is used. In alignment with the rule, one SQL Server Standard core license produces one NCL. One SQL Server Enterprise core license produces four NCLs. For example, if you assign four SQL Server Enterprise core licenses and seven SQL Server Standard core licenses, your total coverage and Azure Hybrid Benefit discounting power is equal to 23 NCLs (4\*4+7\*1).
-
-The following table summarizes how many NCLs you need to fully discount the SQL Server license cost for different resource types. Scope-level management of Azure Hybrid Benefit strictly applies the rules in the product terms, summarized as follows.
+The following table summarizes how many normalized cores (NCs) you need to fully discount the SQL Server license cost for different resource types. Scope-level management of Azure Hybrid Benefit strictly applies the rules in the product terms, summarized as follows.
-| **Azure Data Service** | **Service tier** | **Required number of NCLs** |
+| **Azure Data Service** | **Service tier** | **Required number of NCs** |
| | | | | SQL Managed Instance or Instance pool | Business Critical | 4 per vCore | | SQL Managed Instance or Instance pool | General Purpose | 1 per vCore |
The following table summarizes how many NCLs you need to fully discount the SQL
¹ *Azure Hybrid Benefit isn't available in the serverless compute tier of Azure SQL Database.*
-┬▓ *Subject to a minimum of four vCores per Virtual Machine, which translates to four NCL if Standard edition is used, and 16 NCL if Enterprise edition is used.*
+² *Subject to a minimum of four vCores per Virtual Machine, which translates to four NCs if Standard edition is used, and 16 NCs if Enterprise edition is used.*
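To illustrate how the table translates into required NCs, here's a minimal Python sketch. Only the two rows shown above are included, and the lookup is an illustration rather than anything shipped by Azure; the product terms remain authoritative.

```python
# Required NCs per vCore for two of the rows in the table above (illustrative only).
REQUIRED_NCS_PER_VCORE = {
    ("SQL Managed Instance or Instance pool", "Business Critical"): 4,
    ("SQL Managed Instance or Instance pool", "General Purpose"): 1,
}

def ncs_to_cover(service: str, tier: str, vcores: int) -> int:
    """NCs needed to fully discount the SQL Server license cost for a resource."""
    return REQUIRED_NCS_PER_VCORE[(service, tier)] * vcores

# For example, an 8 vCore Business Critical managed instance needs 32 NCs.
print(ncs_to_cover("SQL Managed Instance or Instance pool", "Business Critical", 8))  # 32
```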
## Ongoing scope-level management
-We recommend that you establish a proactive rhythm when centrally managing Azure Hybrid Benefit, similar to the following tasks and order.
+We recommend that you establish a proactive rhythm when centrally managing Azure Hybrid Benefit, like the following tasks and order.
- Engage within your organization to understand how many Azure SQL resources and vCores will be used during the next month, quarter, or year.-- Work with your procurement and software asset management departments to determine if enough SQL core licenses with Software Assurance are available. The benefit allows licenses supporting migrating workloads to be used both on-premises and in Azure for up to 180 days. So, those licenses can be counted as available.
+- Work with your procurement and software asset management departments to determine if enough SQL core licenses with Software Assurance (or subscription core licenses) are available. The benefit allows licenses supporting migrating workloads to be used both on-premises and in Azure for up to 180 days. So, those licenses can be counted as available.
- Assign available licenses to cover your current usage _and_ your expected usage growth during the upcoming period. - Monitor assigned license utilization. - If it approaches 100%, then consult others in your organization to understand expected usage. Confirm license availability then assign more licenses to the scope.
- - If usage is 100%, you might be using resources beyond the number of licenses assigned. Return to the [Create license assignment experience](create-sql-license-assignments.md) and review the usage that Azure shows. Then assign more available licenses to the scope for more coverage.
+ - If usage is 100%, you might be using resources beyond the number of licenses assigned. Return to the [Add Azure Hybrid Benefit experience](create-sql-license-assignments.md) and review the usage. Then assign more available licenses to the scope for more coverage.
- Repeat the proactive process periodically. ## Next steps
cost-management-billing Sql Iaas Extension Registration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/scope-level/sql-iaas-extension-registration.md
+
+ Title: SQL IaaS extension registration options for Cost Management administrators
+description: This article explains the SQL IaaS extension registration options available to Cost Management administrators.
+keywords:
++ Last updated : 04/20/2023++++++
+# SQL IaaS extension registration options for Cost Management administrators
+
+This article helps Cost Management administrators understand and address the SQL IaaS registration requirement before they use centrally managed Azure Hybrid Benefit for SQL Server. The article explains the steps that you, or someone in your organization, follows to register SQL Server with the SQL IaaS Agent extension. Here's the order of steps to follow. We cover the steps in more detail later in the article.
+
+1. Determine whether you already have the required Azure permissions. Then attempt the check to verify whether registration is already done.
+1. If you don't have the required permissions, you must find someone in your organization that has the required permissions to help you.
+1. Complete the check to verify whether registration is already done for your subscriptions. If registration is done, you can go ahead and use centrally managed Azure Hybrid Benefit.
+1. If registration isn't complete, you or the person assisting you needs to choose one of the options to complete the registration.
+
+## Before you begin
+
+Normally, you can use the Azure portal to view Azure VMs that are running SQL Server on the [SQL virtual machines page](https://portal.azure.com/#blade/HubsExtension/BrowseResource/resourceType/Microsoft.SqlVirtualMachine%2FSqlVirtualMachines). However, there are some situations where Azure can't detect that SQL Server is running in a virtual machine. The most common situations are when SQL Server VMs are created using custom images that run SQL Server 2014 or earlier, or when the [SQL CEIP service](/sql/sql-server/usage-and-diagnostic-data-configuration-for-sql-server) is disabled or blocked.
+
+When the Azure portal doesn't detect SQL Server running on your VMs, it's a problem because you can't fully manage Azure SQL. In this situation, you can't verify that you have enough licenses needed to cover your SQL Server usage. Microsoft provides a way to resolve this problem with _SQL IaaS Agent extension registration_. At a high level, SQL IaaS Agent extension registration works in the following manner:
+
+1. You give Microsoft authorization to detect SQL VMs that aren't detected by default.
+2. The registration process runs at a subscription level or overall customer level. When registration completes, all current and future SQL VMs in the registration scope become visible.
+
+You must complete SQL IaaS Agent extension registration before you can use [centrally managed Azure Hybrid Benefit for SQL Server](create-sql-license-assignments.md). Otherwise, you can't use Azure to manage all your SQL Servers running in Azure.
+
+>[!NOTE]
+> Avoid using centrally managed Azure Hybrid Benefit for SQL Server before you complete SQL IaaS Agent extension registration. If you use centralized Azure Hybrid Benefit before you complete SQL IaaS Agent extension registration, new SQL VMs may not be covered by the number of licenses you have assigned. This situation could lead to incorrect license assignments and might result in unnecessary pay-as-you-go charges for SQL Server licenses. Complete SQL IaaS Agent extension registration before you use centralized Azure Hybrid Benefit features.
+
+## Scenarios and options
+
+The following sections help Cost Management users understand their options and the detailed steps for how to complete SQL IaaS Agent extension registration.
+
+## Determine your permissions
+
+To view or register your virtual machines, your credentials must have one of the following Azure roles:
+
+- **Virtual Machine contributor**
+- **Contributor**
+- **Owner**
+
+The permissions are required to perform the following procedure.
+
+## Inadequate permission
+
+If you don't have the required permission, get assistance from someone who has one of the required roles.
+
+## Complete the registration check
+
+1. Navigate to the [SQL virtual machines](https://portal.azure.com/#blade/HubsExtension/BrowseResource/resourceType/Microsoft.SqlVirtualMachine%2FSqlVirtualMachines) page in the Azure portal.
+2. Select **Automatic SQL Server VM registration** to open the **Automatic registration** page.
+3. If automatic registration is already enabled, a message appears at the bottom of the page indicating `Automatic registration has already been enabled for subscription <SubscriptionName>`.
+4. Repeat this process for any other subscriptions that you want to manage with centralized Azure Hybrid Benefit.
+
+Alternatively, you can run a PowerShell script to determine if there are any unregistered SQL Servers in your environment. You can download the script from the [azure-hybrid-benefit](https://github.com/microsoft/sql-server-samples/tree/master/samples/manage/azure-hybrid-benefit) page on GitHub.
+
+## Options to complete registration
+
+If you determine that you have unregistered SQL Server VMs, use one of the two following methods to complete the registration:
+
+- [Register with the help of your Microsoft account team](#register-with-the-help-of-your-microsoft-account-team)
+- [Turn on SQL IaaS Agent extension automatic registration](#turn-on-sql-iaas-agent-extension-automatic-registration)
+
+### Register with the help of your Microsoft account team
+
+The most comprehensive way to register is at the overall customer level. For both of the following situations, contact your Microsoft account team.
+
+- Your Microsoft account team can help you add a small amendment that accomplishes the authorization in an overarching way if:
+  - You have an Enterprise Agreement that's renewing soon
+  - You're a Microsoft Customer Agreement Enterprise customer
+- If you have an Enterprise Agreement that isn't up for renewal, there's another option. A leader in your organization can use an email template to provide Microsoft with authorization.
+ >[!NOTE]
+ > This option is time-limited, so if you want to use it, you should investigate it soon.
+
+### Turn on SQL IaaS Agent extension automatic registration
+
+You can use the self-serve registration capability, described at [Automatic registration with SQL IaaS Agent extension](/azure/azure-sql/virtual-machines/windows/sql-agent-extension-automatic-registration-all-vms).
+
+Because of the way that roles and permissions work in Azure, including segregation of duties, you may not be able to access or complete the extension registration process yourself. If you're in that situation, you need to find the subscription contributors for the scope you want to register. Then, get their help to complete the process.
+
+You can enable automatic registration in the Azure portal for a single subscription, or for multiple subscriptions using the PowerShell script mentioned previously. We recommend that you complete the registration process for all of your subscriptions so you can view all of your Azure SQL infrastructure.
+
+The following [Managing Azure VMs with the SQL IaaS Agent Extension](https://www.youtube.com/watch?v=HqU0HH1vODg) video shows how the process works.
+
+>[!VIDEO https://www.youtube.com/embed/HqU0HH1vODg]
+
+## Registration duration and verification
+
+After you complete either of the preceding automatic registration options, it can take up to 48 hours to detect all your SQL Servers. When complete, all your SQL Server virtual machines should be visible on the [SQL virtual machines](https://portal.azure.com/#blade/HubsExtension/BrowseResource/resourceType/Microsoft.SqlVirtualMachine%2FSqlVirtualMachines) page in the Azure portal.
+
+## When registration completes
+
+After you complete the SQL IaaS extension registration, we recommend that you use centrally managed Azure Hybrid Benefit. If you're unsure whether registration is finished, use the steps in [Complete the registration check](#complete-the-registration-check).
+
+## Next steps
+
+When you're ready, [Create SQL Server license assignments for Azure Hybrid Benefit](create-sql-license-assignments.md). Centrally managed Azure Hybrid Benefit is designed to make it easy to monitor your Azure SQL usage and optimize costs.
cost-management-billing Sql Server Hadr Licenses https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/scope-level/sql-server-hadr-licenses.md
description: This article explains how the SQL Server HADR Software Assurance be
keywords: Previously updated : 12/06/2022 Last updated : 04/20/2022
cost-management-billing Transition Existing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/scope-level/transition-existing.md
description: This article describes the changes and several transition scenarios
keywords: Previously updated : 12/06/2022 Last updated : 04/20/2023
When you assign licenses to a subscription using the new experience, changes are
When you enroll in the scope-level management of Azure Hybrid Benefit experience, you'll see your current Azure Hybrid Benefit usage that's enabled for individual resources. For more information on the overall experience, see [Create SQL Server license assignments for Azure Hybrid Benefit](create-sql-license-assignments.md). If you're a subscription contributor and you don't have the billing administrator role required, you can analyze the usage of different types of SQL Server licenses in Azure by using a PowerShell script. The script generates a snapshot of the usage across multiple subscriptions or the entire account. For details and examples of using the script, see the [sql-license-usage PowerShell script](https://github.com/anosov1960/sql-server-samples/tree/master/samples/manage/azure-hybrid-benefit) example script. Once you've run the script, identify and engage your billing administrator about the opportunity to shift Azure Hybrid Benefit management to the subscription or billing account scope level. > [!NOTE]
-> The script includes support for normalized core licenses (NCL).
+> The script includes support for normalized cores (NC).
## HADR benefit for SQL Server VMs
cost-management-billing Tutorial Azure Hybrid Benefits Sql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/scope-level/tutorial-azure-hybrid-benefits-sql.md
Title: Tutorial - Optimize centrally managed Azure Hybrid Benefit for SQL Server
description: This tutorial guides you through proactively assigning SQL Server licenses in Azure to manage and optimize Azure Hybrid Benefit. Previously updated : 12/06/2022 Last updated : 04/20/2022
Before you begin, ensure that you:
Have read and understand the [What is centrally managed Azure Hybrid Benefit?](overview-azure-hybrid-benefit-scope.md) article. The article explains the types of SQL Server licenses that qualify for Azure Hybrid Benefit. It also explains how to enable the benefit for the subscription or billing account scopes you select. > [!NOTE]
-> Managing Azure Hybrid Benefit centrally at a scope-level is currently in public preview and limited to enterprise customers and customers buying directly from Azure.com with a Microsoft Customer Agreement.
+> Managing Azure Hybrid Benefit centrally at a scope-level is limited to enterprise customers and customers buying directly from Azure.com with a Microsoft Customer Agreement.
Verify that your self-installed virtual machines running SQL Server in Azure are registered before you start to use the new experience. Doing so ensures that Azure resources that are running SQL Server are visible to you and Azure. For more information about registering SQL VMs in Azure, see [Register SQL Server VM with SQL IaaS Agent Extension](/azure/azure-sql/virtual-machines/windows/sql-agent-extension-manually-register-single-vm) and [Register multiple SQL VMs in Azure with the SQL IaaS Agent extension](/azure/azure-sql/virtual-machines/windows/sql-agent-extension-manually-register-vms-bulk).
After you've read the preceding instructions in the article, you understand that
Then, do the following steps. 1. Use the preceding instructions to make sure self-installed SQL VMs are registered. They include talking to subscription owners to complete the registration for the subscriptions where you don't have sufficient permissions.
-1. You review Azure resource usage data from recent months and you talk to others in Contoso. You determine that 2000 SQL Server Enterprise Edition and 750 SQL Server Standard Edition core licenses, or 8750 normalized core licenses, are needed to cover expected Azure SQL usage for the next year. Expected usage also includes migrating workloads (1500 SQL Server Enterprise Edition + 750 SQL Server Standard Edition = 6750 normalized) and net new Azure SQL workloads (another 500 SQL Server Enterprise Edition or 2000 normalized core licenses).
+1. You review Azure resource usage data from recent months and you talk to others in Contoso. You determine that 2000 SQL Server Enterprise Edition and 750 SQL Server Standard Edition core licenses, or 8750 normalized cores, are needed to cover expected Azure SQL usage for the next year. Expected usage also includes migrating workloads (1500 SQL Server Enterprise Edition + 750 SQL Server Standard Edition = 6750 normalized) and net new Azure SQL workloads (another 500 SQL Server Enterprise Edition or 2000 normalized cores).
1. Next, confirm with your procurement team that the needed licenses are already available or will soon be purchased. The confirmation ensures that the licenses are available to assign to Azure. - Licenses you have in use on premises can be considered available to assign to Azure if the associated workloads are being migrated to Azure. As mentioned previously, Azure Hybrid Benefit allows dual use for up to 180 days.
- - You determine that there are 1800 SQL Server Enterprise Edition licenses and 2000 SQL Server Standard Edition licenses available to assign to Azure. The available licenses equal 9200 normalized core licenses. That's a little more than the 8750 needed (2000 x 4 + 750 = 8750).
-1. Then, you assign the 1800 SQL Server Enterprise Edition and 2000 SQL Server Standard Edition to Azure. That action results in 9200 normalized core licenses that the system can apply to Azure SQL resources as they run each hour. Assigning more licenses than are required now provides a buffer if usage grows faster than you expect.
+ - You determine that there are 1800 SQL Server Enterprise Edition licenses and 2000 SQL Server Standard Edition licenses available to assign to Azure. The available licenses equal 9200 normalized cores. That's a little more than the 8750 needed (2000 x 4 + 750 = 8750).
+1. Then, you assign the 1800 SQL Server Enterprise Edition and 2000 SQL Server Standard Edition to Azure. That action results in 9200 normalized cores that the system can apply to Azure SQL resources as they run each hour. Assigning more licenses than are required now provides a buffer if usage grows faster than you expect.
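The license arithmetic in the preceding steps can be double-checked with a short Python sketch (illustrative only, using the normalized core rule described earlier):

```python
# NC rule: 1 standard core license = 1 NC, 1 enterprise core license = 4 NCs.
def normalized_cores(standard: int, enterprise: int) -> int:
    return standard + 4 * enterprise

needed = normalized_cores(standard=750, enterprise=2000)      # expected Azure SQL usage
available = normalized_cores(standard=2000, enterprise=1800)  # licenses on hand

print(needed)               # 8750
print(available)            # 9200
print(available >= needed)  # True - the assignment leaves a small buffer
```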
Afterward, you monitor assigned license usage periodically, ideally monthly. After 10 months, usage approaches 95%, indicating faster Azure SQL usage growth than you expected. You talk to your procurement team to get more licenses so that you can assign them.
data-factory Connector Troubleshoot Synapse Sql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-troubleshoot-synapse-sql.md
This article provides suggestions to troubleshoot common problems with the Azure
## Error code: SqlDeniedPublicAccess -- **Message**: `Cannot connect to SQL Database: '%server;', Database: '%database;', Reason: Connection was denied since Deny Public Network Access is set to Yes. To connect to this server, 1. If you persist public network access disabled, please use Managed Vritual Network IR and create private endpoint. https://docs.microsoft.com/en-us/azure/data-factory/managed-virtual-network-private-endpoint; 2. Otherwise you can enable public network access, set "Public network access" option to "Selected networks" on Auzre SQL Networking setting.`
+- **Message**: `Cannot connect to SQL Database: '%server;', Database: '%database;', Reason: Connection was denied since Deny Public Network Access is set to Yes. To connect to this server, 1. If you persist public network access disabled, please use Managed Vritual Network IR and create private endpoint. https://docs.microsoft.com/en-us/azure/data-factory/managed-virtual-network-private-endpoint; 2. Otherwise you can enable public network access, set "Public network access" option to "Selected networks" on Azure SQL Networking setting.`
- **Causes**: Azure SQL Database is set to deny public network access. This requires you to use a managed virtual network and create a private endpoint for access.
data-factory Data Flow Flatten https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/data-flow-flatten.md
Previously updated : 08/03/2022 Last updated : 04/21/2023 # Flatten transformation in mapping data flow
Use the flatten transformation to take array values inside hierarchical structur
## Configuration
-The flatten transformation contains the following configuration settings
+The flatten transformation contains the following configuration settings.
### Unroll by
-Select an array to unroll. The output data will have one row per item in each array. If the unroll by array in the input row is null or empty, there will be one output row with unrolled values as null.
+Select an array to unroll. The output data will have one row per item in each array. If the unroll by array in the input row is null or empty, there will be one output row with unrolled values as null. You can unroll more than one array per Flatten transformation. Select the plus (+) button to include multiple arrays in a single Flatten transformation. You can use ADF data flow meta functions here, including ```name``` and ```type```, and use pattern matching to unroll arrays that match those criteria. When you include multiple arrays in a single Flatten transformation, your results are a Cartesian product of all of the possible array values.
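The Cartesian-product behavior can be illustrated with a small, conceptual sketch in plain Python. This isn't data flow script syntax; the row shape and column names are made up for the example.

```python
# Conceptual illustration of unrolling two arrays in one Flatten transformation:
# the output contains one row per combination of array values (a Cartesian product).
from itertools import product

row = {"order": "A1", "items": ["x", "y"], "tags": [1, 2]}

flattened = [
    {"order": row["order"], "item": item, "tag": tag}
    for item, tag in product(row["items"], row["tags"])
]

for out_row in flattened:
    print(out_row)
# {'order': 'A1', 'item': 'x', 'tag': 1}
# {'order': 'A1', 'item': 'x', 'tag': 2}
# {'order': 'A1', 'item': 'y', 'tag': 1}
# {'order': 'A1', 'item': 'y', 'tag': 2}
```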
+ ### Unroll root
Optional setting that tells the service to handle all subcolumns of a complex ob
### Hierarchy level
-Choose the level of the hierarchy that you would like expand.
+Choose the level of the hierarchy that you would like to expand.
### Name matches (regex)
data-factory Sap Change Data Capture Introduction Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/sap-change-data-capture-introduction-architecture.md
The Azure side includes the Data Factory mapping data flow that can transform an
:::image type="content" source="media/sap-change-data-capture-solution/sap-cdc-architecture-diagram.png" border="false" alt-text="Diagram of the architecture of the SAP CDC solution.":::
-To get started, create a Data Factory SAP CDC linked service, an SAP CDC source dataset, and a pipeline with a mapping data flow activity in which you use the SAP CDC source dataset. To extract the data from SAP, a self-hosted integration runtime is required that you install on an on-premises computer or on a virtual machine (VM). An on-premises computer has a line of sight to your SAP source systems and to your SLT server. The Data Factory data flow activity runs on a serverless Azure Databricks or Apache Spark cluster, or on an Azure integration runtime.
+To get started, create a Data Factory SAP CDC linked service, an SAP CDC source dataset, and a pipeline with a mapping data flow activity in which you use the SAP CDC source dataset. To extract the data from SAP, a self-hosted integration runtime is required that you install on an on-premises computer or on a virtual machine (VM). An on-premises computer has a line of sight to your SAP source systems and to your SLT server. The Data Factory data flow activity runs on a serverless Azure Databricks or Apache Spark cluster, or on an Azure integration runtime. Staging storage must be configured in the data flow activity so that your self-hosted integration runtime works seamlessly with the data flow integration runtime.
The SAP CDC connector uses the SAP ODP framework to extract various data source types, including:
data-factory Sap Change Data Capture Shir Preparation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/sap-change-data-capture-shir-preparation.md
In Azure Data Factory Studio, [create and configure a self-hosted integration ru
The more CPU cores you have on the computer running the self-hosted integration runtime, the higher your data extraction throughput is. For example, an internal test achieved a higher than 12-MB/s throughput when running parallel extractions on a self-hosted integration runtime computer that has 16 CPU cores.
+> [!NOTE]
+> If you want to use a shared self-hosted integration runtime from another Data Factory, make sure that your Data Factory is in the same region as the other Data Factory. In addition, your Data Flow integration runtime needs to be configured to "Auto Resolve" or to the same region as your Data Factory.
+ ## Download and install the SAP .NET connector Download the latest [64-bit SAP .NET Connector (SAP NCo 3.0)](https://support.sap.com/en/product/connectors/msnet.html) and install it on the computer running the self-hosted integration runtime. During installation, in the **Optional setup steps** dialog, select **Install assemblies to GAC**, and then select **Next**.
data-factory Tutorial Managed Virtual Network Sql Managed Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/tutorial-managed-virtual-network-sql-managed-instance.md
access SQL Managed Instance from Managed VNET using Private Endpoint.
:::image type="content" source="./media/tutorial-managed-virtual-network/sql-mi-access-model.png" alt-text="Screenshot that shows the access model of SQL MI." lightbox="./media/tutorial-managed-virtual-network/sql-mi-access-model-expanded.png":::
+> [!NOTE]
+> When you use this solution to connect to Azure SQL Database Managed Instance, the **"Redirect"** connection policy isn't supported; you need to switch to **"Proxy"** mode.
+++ ## Prerequisites * **Azure subscription**. If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/) before you begin.
data-manager-for-agri How To Set Up Sensors Partner https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-manager-for-agri/how-to-set-up-sensors-partner.md
Hence to enable authentication & authorization, partners will need to do the fol
Partners can access the APIs in customer tenant using the multi-tenant Azure Active Directory App, registered in Azure Active Directory. App registration is done on the Azure portal so the Microsoft identity platform can provide authentication and authorization services for your application which in turn accesses Data Manager for Agriculture.
-Follow the steps provided in <a href="https://docs.microsoft.com/azure/active-directory/develop/quickstart-register-app#register-an-application" target="_blank">App Registration</a> **until the Step 8** to generate the following information:
+Follow the steps provided in [App Registration](/azure/active-directory/develop/quickstart-register-app#register-an-application) **until the Step 8** to generate the following information:
1. **Application (client) ID** 2. **Directory (tenant) ID**
Copy and store all three values as you would need them for generating access tok
The Application (client) ID created is like the User ID of the application, and now you need to create its corresponding Application password (client secret) for the application to identify itself.
-Follow the steps provided in <a href="https://docs.microsoft.com/azure/active-directory/develop/quickstart-register-app#add-a-client-secret" target="_blank">Add a client secret</a> to generate **Client Secret** and copy the client secret generated.
+Follow the steps provided in [Add a client secret](/azure/active-directory/develop/quickstart-register-app#add-a-client-secret) to generate **Client Secret** and copy the client secret generated.
### Registration
Based on the sensors that customers use and their respective sensor partnerΓÇÖs
Customers who choose to onboard to a specific partner will know the app ID of that specific partner. Now using the app ID customer will need to do the following things in sequence.
-1. **Consent** ΓÇô Since the partnerΓÇÖs app resides in a different tenant and the customer wants the partner to access certain APIs in their Data Manager for Agriculture instance, the customers are required to call a specific endpoint (https://login.microsoft.com/common/adminconsent/clientId=[client_id]) and replace the [client_id] with the partnersΓÇÖ app ID. This enables the customersΓÇÖ Azure Active Directory to recognize this APP ID whenever they use it for role assignment.
+1. **Consent** – Since the partner's app resides in a different tenant and the customer wants the partner to access certain APIs in their Data Manager for Agriculture instance, the customers are required to call a specific endpoint `https://login.microsoft.com/common/adminconsent/clientId=[client_id]` and replace the [client_id] with the partners' app ID. This enables the customers' Azure Active Directory to recognize this APP ID whenever they use it for role assignment.
2. **Identity Access Management (IAM)** – As part of identity access management, customers will create a new role assignment for the app ID that was given consent above. Data Manager for Agriculture will create a new role called Sensor Partner (in addition to the existing Admin, Contributor, and Reader roles). Customers will choose the Sensor Partner role, add the partner app ID, and provide access.
databox-online Azure Stack Edge Gpu Clustering Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-clustering-overview.md
Previously updated : 02/22/2022 Last updated : 04/18/2023
Before you configure clustering on your device, you must cable the devices as pe
1. Order two independent Azure Stack Edge devices. For more information, see [Order an Azure Stack Edge device](azure-stack-edge-gpu-deploy-prep.md#create-a-new-resource). 1. Cable each node independently as you would for a single node device. Based on the workloads that you intend to deploy, cross connect the network interfaces on these devices via cables, and with or without switches. For detailed instructions, see [Cable your two-node cluster device](azure-stack-edge-gpu-deploy-install.md#cable-the-device). 1. Start cluster creation on the first node. Choose the network topology that conforms to the cabling across the two nodes. The chosen topology would dictate the storage and clustering traffic between the nodes. See detailed steps in [Configure network and web proxy on your device](azure-stack-edge-gpu-deploy-configure-network-compute-web-proxy.md).
-1. Prepare the second node. Configure the network on the second node the same way you configured it on the first node. Get the authentication token on this node.
+1. Prepare the second node. Configure the network on the second node the same way you configured it on the first node. Ensure that port settings match for the same port name on each appliance. Get the authentication token on this node.
1. Use the authentication token from the prepared node and join this node to the first node to form a cluster. 1. Set up a cloud witness using an Azure Storage account or a local witness on an SMB fileshare. 1. Assign a virtual IP to provide an endpoint for Azure Consistent Services or when using NFS.
Before you configure clustering on your device, you must cable the devices as pe
1. Order two independent Azure Stack Edge devices. For more information, see [Order an Azure Stack Edge device](azure-stack-edge-pro-2-deploy-prep.md#create-a-new-resource). 1. Cable each node independently as you would for a single node device. Based on the workloads that you intend to deploy, cross connect the network interfaces on these devices via cables, and with or without switches. For detailed instructions, see [Cable your two-node cluster device](azure-stack-edge-pro-2-deploy-install.md#cable-the-device). 1. Start cluster creation on the first node. Choose the network topology that conforms to the cabling across the two nodes. The chosen topology would dictate the storage and clustering traffic between the nodes. See detailed steps in [Configure network and web proxy on your device](azure-stack-edge-pro-2-deploy-configure-network-compute-web-proxy.md).
-1. Prepare the second node. Configure the network on the second node the same way you configured it on the first node. Get the authentication token on this node.
+1. Prepare the second node. Configure the network on the second node the same way you configured it on the first node. Ensure that the port settings match for the same port name on each appliance. Get the authentication token on this node.
1. Use the authentication token from the prepared node and join this node to the first node to form a cluster. 1. Set up a cloud witness using an Azure Storage account or a local witness on an SMB fileshare. 1. Assign a virtual IP to provide an endpoint for Azure Consistent Services or when using NFS.
defender-for-cloud Alerts Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/alerts-reference.md
Title: Reference table for all security alerts in Microsoft Defender for Cloud description: This article lists the security alerts visible in Microsoft Defender for Cloud Previously updated : 04/18/2023 Last updated : 04/20/2023 # Security alerts - a reference guide
Microsoft Defender for Containers provides security alerts on the cluster level
| **PowerZure exploitation toolkit used to execute a Runbook in your subscription**<br>(ARM_PowerZure.StartRunbook) | PowerZure exploitation toolkit was used to execute a Runbook. This was detected by analyzing Azure Resource Manager operations in your subscription. | - | High | | **PowerZure exploitation toolkit used to extract Runbooks content**<br>(ARM_PowerZure.AzureRunbookContent) | PowerZure exploitation toolkit was used to extract Runbook content. This was detected by analyzing Azure Resource Manager operations in your subscription. | Collection | High | | **PREVIEW - Azurite toolkit run detected**<br>(ARM_Azurite) | A known cloud-environment reconnaissance toolkit run has been detected in your environment. The tool [Azurite](https://github.com/mwrlabs/Azurite) can be used by an attacker (or penetration tester) to map your subscriptions' resources and identify insecure configurations. | Collection | High |
+| **PREVIEW - Suspicious creation of compute resources detected**<br>(ARM_SuspiciousComputeCreation) | Microsoft Defender for Resource Manager identified a suspicious creation of compute resources in your subscription using Virtual Machines/Azure Scale Sets. The identified operations are designed to allow administrators to efficiently manage their environments by deploying new resources when needed. While this activity may be legitimate, a threat actor might utilize such operations to conduct crypto mining.<br> The activity is deemed suspicious because the scale of the compute resources is higher than previously observed in the subscription. <br> This can indicate that the principal is compromised and is being used with malicious intent. | Impact | Medium |
| **PREVIEW - Suspicious key vault recovery detected**<br>(Arm_Suspicious_Vault_Recovering) | Microsoft Defender for Resource Manager detected a suspicious recovery operation for a soft-deleted key vault resource.<br> The user recovering the resource is different from the user that deleted it. This is highly suspicious because the user rarely invokes such an operation. In addition, the user logged on without multi-factor authentication (MFA).<br> This might indicate that the user is compromised and is attempting to discover secrets and keys to gain access to sensitive resources, or to perform lateral movement across your network. | Lateral movement | Medium/high | | **PREVIEW - Suspicious management session using an inactive account detected**<br>(ARM_UnusedAccountPersistence) | Subscription activity logs analysis has detected suspicious behavior. A principal not in use for a long period of time is now performing actions that can secure persistence for an attacker. | Persistence | Medium | | **PREVIEW - Suspicious invocation of a high-risk 'Credential Access' operation by a service principal detected**<br>(ARM_AnomalousServiceOperation.CredentialAccess) | Microsoft Defender for Resource Manager identified a suspicious invocation of a high-risk operation in your subscription which might indicate an attempt to access credentials. The identified operations are designed to allow administrators to efficiently manage their environments. While this activity may be legitimate, a threat actor might utilize such operations to access restricted credentials and compromise resources in your environment. This can indicate that the service principal is compromised and is being used with malicious intent. | Credential access | Medium |
defender-for-cloud Attack Path Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/attack-path-reference.md
Last updated 04/13/2023
# Reference list of attack paths and cloud security graph components
-This article lists the attack paths, connections, and insights used in Defender for Cloud Security Posture Management (CSPM).
+This article lists the attack paths, connections, and insights used in Defender Cloud Security Posture Management (CSPM).
- You need to [enable Defender CSPM](enable-enhanced-security.md#enable-defender-plans-to-get-the-enhanced-security-features) to view attack paths. - What you see in your environment depends on the resources you're protecting, and your customized configuration.
Learn more about [the cloud security graph, attack path analysis, and the cloud
Prerequisite: For a list of prerequisites, see the [Availability table](how-to-manage-attack-path.md#availability) for attack paths.
-| Attack Path Display Name | Attack Path Description |
+| Attack path display name | Attack path description |
|--|--| | Internet exposed VM has high severity vulnerabilities | A virtual machine is reachable from the internet and has high severity vulnerabilities. | | Internet exposed VM has high severity vulnerabilities and high permission to a subscription | A virtual machine is reachable from the internet, has high severity vulnerabilities, and identity and permission to a subscription. |
Prerequisite: For a list of prerequisites, see the [Availability table](how-to-m
| VM has high severity vulnerabilities and read permission to a key vault | A virtual machine has high severity vulnerabilities and read permission to a key vault. | | VM has high severity vulnerabilities and read permission to a data store | A virtual machine has high severity vulnerabilities and read permission to a data store. |
-### AWS Instances
+### AWS EC2 instances
Prerequisite: [Enable agentless scanning](enable-vulnerability-assessment-agentless.md).
-| Attack Path Display Name | Attack Path Description |
+| Attack path display name | Attack path description |
|--|--| | Internet exposed EC2 instance has high severity vulnerabilities and high permission to an account | An AWS EC2 instance is reachable from the internet, has high severity vulnerabilities and has permission to an account. | | Internet exposed EC2 instance has high severity vulnerabilities and read permission to a DB | An AWS EC2 instance is reachable from the internet, has high severity vulnerabilities and has permission to a database. |
Prerequisite: [Enable agentless scanning](enable-vulnerability-assessment-agentl
### Azure data
-| Attack Path Display Name | Attack Path Description |
+| Attack path display name | Attack path description |
|--|--| | Internet exposed SQL on VM has a user account with commonly used username and allows code execution on the VM (Preview) | SQL on VM is reachable from the internet, has a local user account with a commonly used username (which is prone to brute force attacks), and has vulnerabilities allowing code execution and lateral movement to the underlying VM. <br/> Prerequisite: [Enable Microsoft Defender for SQL servers on machines](defender-for-sql-usage.md) | | Internet exposed SQL on VM has a user account with commonly used username and known vulnerabilities (Preview) | SQL on VM is reachable from the internet, has a local user account with a commonly used username (which is prone to brute force attacks), and has known vulnerabilities (CVEs). <br/> Prerequisite: [Enable Microsoft Defender for SQL servers on machines](defender-for-sql-usage.md) |
Prerequisite: [Enable agentless scanning](enable-vulnerability-assessment-agentl
### AWS data
-| Attack Path Display Name | Attack Path Description |
+| Attack path display name | Attack path description |
|--|--| | Internet exposed AWS S3 Bucket with sensitive data is publicly accessible (Preview) | An S3 bucket with sensitive data is reachable from the internet and allows public read access without authorization required. <br/> Prerequisite: [Enable data-aware security for S3 buckets in Defender CSPM](data-security-posture-enable.md), or [leverage Microsoft Purview Data Catalog to protect sensitive data](information-protection.md). | |Internet exposed SQL on EC2 instance has a user account with commonly used username and allows code execution on the underlying compute (Preview) | Internet exposed SQL on EC2 instance has a user account with commonly used username and allows code execution on the underlying compute. <br/> Prerequisite: [Enable Microsoft Defender for SQL servers on machines](defender-for-sql-usage.md). |
Prerequisite: [Enable agentless scanning](enable-vulnerability-assessment-agentl
Prerequisite: [Enable Defender for Containers](defender-for-containers-enable.md), and install the relevant agents in order to view attack paths that are related to containers. This will also give you the ability to [query](how-to-manage-cloud-security-explorer.md#build-a-query-with-the-cloud-security-explorer) containers data plane workloads in security explorer.
-| Attack Path Display Name | Attack Path Description |
+| Attack path display name | Attack path description |
|--|--| | Internet exposed Kubernetes pod is running a container with RCE vulnerabilities | An internet exposed Kubernetes pod in a namespace is running a container using an image that has vulnerabilities allowing remote code execution. | | Kubernetes pod running on an internet exposed node uses host network is running a container with RCE vulnerabilities | A Kubernetes pod in a namespace with host network access enabled is exposed to the internet via the host network. The pod is running a container using an image that has vulnerabilities allowing remote code execution. |
Prerequisite: [Enable Defender for Containers](defender-for-containers-enable.md
Prerequisite: [Enable Defender for DevOps](defender-for-devops-introduction.md).
-| Attack Path Display Name | Attack Path Description |
+| Attack path display name | Attack path description |
|--|--| | Internet exposed GitHub repository with plaintext secret is publicly accessible (Preview) | A GitHub repository is reachable from the internet, allows public read access without authorization required, and holds plaintext secrets. | ## Cloud security graph components list
-This section lists all of the cloud security graph components (connections and insights) that can be used in queries with the [cloud security explorer](concept-attack-path.md).
+This section lists all of the cloud security graph components (connections and insights) that can be used in queries with the [cloud security explorer](concept-attack-path.md).
### Insights
defender-for-cloud Defender For Storage Data Sensitivity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-storage-data-sensitivity.md
This is a configurable feature in the new Defender for Storage plan. You can cho
Learn more about [scope and limitations of sensitive data scanning](concept-data-security-posture-prepare.md).
-## How does the Sensitive Data Discovery work?
+## How does sensitive data discovery work?
-Sensitive Data Threat Detection is powered by the Sensitive Data Discovery engine, an agentless engine that uses a smart sampling method to find resources with sensitive data.
+Sensitive data threat detection is powered by the sensitive data discovery engine, an agentless engine that uses a smart sampling method to find resources with sensitive data.
The service is integrated with Microsoft Purview's sensitive information types (SITs) and classification labels, allowing seamless inheritance of your organization's sensitivity settings. This ensures that the detection and protection of sensitive data aligns with your established policies and procedures. :::image type="content" source="media/defender-for-storage-data-sensitivity/data-sensitivity-cspm-storage.png" alt-text="Diagram showing how Defender CSPM and Defender for Storage combine to provide data-aware security.":::
-Upon enablement, the Sensitive Data Discovery engine initiates an automatic scanning process across all supported storage accounts. Results are typically generated within 24 hours. Additionally, newly created storage accounts under protected subscriptions will be scanned within six hours of their creation. Recurring scans are scheduled to occur weekly after the enablement date. This is the same Sensitive Data Discovery engine used for sensitive data discovery in Defender CSPM.
+Upon enablement, the engine initiates an automatic scanning process across all supported storage accounts. Results are typically generated within 24 hours. Additionally, newly created storage accounts under protected subscriptions are scanned within six hours of their creation. Recurring scans are scheduled to occur weekly after the enablement date. This is the same engine that Defender CSPM uses to discover sensitive data.
## Prerequisites
-Sensitive data threat detection is available for Blob storage accounts, including: Standard general-purpose V1, Standard general-purpose V2, Azure Data Lake Storage Gen2 and Premium block blobs. Learn more about the [availability of Defender for Storage features](defender-for-storage-introduction.md#availability).
+Sensitive data threat detection is available for Blob storage accounts, including: Standard general-purpose V1, Standard general-purpose V2, Azure Data Lake Storage Gen2, and Premium block blobs. Learn more about the [availability of Defender for Storage features](defender-for-storage-introduction.md#availability).
-To enable sensitive data threat detection at subscription and storage account levels, you need Owner roles (subscription owner/storage account owner) or specific roles with corresponding data actions.
-Learn more about the [roles and permissions](support-matrix-defender-for-storage.md) required for sensitive data threat detection.
+To enable sensitive data threat detection at subscription and storage account levels, you need to have the relevant data-related permissions from the **Subscription owner** or **Storage account owner** roles. Learn more about the [roles and permissions required for sensitive data threat detection](support-matrix-defender-for-storage.md).
## Enabling sensitive data threat detection
-Sensitive data threat detection is enabled by default when you enable Defender for Storage. You can [enable it or disable it](../storage/common/azure-defender-storage-configure.md) in the Azure portal or with other at-scale methods at no additional cost.
+Sensitive data threat detection is enabled by default when you enable Defender for Storage. You can [enable it or disable it](../storage/common/azure-defender-storage-configure.md) in the Azure portal or with other at-scale methods. This feature is included in the price of Defender for Storage.
## Using the sensitivity context in the security alerts
-Sensitive Data Threat Detection capability will help you to prioritize security incidents, allowing security teams to prioritize these incidents and respond on time. Defender for Storage alerts will include findings of sensitivity scanning and indications of operations that have been performed on resources containing sensitive data.
+The sensitive data threat detection capability helps security teams identify and prioritize data security incidents for faster response times. Defender for Storage alerts include findings of sensitivity scanning and indications of operations that have been performed on resources containing sensitive data.
-In the alert's Extended Properties, you can find sensitivity scanning findings for a **blob container**:
+In the alert's extended properties, you can find sensitivity scanning findings for a **blob container**:
- Sensitivity scanning time UTC - when the last scan was performed - Top sensitivity label - the most sensitive label found in the blob container
In the alert's Extended Properties, you can find sensitivity scanning findings
## Integrate with the organizational sensitivity settings in Microsoft Purview (optional)
-When you enable sensitive data threat detection, the sensitive data categories include built-in sensitive information types (SITs) default list of Microsoft Purview. This will affect the alerts you receive from Defender for Storage and storage or containers that are found to contain these SITs are marked as containing sensitive data.
+When you enable sensitive data threat detection, the sensitive data categories include built-in sensitive information types (SITs) in the default list of Microsoft Purview. This affects the alerts you receive from Defender for Storage: storage accounts or containers that are found to contain these SITs are marked as containing sensitive data.
To customize the Data Sensitivity Discovery for your organization, you can [create custom sensitive information types (SITs)](/microsoft-365/compliance/create-a-custom-sensitive-information-type) and connect to your organizational settings with a single step integration. Learn more [here](episode-two.md).
You also can create and publish sensitivity labels for your tenant in Microsoft
## Next steps
-In this article, you learned about Microsoft Defender for Storage.
+In this article, you learned about Microsoft Defender for Storage's sensitive data scanning.
> [!div class="nextstepaction"] > [Enable Defender for Storage](enable-enhanced-security.md)
defender-for-cloud Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/release-notes.md
Title: Release notes for Microsoft Defender for Cloud description: A description of what's new and changed in Microsoft Defender for Cloud Previously updated : 04/18/2023 Last updated : 04/20/2023 # What's new in Microsoft Defender for Cloud?
Updates in April include:
- [Three alerts in the Defender for Resource Manager plan have been deprecated](#three-alerts-in-the-defender-for-resource-manager-plan-have-been-deprecated) - [Alerts automatic export to Log Analytics workspace have been deprecated](#alerts-automatic-export-to-log-analytics-workspace-have-been-deprecated) - [Deprecation and improvement of selected alerts for Windows and Linux Servers](#deprecation-and-improvement-of-selected-alerts-for-windows-and-linux-servers)
+- [New Azure Active Directory authentication-related recommendations for Azure Data Services](#new-azure-active-directory-authentication-related-recommendations-for-azure-data-services)
### Agentless Container Posture in Defender CSPM (Preview)
You can also view the [full list of alerts](alerts-reference.md#defender-for-ser
Read the [Microsoft Defender for Cloud blog](https://techcommunity.microsoft.com/t5/microsoft-defender-for-cloud/defender-for-servers-security-alerts-improvements/ba-p/3714175).
+### New Azure Active Directory authentication-related recommendations for Azure Data Services
+
+We have added four new Azure Active Directory authentication-related recommendations for Azure Data Services.
+
+| Recommendation Name | Recommendation Description | Policy |
+|--|--|--|
+| Azure SQL Managed Instance authentication mode should be Azure Active Directory Only | Disabling local authentication methods and allowing only Azure Active Directory Authentication improves security by ensuring that Azure SQL Managed Instances can exclusively be accessed by Azure Active Directory identities. | [Azure SQL Managed Instance should have Azure Active Directory Only Authentication enabled](https://portal.azure.com/#view/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f78215662-041e-49ed-a9dd-5385911b3a1f) |
+| Azure Synapse Workspace authentication mode should be Azure Active Directory Only | Azure Active Directory only authentication methods improve security by ensuring that Synapse Workspaces exclusively require Azure AD identities for authentication. [Learn more](https://aka.ms/Synapse). | [Synapse Workspaces should use only Azure Active Directory identities for authentication](https://portal.azure.com/#view/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f2158ddbe-fefa-408e-b43f-d4faef8ff3b8) |
+| Azure Database for MySQL should have an Azure Active Directory administrator provisioned | Provision an Azure AD administrator for your Azure Database for MySQL to enable Azure AD authentication. Azure AD authentication enables simplified permission management and centralized identity management of database users and other Microsoft services | [An Azure Active Directory administrator should be provisioned for MySQL servers](https://portal.azure.com/#view/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f146412e9-005c-472b-9e48-c87b72ac229e) |
+| Azure Database for PostgreSQL should have an Azure Active Directory administrator provisioned | Provision an Azure AD administrator for your Azure Database for PostgreSQL to enable Azure AD authentication. Azure AD authentication enables simplified permission management and centralized identity management of database users and other Microsoft services | [An Azure Active Directory administrator should be provisioned for PostgreSQL servers](https://portal.azure.com/#view/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2fb4dec045-250a-48c2-b5cc-e0c4eec8b5b4) |
## March 2023 Updates in March include:
defender-for-cloud Secure Score Security Controls https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/secure-score-security-controls.md
Title: Secure score description: Description of Microsoft Defender for Cloud's secure score and its security controls Previously updated : 03/05/2023 Last updated : 04/20/2023 # Secure score
For more information, see [How your secure score is calculated](secure-score-sec
On the Security posture page, you're able to see the secure score for your entire subscription, and each environment in your subscription. By default all environments are shown. | Page section | Description | |--|--| | :::image type="content" source="media/secure-score-security-controls/select-environment.png" alt-text="Screenshot showing the different environment options."::: | Select your environment to see its secure score, and details. Multiple environments can be selected at once. The page will change based on your selection here.|
-| :::image type="content" source="media/secure-score-security-controls/environment.png" alt-text="Screenshot of the environment section of the security posture page."::: | Shows the total number of subscriptions, accounts and projects that affect your overall score. It also shows how many unhealthy resources and how many recommendations exist in your environments. |
+| :::image type="content" source="media/secure-score-security-controls/environment.png" alt-text="Screenshot of the environment section of the security posture page." lightbox="media/secure-score-security-controls/environment.png"::: | Shows the total number of subscriptions, accounts and projects that affect your overall score. It also shows how many unhealthy resources and how many recommendations exist in your environments. |
The bottom half of the page allows you to view and manage the individual secure scores, the number of unhealthy resources, and even the recommendations for all of your individual subscriptions, accounts, and projects.
defender-for-cloud Tutorial Security Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/tutorial-security-policy.md
To view your security policies in Defender for Cloud:
**Deny** prevents deployment of non-compliant resources based on recommendation logic.<br> **Disabled** prevents the recommendation from running.
- :::image type="content" source="./media/tutorial-security-policy/default-assignment-screen.png" alt-text="Screenshot showing the edit default assignment screen." lightbox="/media/tutorial-security-policy/default-assignment-screen.png":::
+ :::image type="content" source="./media/tutorial-security-policy/default-assignment-screen.png" alt-text="Screenshot showing the edit default assignment screen." lightbox="./media/tutorial-security-policy/default-assignment-screen.png":::
## Enable a security recommendation
This page explained security policies. For related information, see the followin
- [Learn how to set policies using PowerShell](../governance/policy/assign-policy-powershell.md) - [Learn how to edit a security policy in Azure Policy](../governance/policy/tutorials/create-and-manage.md) - [Learn how to set a policy across subscriptions or on Management groups using Azure Policy](../governance/policy/overview.md)-- [Learn how to enable Defender for Cloud on all subscriptions in a management group](onboard-management-group.md)
+- [Learn how to enable Defender for Cloud on all subscriptions in a management group](onboard-management-group.md)
defender-for-cloud Upcoming Changes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/upcoming-changes.md
Title: Important changes coming to Microsoft Defender for Cloud description: Upcoming changes to Microsoft Defender for Cloud that you might need to be aware of and for which you might need to plan Previously updated : 04/18/2023 Last updated : 04/20/2023 # Important upcoming changes to Microsoft Defender for Cloud
If you're looking for the latest release notes, you'll find them in the [What's
| Planned change | Estimated date for change | |--|--| | [Deprecation of legacy compliance standards across cloud environments](#deprecation-of-legacy-compliance-standards-across-cloud-environments) | April 2023 |
-| [New Azure Active Directory authentication-related recommendations for Azure Data Services](#new-azure-active-directory-authentication-related-recommendations-for-azure-data-services) | April 2023 |
| [Multiple changes to identity recommendations](#multiple-changes-to-identity-recommendations) | May 2023 | | [DevOps Resource Deduplication for Defender for DevOps](#devops-resource-deduplication-for-defender-for-devops) | June 2023 |
If you're looking for the latest release notes, you'll find them in the [What's
**Estimated date for change: April 2023**
-We're announcing the full deprecation of support of [`PCI DSS`](/azure/compliance/offerings/offering-pci-dss) standard/initiative in Azure China 21Vianet.
+We're announcing the full deprecation of support of [PCI DSS](/azure/compliance/offerings/offering-pci-dss) standard/initiative in Azure China 21Vianet.
Legacy PCI DSS v3.2.1 and legacy SOC TSP are set to be fully deprecated and replaced by [SOC 2 Type 2](/azure/compliance/offerings/offering-soc-2) initiative and [PCI DSS v4](/azure/compliance/offerings/offering-pci-dss) initiative. Learn how to [Customize the set of standards in your regulatory compliance dashboard](update-regulatory-compliance-packages.md).
We recommend updating custom scripts, workflows, and governance rules to corresp
We've improved the coverage of the V2 identity recommendations by scanning all Azure resources (rather than just subscriptions) which allows security administrators to view role assignments per account. These changes may result in changes to your Secure Score throughout the GA process.
-### Deprecation of legacy compliance standards across cloud environments
-
-**Estimated date for change: April 2023**
-
-We're announcing the full deprecation of support of [`PCI DSS`](/azure/compliance/offerings/offering-pci-dss) standard/initiative in Azure China 21Vianet.
-
-Legacy PCI DSS v3.2.1 and legacy SOC TSP are set to be fully deprecated and replaced by [SOC 2 Type 2](/azure/compliance/offerings/offering-soc-2) initiative and [`PCI DSS v4`](/azure/compliance/offerings/offering-pci-dss) initiative.
-Learn how to [Customize the set of standards in your regulatory compliance dashboard](update-regulatory-compliance-packages.md).
-
-### New Azure Active Directory authentication-related recommendations for Azure Data Services
-
-**Estimated date for change: April 2023**
-
-| Recommendation Name | Recommendation Description | Policy |
-|--|--|--|
-| Azure SQL Managed Instance authentication mode should be Azure Active Directory Only | Disabling local authentication methods and allowing only Azure Active Directory Authentication improves security by ensuring that Azure SQL Managed Instances can exclusively be accessed by Azure Active Directory identities. Learn more at: aka.ms/adonlycreate | [Azure SQL Managed Instance should have Azure Active Directory Only Authentication enabled](https://portal.azure.com/#view/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f78215662-041e-49ed-a9dd-5385911b3a1f) |
-| Azure Synapse Workspace authentication mode should be Azure Active Directory Only | Azure Active Directory only authentication methods improves security by ensuring that Synapse Workspaces exclusively require Azure AD identities for authentication. Learn more at: https://aka.ms/Synapse | [Synapse Workspaces should use only Azure Active Directory identities for authentication](https://portal.azure.com/#view/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f2158ddbe-fefa-408e-b43f-d4faef8ff3b8) |
-| Azure Database for MySQL should have an Azure Active Directory administrator provisioned | Provision an Azure AD administrator for your Azure Database for MySQL to enable Azure AD authentication. Azure AD authentication enables simplified permission management and centralized identity management of database users and other Microsoft services | Based on policy: [An Azure Active Directory administrator should be provisioned for MySQL servers](https://portal.azure.com/#view/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f146412e9-005c-472b-9e48-c87b72ac229e) |
-| Azure Database for PostgreSQL should have an Azure Active Directory administrator provisioned | Provision an Azure AD administrator for your Azure Database for PostgreSQL to enable Azure AD authentication. Azure AD authentication enables simplified permission management and centralized identity management of database users and other Microsoft services | Based on policy: [An Azure Active Directory administrator should be provisioned for PostgreSQL servers](https://portal.azure.com/#view/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2fb4dec045-250a-48c2-b5cc-e0c4eec8b5b4) |
- ### Multiple changes to identity recommendations **Estimated date for change: May 2023**
defender-for-iot How To Forward Alert Information To Partners https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-forward-alert-information-to-partners.md
The following sections describe the syslog output syntax for each format.
| Name | Description | |--|--|
-| Date and Time | Date and time that the syslog server machine received the information. |
| Priority | User.Alert |
+| Date and Time | Date and time that the syslog server machine received the information. |
| Hostname | Sensor IP | | Message | Sensor name: The name of the appliance. <br /> Alert time: The time that the alert was detected: Can vary from the time of the syslog server machine, and depends on the time-zone configuration of the forwarding rule. <br /> Alert Title:  The title of the alert. <br /> Alert message: The message of the alert. <br /> Alert severity: The severity of the alert: **Warning**, **Minor**, **Major**, or **Critical**. <br /> Alert type: **Protocol Violation**, **Policy Violation**, **Malware**, **Anomaly**, or **Operational**. <br /> Protocol: The protocol of the alert. <br /> **Source_MAC**: IP address, name, vendor, or OS of the source device. <br /> Destination_MAC: IP address, name, vendor, or OS of the destination. If data is missing, the value will be **N/A**. <br /> alert_group: The alert group associated with the alert. |
The following sections describe the syslog output syntax for each format.
| Name | Description | |--|--| | Priority | User.Alert |
-| Date and time | Date and time that sensor sent the information |
+| Date and time | Date and time that the sensor sent the information, in UTC format |
| Hostname | Sensor hostname | | Message | CEF:0 <br />Microsoft Defender for IoT/CyberX <br />Sensor name <br />Sensor version <br />Microsoft Defender for IoT Alert <br />Alert title <br />Integer indication of severity. 1=**Warning**, 4=**Minor**, 8=**Major**, or 10=**Critical**.<br />msg= The message of the alert. <br />protocol= The protocol of the alert. <br />severity= **Warning**, **Minor**, **Major**, or **Critical**. <br />type= **Protocol Violation**, **Policy Violation**, **Malware**, **Anomaly**, or **Operational**. <br />UUID= UUID of the alert (Optional) <br /> start= The time that the alert was detected. <br />Might vary from the time of the syslog server machine, and depends on the time-zone configuration of the forwarding rule. <br />src_ip= IP address of the source device. (Optional) <br />src_mac= MAC address of the source device. (Optional) <br />dst_ip= IP address of the destination device. (Optional)<br />dst_mac= MAC address of the destination device. (Optional)<br />cat= The alert group associated with the alert. |
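To make the CEF fields listed above concrete, here's an illustrative sketch that assembles an alert into a CEF-style syslog message following the field order in the table; all sample values are hypothetical, and the sensor itself generates the real output.

```python
# Illustrative only: assemble a CEF-style message using the fields described in the table above.
# Sample values are hypothetical; the Defender for IoT sensor produces the real messages.

alert = {
    "sensor_name": "sensor-01",
    "sensor_version": "22.x",          # hypothetical version string
    "title": "Unauthorized PLC stop command",
    "severity_int": 8,                 # 1=Warning, 4=Minor, 8=Major, 10=Critical
    "msg": "A stop command was sent to a PLC.",
    "protocol": "MODBUS",
    "severity": "Major",
    "type": "Operational",
    "start": "2023-04-20T10:15:00Z",
    "src_ip": "10.0.0.5",
    "dst_ip": "10.0.0.9",
    "cat": "Operational issues",
}

header = "|".join([
    "CEF:0",
    "Microsoft Defender for IoT/CyberX",
    alert["sensor_name"],
    alert["sensor_version"],
    "Microsoft Defender for IoT Alert",
    alert["title"],
    str(alert["severity_int"]),
])

extensions = (
    f"msg={alert['msg']} protocol={alert['protocol']} severity={alert['severity']} "
    f"type={alert['type']} start={alert['start']} src_ip={alert['src_ip']} "
    f"dst_ip={alert['dst_ip']} cat={alert['cat']}"
)

print(f"{header}|{extensions}")
```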
The following sections describe the syslog output syntax for each format.
| Name | Description | |--|--|
-| Date and time | Date and time that the syslog server machine received the information. |
| Priority | User.Alert |
+| Date and time | Date and time that the sensor sent the information, in UTC format |
| Hostname | Sensor IP | | Message | Sensor name: The name of the Microsoft Defender for IoT appliance. <br />LEEF:1.0 <br />Microsoft Defender for IoT <br />Sensor <br />Sensor version <br />Microsoft Defender for IoT Alert <br /> Title:  The title of the alert. <br />msg: The message of the alert. <br />protocol: The protocol of the alert.<br />severity: **Warning**, **Minor**, **Major**, or **Critical**. <br />type: The type of the alert: **Protocol Violation**, **Policy Violation**, **Malware**, **Anomaly**, or **Operational**. <br />start: The time of the alert. It may be different from the time of the syslog server machine, and depends on the time-zone configuration. <br />src_ip: IP address of the source device.<br />dst_ip: IP address of the destination device. <br />cat: The alert group associated with the alert. |
dev-box How To Get Help https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/how-to-get-help.md
+
+ Title: Get support for Microsoft Dev Box
+description: Learn how to choose the appropriate channel to get support for Microsoft Dev Box, and what options you should try before opening a support request.
++++ Last updated : 04/20/2023+++
+# Get support for Microsoft Dev Box
+
+There are multiple channels available to give you help and support in Microsoft Dev Box. The support channel you choose depends on your role and your access to existing Dev Box resources. The fastest way to get help is to use the correct channel.
+
+In the following table, select your role to see how to get help for urgent issues. For issues that aren't urgent, select an option from the last column.
+
+|My role is: |My issue is urgent: |My issue isn't urgent: |
+||-||
+|Administrator </br>(IT admin, Dev infra admin) |[Support for administrators](#support-for-administrators) |[Learn about the Microsoft Developer Community](#discover-the-microsoft-developer-community-for-dev-box)|
+|Dev team lead </br>(Project Admin) |[Support for dev team leads](#support-for-dev-team-leads) |[Search Microsoft Developer Community](https://developercommunity.microsoft.com/devbox) |
+|Developer </br>(Dev Box User)|[Support for developers](#support-for-developers) |[Report to Microsoft Developer Community](https://developercommunity.microsoft.com/devbox/report) |
+
+## Support for administrators
+Administrators include IT admins, Dev infrastructure admins, and anyone who has administrative access to all your Dev Box resources.
+#### 1. Internal troubleshooting
+Always use your internal troubleshooting processes before contacting support. As a dev infrastructure admin, you have access to all Dev Box resources through the Azure portal and through the Azure CLI.
+#### 2. Contact support
+If you can't resolve the issue, open a support request to contact Azure support:
+
+ **[Contact Microsoft Azure Support - Microsoft Support](https://support.microsoft.com/topic/contact-microsoft-azure-support-2315e669-8b1f-493b-5fb1-d88a8736ffe4).**
+
+- To learn more about support requests, refer to: [Create an Azure support request](../azure-portal/supportability/how-to-create-azure-support-request.md).
+
+## Support for dev team leads
+Developer team leads are often assigned the DevCenter Project Admin role. Project Admins have access to manage projects and pools.
+#### 1. Internal troubleshooting
+Always use your internal troubleshooting processes before escalating an issue.
+As a DevCenter Project Admin, you can:
+- View the network connections attached to the dev center.
+- View the dev box definitions attached to the dev center.
+- Create, view, update, and delete dev box pools in the project.
+
+#### 2. Contact your dev infrastructure admin
+If you can't resolve the issue, escalate it to your dev infrastructure admin.
+
+## Support for developers
+As a developer, you're assigned the Dev Box User role, which enables you to create and manage your own dev boxes through the developer portal. You don't usually have permissions to manage Dev Box resources in the Azure portal; your dev team lead manages those resources.
+#### 1. Internal troubleshooting
+Always use your internal troubleshooting processes before escalating an issue.
+As a developer, you can troubleshoot your dev boxes through the developer portal, where you can create, start, stop, and delete your dev boxes.
+#### 2. Contact your dev team lead
+If you can't resolve the issue, escalate it to your dev team lead (DevCenter Project Admin).
+## Discover the Microsoft Developer Community for Dev Box
+
+For issues that aren't urgent, get involved with the Microsoft Developer Community. You can discover how others are solving their issues, discuss approaches to working with Dev Box, and suggest new features and enhancements for the product.
+
+#### Find similar issues
+The Microsoft Developer Community can provide useful information on non-urgent issues.
+
+Start by exploring existing feedback for the issue you're experiencing: [Search Microsoft Developer Community](https://developercommunity.microsoft.com/devbox).
+
+#### Report an issue to the product team
+If you don't see your issue in the discussion forum, you can report it to the product team through the Microsoft Developer Community portal: [Report to Microsoft Developer Community](https://developercommunity.microsoft.com/devbox/report).
+
+## Next steps
+
+- To learn more about support requests, refer to: [Create an Azure support request](../azure-portal/supportability/how-to-create-azure-support-request.md).
devtest-labs Devtest Lab Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/devtest-lab-overview.md
description: Learn how DevTest Labs makes it easy to create, manage, and monitor
Previously updated : 03/03/2022 Last updated : 04/20/2023 # What is Azure DevTest Labs?
digital-twins Concepts Ontologies Convert https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/concepts-ontologies-convert.md
description: Understand the process of converting industry-standard models into DTDL for Azure Digital Twins Previously updated : 03/05/2023 Last updated : 04/21/2023
You can use this sample to see the conversion patterns in context, and to have a
### OWL2DTDL converter
-The [OWL2DTDL Converter](https://github.com/Azure/opendigitaltwins-tools/tree/master/OWL2DTDL) is a sample that translates an OWL ontology into a set of DTDL interface declarations, which can be used with the Azure Digital Twins service. It also works for ontology networks, made of one root ontology reusing other ontologies through `owl:imports` declarations.
+The [OWL2DTDL Converter](https://github.com/Azure/opendigitaltwins-tools/tree/master/OWL2DTDL) is a sample code base that translates an OWL ontology into a set of DTDL interface declarations, which can be used with the Azure Digital Twins service. It also works for ontology networks, made of one root ontology reusing other ontologies through `owl:imports` declarations. This converter was used to translate the [Real Estate Core Ontology](https://doc.realestatecore.io/3.1/full.html) to DTDL and can be used for any OWL-based ontology.
-This converter was used to translate the [Real Estate Core Ontology](https://doc.realestatecore.io/3.1/full.html) to DTDL and can be used for any OWL-based ontology.
+This sample code isn't a comprehensive solution that supports the entirety of the OWL spec, but it can give you ideas and starting code that you can use in developing your own ontology ingestion pipelines.
## Next steps
dms Known Issues Azure Sql Migration Azure Data Studio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/known-issues-azure-sql-migration-azure-data-studio.md
description: Known issues, limitations and troubleshooting guide for Azure SQL Migration extension for Azure Data Studio Previously updated : 03/14/2023 Last updated : 04/21/2023
Known issues and troubleshooting steps associated with the Azure SQL Migration extension for Azure Data Studio.
-> [!NOTE]
-> When checking migration details using the Azure Portal, Azure Data Studio or PowerShell / Azure CLI you might see the following error: *Operation Id {your operation id} was not found*. This can either be because you provided an operationId as part of an api parameter in your get call that does not exist, or the migration details of your migration were deleted as part of a cleanup operation.
-
+> [!IMPORTANT]
+> The latest version of Integration Runtime (5.28.8488) prevents access to a network file share on a local host. This security measure leads to failures when performing migrations to Azure SQL using DMS. Ensure that you run Integration Runtime on a different machine than the one hosting the network share.
## Error code: 2007 - CutoverFailedOrCancelled
WHERE STEP in (3,4,6);
- **Recommendation**: For more troubleshooting steps, see [Troubleshoot Azure Data Factory and Synapse pipelines](../data-factory/data-factory-troubleshoot-guide.md#error-code-2108). +
+## Error code: 2049 - FileShareTestConnectionFailed
+
+- **Message**: `The value of the property '' is invalid: 'Access to <share path> is denied, resolved IP address is <IP address>, network type is OnPremise'.`
+
+- **Cause**: The network share where the database backups are stored is on the same machine as the self-hosted Integration Runtime (SHIR).
+
+- **Recommendation**: The latest version of Integration Runtime (**5.28.8488**) prevents access to a network file share on a local host. Ensure that you run Integration Runtime on a different machine than the one hosting the network share. If hosting the self-hosted Integration Runtime and the network share on different machines isn't possible with your current migration setup, you can opt out by using ```DisableLocalFolderPathValidation```.
+ > [!NOTE]
+ > For more information, see [Set up an existing self-hosted IR via local PowerShell](../data-factory/create-self-hosted-integration-runtime.md#set-up-an-existing-self-hosted-ir-via-local-powershell). Use this option with discretion, as it's less secure.
++ ## Error code: 2056 - SqlInfoValidationFailed - **Message**: CollationMismatch: `Source database collation <CollationOptionSource> is not the same as the target database <CollationOptionTarget>. Source database: <SourceDatabaseName> Target database: <TargetDatabaseName>.`
event-grid Event Filtering https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/event-filtering.md
For events in the **Event Grid schema**, use the following values for the key: `
For events in **Cloud Events schema**, use the following values for the key: `eventid`, `source`, `eventtype`, `eventtypeversion`, or event data (like `data.key1`).
-For **custom input schema**, use the event data fields (like `data.key1`). To access fields in the data section, use the `.` (dot) notation. For example, `data.sitename`, `data.appEventTypeDetail.action` to access `sitename` or `action` for the following sample event.
+For **custom input schema**, use the event data fields (like `data.key1`). To access fields in the data section, use the `.` (dot) notation. For example, use `data.siteName` or `data.appEventTypeDetail.action` to access `siteName` or `action` in the following sample event.
```json "data": {
FOR_EACH filter IN (a, b, c)
String comparisons aren't case-sensitive. > [!NOTE]
-> If the event JSON doesn't contain the advanced filter key, filter is evaulated as **not matched** for the following operators: NumberGreaterThan, NumberGreaterThanOrEquals, NumberLessThan, NumberLessThanOrEquals, NumberIn, BoolEquals, StringContains, StringNotContains, StringBeginsWith, StringNotBeginsWith, StringEndsWith, StringNotEndsWith, StringIn.
+> If the event JSON doesn't contain the advanced filter key, filter is evaluated as **not matched** for the following operators: NumberGreaterThan, NumberGreaterThanOrEquals, NumberLessThan, NumberLessThanOrEquals, NumberIn, BoolEquals, StringContains, StringNotContains, StringBeginsWith, StringNotBeginsWith, StringEndsWith, StringNotEndsWith, StringIn.
>
->The filter is evaulated as **matched** for the following operators:NumberNotIn, StringNotIn.
+>The filter is evaluated as **matched** for the following operators: NumberNotIn, StringNotIn.
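As an illustration of the key resolution and missing-key behavior described in the note above (not the service's actual implementation), the following sketch resolves a dot-notation key such as `data.appEventTypeDetail.action` against an event and treats a missing key as not matched for a StringBeginsWith-style check.

```python
# Illustrative sketch of advanced-filter key resolution; not the Event Grid implementation.

def resolve_key(event: dict, key: str):
    """Walk a dot-notation key, for example 'data.appEventTypeDetail.action'."""
    value = event
    for part in key.split("."):
        if not isinstance(value, dict) or part not in value:
            return None  # key is missing from the event
        value = value[part]
    return value

def string_begins_with(event: dict, key: str, prefixes) -> bool:
    """Per the note above, a missing key evaluates as not matched for StringBeginsWith."""
    value = resolve_key(event, key)
    if value is None:
        return False
    # String comparisons aren't case-sensitive.
    return any(str(value).lower().startswith(str(p).lower()) for p in prefixes)

event = {"data": {"siteName": "contoso-site", "appEventTypeDetail": {"action": "Restarted"}}}
print(string_begins_with(event, "data.appEventTypeDetail.action", ["Restart"]))  # True
print(string_begins_with(event, "data.missingKey", ["x"]))                       # False: key missing
```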
## IsNullOrUndefined
event-grid Event Schema Data Manager For Agriculture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/event-schema-data-manager-for-agriculture.md
Title: Azure Data Manager for Agriculture description: Describes the properties that are provided for Azure Data Manager for Agriculture events with Azure Event Grid. Previously updated : 03/02/2023 Last updated : 04/17/2023 # Azure Data Manager for Agriculture (Preview) as Event Grid source
This article provides the properties and schema for Azure Data Manager for Agric
## Available event types
-### Farm management related event types
+### Farm management related event types (Preview)
|Event Name | Description| |:--:|:-:| |Microsoft.AgFoodPlatform.PartyChanged|Published when a `Party` is created/updated/deleted.|
-|Microsoft.AgFoodPlatform.FarmChangedV2| Published when a `Farm` is created/updated/deleted.|
-|Microsoft.AgFoodPlatform.FieldChangedV2|Published when a `Field` is created/updated/deleted.|
+|Microsoft.AgFoodPlatform.FarmChanged.V2| Published when a `Farm` is created/updated/deleted.|
+|Microsoft.AgFoodPlatform.FieldChanged.V2|Published when a `Field` is created/updated/deleted.|
|Microsoft.AgFoodPlatform.SeasonChanged|Published when a `Season` is created/updated/deleted.|
-|Microsoft.AgFoodPlatform.SeasonalFieldChangedV2|Published when a `Seasonal Field` is created/updated/deleted.|
-|Microsoft.AgFoodPlatform.BoundaryChangedV2|Published when a `Boundary` is created/updated/deleted.|
+|Microsoft.AgFoodPlatform.SeasonalFieldChanged.V2|Published when a `Seasonal Field` is created/updated/deleted.|
+|Microsoft.AgFoodPlatform.BoundaryChanged.V2|Published when a `Boundary` is created/updated/deleted.|
|Microsoft.AgFoodPlatform.CropChanged|Published when a `Crop` is created/updated/deleted.| |Microsoft.AgFoodPlatform.CropProductChanged|Published when a `Crop Product` is created /updated/deleted.|
-|Microsoft.AgFoodPlatform.AttachmentChangedV2|Published when an `Attachment` is created/updated/deleted.
-|Microsoft.AgFoodPlatform.ManagementZoneChangedV2|Published when a `Management Zone` is created/updated/deleted.|
-|Microsoft.AgFoodPlatform.ZoneChangedV2|Published when an `Zone` is created/updated/deleted.|
+|Microsoft.AgFoodPlatform.AttachmentChanged.V2|Published when an `Attachment` is created/updated/deleted.
+|Microsoft.AgFoodPlatform.ManagementZoneChanged.V2|Published when a `Management Zone` is created/updated/deleted.|
+|Microsoft.AgFoodPlatform.ZoneChanged.V2|Published when a `Zone` is created/updated/deleted.|
-### Satellite data related event types
+### Satellite data related event types (Preview)
|Event Name | Description| |:--:|:-:|
-|Microsoft.AgFoodPlatform.SatelliteDataIngestionJobStatusChangedV2| Published when a satellite data ingestion job's status is changed, for example, job is created, has progressed or completed.|
+|Microsoft.AgFoodPlatform.SatelliteDataIngestionJobStatusChanged.V2| Published when a satellite data ingestion job's status is changed, for example, job is created, has progressed or completed.|
-### Weather data related event types
+### Weather data related event types (Preview)
|Event Name | Description| |:--:|:-:|
-|Microsoft.AgFoodPlatform.WeatherDataIngestionJobStatusChangedV2|Published when a weather data ingestion job's status is changed, for example, job is created, has progressed or completed.|
-|Microsoft.AgFoodPlatform.WeatherDataRefresherJobStatusChangedV2| Published when a weather data refresher job status is changed, for example, job is created, has progressed or completed.|
+|Microsoft.AgFoodPlatform.WeatherDataIngestionJobStatusChanged.V2|Published when a weather data ingestion job's status is changed, for example, job is created, has progressed or completed.|
+|Microsoft.AgFoodPlatform.WeatherDataRefresherJobStatusChanged.V2| Published when a weather data refresher job status is changed, for example, job is created, has progressed or completed.|
-### Farm activities data related event types
+### Farm activities data related event types (Preview)
|Event Name | Description| |:--:|:-:|
-|Microsoft.AgFoodPlatform.ApplicationDataChangedV2|Published when an `Application Data` is created/updated/deleted.|
-|Microsoft.AgFoodPlatform.HarvestDataChangedV2|Published when a `Harvesting Data` is created/updated/deleted.|
-|Microsoft.AgFoodPlatform.TillageDataChangedV2|Published when a `Tillage Data` is created/updated/deleted.|
-|Microsoft.AgFoodPlatform.PlantingDataChangedV2|Published when a `Planting Data` is created/updated/deleted.|
-|Microsoft.AgFoodPlatform.ImageProcessingRasterizeJobStatusChangedV2|Published when an image-processing rasterizes job's status is changed, for example, job is created, has progressed or completed.|
-|Microsoft.AgFoodPlatform.FarmOperationDataIngestionJobStatusChangedV2| Published when a farm operations data ingestion job's status is changed, for example, job is created, has progressed or completed.|
+|Microsoft.AgFoodPlatform.ApplicationDataChanged.V2|Published when an `Application Data` is created/updated/deleted.|
+|Microsoft.AgFoodPlatform.HarvestDataChanged.V2|Published when a `Harvesting Data` is created/updated/deleted.|
+|Microsoft.AgFoodPlatform.TillageDataChanged.V2|Published when a `Tillage Data` is created/updated/deleted.|
+|Microsoft.AgFoodPlatform.PlantingDataChanged.V2|Published when a `Planting Data` is created/updated/deleted.|
+|Microsoft.AgFoodPlatform.ImageProcessingRasterizeJobStatusChanged.V2|Published when an image-processing rasterizes job's status is changed, for example, job is created, has progressed or completed.|
+|Microsoft.AgFoodPlatform.FarmOperationDataIngestionJobStatusChanged.V2| Published when a farm operations data ingestion job's status is changed, for example, job is created, has progressed or completed.|
-### Sensor data related event types
+### Sensor data related event types (Preview)
|Event Name | Description| |:--:|:-:|
-|Microsoft.AgFoodPlatform.SensorMappingChangedV2|Published when a `Sensor Mapping` is created/updated/deleted.|
-|Microsoft.AgFoodPlatform.SensorPartnerIntegrationChangedV2|Published when a `Sensor Partner Integration` is created/updated/deleted.|
+|Microsoft.AgFoodPlatform.SensorMappingChanged.V2|Published when a `Sensor Mapping` is created/updated/deleted.|
+|Microsoft.AgFoodPlatform.SensorPartnerIntegrationChanged.V2|Published when a `Sensor Partner Integration` is created/updated/deleted.|
|Microsoft.AgFoodPlatform.DeviceDataModelChanged|Published when `Device Data Model` is created/updated/deleted.| |Microsoft.AgFoodPlatform.DeviceChanged|Published when a `Device` is created/updated/deleted.|
-|Microsoft.AgFoodPlatform.SensorDataModelChanged|Published when a `Sensor Data Model` is created/updated/deleted.|
+|Microsoft.AgFoodPlatform.SensorDataModelChanged |Published when a `Sensor Data Model` is created/updated/deleted.|
|Microsoft.AgFoodPlatform.SensorChanged|Published when a `Sensor` is created/updated/deleted.|
-### Insight and observations related event types
+### Insight and observations related event types (Preview)
|Event Name | Description| |:--:|:-:|
-|Microsoft.AgFoodPlatform.PrescriptionChangedV2|Published when a `Prescription` is created/updated/deleted.|
-|Microsoft.AgFoodPlatform.PrescriptionMapChangedV2|Published when a `Prescription Map` is created/updated/deleted.|
-|Microsoft.AgFoodPlatform.PlantTissueAnalysisChangedV2|Published when a `Plant Tissue Analysis` data is created/updated/deleted.|
-|Microsoft.AgFoodPlatform.NutrientAnalysisChangedV2|Published when a `Nutrient Analysis` data is created/updated/deleted.|
-|Microsoft.AgFoodPlatform.InsightChangedV2| Published when an `Insight` is created/updated/deleted.|
-|Microsoft.AgFoodPlatform.InsightAttachmentChangedV2| Published when an `Insight Attachment` is created/updated/deleted.|
+|Microsoft.AgFoodPlatform.PrescriptionChanged.V2|Published when a `Prescription` is created/updated/deleted.|
+|Microsoft.AgFoodPlatform.PrescriptionMapChanged.V2|Published when a `Prescription Map` is created/updated/deleted.|
+|Microsoft.AgFoodPlatform.PlantTissueAnalysisChanged.V2|Published when a `Plant Tissue Analysis` data is created/updated/deleted.|
+|Microsoft.AgFoodPlatform.NutrientAnalysisChanged.V2|Published when a `Nutrient Analysis` data is created/updated/deleted.|
+|Microsoft.AgFoodPlatform.InsightChanged.V2| Published when an `Insight` is created/updated/deleted.|
+|Microsoft.AgFoodPlatform.InsightAttachmentChanged.V2| Published when an `Insight Attachment` is created/updated/deleted.|
-### Model inference jobs related event types
+### Model inference jobs related event types (Preview)
|Event Name | Description| |:--:|:-:|
-|Microsoft.AgFoodPlatform.BiomassModelJobStatusChangedV2|Published when a Biomass Model job's status is changed, for example, job is created, has progressed or completed.|
-|Microsoft.AgFoodPlatform.SoilMoistureModelJobStatusChangedV2|Published when a Soil Moisture Model job's status is changed, for example, job is created, has progressed or completed.|
-|Microsoft.AgFoodPlatform.SensorPlacementModelJobStatusChangedV2|Published when a Sensor Placement Model job's status is changed, for example, job is created, has progressed or completed.|
+|Microsoft.AgFoodPlatform.BiomassModelJobStatusChanged.V2|Published when a Biomass Model job's status is changed, for example, job is created, has progressed or completed.|
+|Microsoft.AgFoodPlatform.SoilMoistureModelJobStatusChanged.V2|Published when a Soil Moisture Model job's status is changed, for example, job is created, has progressed or completed.|
+|Microsoft.AgFoodPlatform.SensorPlacementModelJobStatusChanged.V2|Published when a Sensor Placement Model job's status is changed, for example, job is created, has progressed or completed.|
## Example events
event-grid Handler Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/handler-functions.md
We recommend that you use the first approach (Event Grid trigger) as it has the
{ "resourceId": "/subscriptions/<AZURE SUBSCRIPTION ID>/resourceGroups/<RESOURCE GROUP NAME>/providers/Microsoft.Web/sites/<FUNCTION APP NAME>/functions/<FUNCTION NAME>", "maxEventsPerBatch": 10,
- "preferredBatchSizeInKilobytes": 6400
+ "preferredBatchSizeInKilobytes": 64
} }, "eventDeliverySchema": "EventGridSchema"
event-grid Partner Events Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/partner-events-overview.md
You receive events from a partner in a [partner topic](concepts.md#partner-topic
3. After the partner creates a partner topic in your Azure subscription and resource group, [activate](subscribe-to-partner-events.md#activate-a-partner-topic) your partner topic. 4. [Subscribe to events](subscribe-to-partner-events.md#subscribe-to-events) by creating one or more event subscriptions on the partner topic.
+ :::image type="content" source="./media/partner-events-overview/receive-events-from-partner.svg" alt-text="Diagram showing the steps to receive events from a partner.":::
-> [!NOTE]
-> You must [register the Azure Event Grid resource provider](subscribe-to-partner-events.md#register-the-event-grid-resource-provider) with every Azure subscription where you want create Event Grid resources. Otherwise, operations to create resources will fail.
+ > [!NOTE]
+ > You must [register the Azure Event Grid resource provider](subscribe-to-partner-events.md#register-the-event-grid-resource-provider) with every Azure subscription where you want to create Event Grid resources. Otherwise, operations to create resources will fail.
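For example, you can register the provider once per subscription with the Azure CLI. The commands below are a quick illustration:

```azurecli
# Register the Event Grid resource provider in the current subscription.
az provider register --namespace Microsoft.EventGrid

# Confirm that registration has completed.
az provider show --namespace Microsoft.EventGrid --query registrationState --output tsv
```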
## Why should I use Partner Events?
event-grid Subscribe To Graph Api Events https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/subscribe-to-graph-api-events.md
Title: Azure Event Grid - Subscribe to Microsoft Graph API events
+ Title: Receive Microsoft Graph change notifications through Azure Event Grid (preview)
description: This article explains how to subscribe to events published by Microsoft Graph API. Last updated 09/01/2022
-# Subscribe to events published by Microsoft Graph API
+# Receive Microsoft Graph change notifications through Azure Event Grid (preview)
+ This article describes steps to subscribe to events published by Microsoft Graph API. The following table lists the resources for which events are available through Graph API. For every resource, events for create, update and delete state changes are supported. > [!IMPORTANT]
This article describes steps to subscribe to events published by Microsoft Graph
## Why should I use Microsoft Graph API as a destination?
-Besides the ability to subscribe to Microsoft Graph API events via Event Grid, you have [other options](/graph/change-notifications-delivery) through which you can receive similar notifications (not events). Consider using Microsoft Graph API to deliver events to Event Grid if you have at least one of the following requirements:
+Besides the ability to subscribe to Microsoft Graph API events via Event Grid, you have [other options](/graph/webhooks#receiving-change-notifications) through which you can receive similar notifications (not events). Consider using Microsoft Graph API to deliver events to Event Grid if you have at least one of the following requirements:
- You're developing an event-driven solution that requires events from Azure Active Directory, Outlook, Teams, etc. to react to resource changes. You require the robust eventing model and publish-subscribe capabilities that Event Grid provides. For an overview of Event Grid, see [Event Grid concepts](concepts.md). - You want to use Event Grid to route events to multiple destinations using a single Graph API subscription and you want to avoid managing multiple Graph API subscriptions.
Here are some of the key headers and payload properties:
When you create a Graph API subscription with a `notificationUrl` bound to Event Grid, a partner topic is created in your Azure subscription. For that partner topic, you [configure event subscriptions](event-filtering.md) to send your events to any of the supported [event handlers](event-handlers.md) that best meets your requirements to process the events.
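As a rough sketch, a Graph API subscription that routes change notifications to Event Grid is created with the standard `POST /subscriptions` call. The `EventGrid:` URI parameters shown for `notificationUrl`, and the sample resource and dates, are assumptions for illustration; confirm the exact format in the Microsoft Graph change notifications documentation:

```http
POST https://graph.microsoft.com/v1.0/subscriptions
Content-Type: application/json

{
  "changeType": "updated,deleted",
  "notificationUrl": "EventGrid:?azuresubscriptionid=<AZURE-SUBSCRIPTION-ID>&resourcegroup=<RESOURCE-GROUP>&partnertopic=<PARTNER-TOPIC-NAME>&location=<REGION>",
  "resource": "users",
  "expirationDateTime": "2023-05-01T00:00:00Z",
  "clientState": "<OPTIONAL-SECRET>"
}
```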
-#### Microsoft Graph API Explorer
-For quick tests and to get to know the API, you could use the [Microsoft Graph API explorer](/graph/graph-explorer/graph-explorer-features). For anything else beyond casuals tests or learning, you should use the Graph SDKs.
+#### Test APIs using Graph Explorer
+For quick tests and to get to know the API, you could use the [Graph Explorer](/graph/graph-explorer/graph-explorer-features). For anything beyond casual tests or learning, you should use the Microsoft Graph SDKs.
[!INCLUDE [activate-partner-topic](includes/activate-partner-topic.md)]
event-grid Webhook Event Delivery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/webhook-event-delivery.md
And, follow one of these steps:
For an example of handling the subscription validation handshake, see a [C# sample](https://github.com/Azure-Samples/event-grid-dotnet-publish-consume-events/blob/master/EventGridConsumer/EventGridConsumer/Function1.cs). ## Endpoint validation with CloudEvents v1.0
-CloudEvents v1.0 implements its own abuse protection semantics using the **HTTP OPTIONS** method. You can read more about it [here](https://github.com/cloudevents/spec/blob/v1.0/http-webhook.md#4-abuse-protection). When you use the CloudEvents schema for output, Event Grid uses with the CloudEvents v1.0 abuse protection in place of the Event Grid validation event mechanism.
+CloudEvents v1.0 implements its own abuse protection semantics using the **HTTP OPTIONS** method. You can read more about it [here](https://github.com/cloudevents/spec/blob/v1.0/http-webhook.md#4-abuse-protection). When you use the CloudEvents schema for output, Event Grid uses the CloudEvents v1.0 abuse protection in place of the Event Grid validation event mechanism.
## Event schema compatibility When a topic is created, an incoming event schema is defined. And, when a subscription is created, an outgoing event schema is defined. The following table shows you the compatibility allowed when creating a subscription.
expressroute Expressroute Howto Ipsec Transport Private Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-howto-ipsec-transport-private-windows.md
The IPsec policy requires all HTTP connections on the destination port 8080 to u
[![49]][49] 2. To assign the security group policy to the OU **IPSecOU**, right-click the security policy and choose **Assign**.
- Every computer tht belongs to the OU will have the security group policy assigned.
+ Every computer that belongs to the OU will have the security group policy assigned.
[![50]][50]
For more information about ExpressRoute, see the [ExpressRoute FAQ](expressroute
[48]: ./media/expressroute-howto-ipsec-transport-private-windows/security-policy-completed.png "end of process of creation of the security policy" [49]: ./media/expressroute-howto-ipsec-transport-private-windows/gpo-not-assigned.png "IPsec policy linked to the GPO but not assigned" [50]: ./media/expressroute-howto-ipsec-transport-private-windows/gpo-assigned.png "IPsec policy assigned to the GPO"
-[51]: ./media/expressroute-howto-ipsec-transport-private-windows/encrypted-traffic.png "Capture of IPsec encrypted traffic"
+[51]: ./media/expressroute-howto-ipsec-transport-private-windows/encrypted-traffic.png "Capture of IPsec encrypted traffic"
expressroute Expressroute Locations Providers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-locations-providers.md
The following table shows connectivity locations and the service providers for e
| **Dallas** | [Equinix DA3](https://www.equinix.com/locations/americas-colocation/united-states-colocation/dallas-data-centers/da3/) | 1 | n/a | Supported | Aryaka Networks, AT&T NetBond, Cologix, Cox Business Cloud Port, Equinix, Intercloud, Internet2, Level 3 Communications, Megaport, Neutrona Networks, Orange, PacketFabric, Telmex Uninet, Telia Carrier, Transtelco, Verizon, Zayo| | **Denver** | [CoreSite DE1](https://www.coresite.com/data-centers/locations/denver/de1) | 1 | West Central US | Supported | CoreSite, Megaport, PacketFabric, Zayo | | **Doha** | [MEEZA MV2](https://www.meeza.net/services/data-centre-services/) | 3 | Qatar Central | Supported | Ooredoo Cloud Connect, Vodafone |
-| **Doha2** | [Ooredoo](https://www.ooredoo.qa/portal/OoredooQatar/b2b-data-centre) | 3 | Qatar Central | Supported | Ooredoo Cloud Connect |
+| **Doha2** | [Ooredoo](https://www.ooredoo.qa/) | 3 | Qatar Central | Supported | Ooredoo Cloud Connect |
| **Dubai** | [PCCS](http://www.pacificcontrols.net/cloudservices/) | 3 | UAE North | Supported | Etisalat UAE | | **Dubai2** | [du datamena](http://datamena.com/solutions/data-centre) | 3 | UAE North | n/a | DE-CIX, du datamena, Equinix, GBI, Megaport, Orange, Orixcom | | **Dublin** | [Equinix DB3](https://www.equinix.com/locations/europe-colocation/ireland-colocation/dublin-data-centers/db3/) | 1 | North Europe | Supported | CenturyLink Cloud Connect, Colt, eir, Equinix, GEANT, euNetworks, Interxion, Megaport, Zayo|
If you're remote and don't have fiber connectivity or want to explore other conn
* Intelsat * [SES](https://www.ses.com/networks/signature-solutions/signature-cloud/ses-and-azure-expressroute)
-* [Viasat](http://www.directcloud.viasatbusiness.com/)
+* [Viasat](https://news.viasat.com/newsroom/press-releases/viasat-introduces-direct-cloud-connect-a-new-service-providing-fast-secure-private-connections-to-business-critical-cloud-services)
| Location | Exchange | Connectivity providers | | | | |
expressroute Expressroute Locations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-locations.md
The following table shows locations by service provider. If you want to view ava
| **UOLDIVEO** |Supported |Supported | Sao Paulo | | **[UIH](https://www.uih.co.th/en/network-solutions/global-network/cloud-direct-for-microsoft-azure-expressroute)** | Supported | Supported | Bangkok | | **[Verizon](https://enterprise.verizon.com/products/network/application-enablement/secure-cloud-interconnect/)** |Supported |Supported | Amsterdam, Chicago, Dallas, Hong Kong SAR, London, Mumbai, Silicon Valley, Singapore, Sydney, Tokyo, Toronto, Washington DC |
-| **[Viasat](http://www.directcloud.viasatbusiness.com/)** | Supported | Supported | Washington DC2 |
+| **[Viasat](https://news.viasat.com/newsroom/press-releases/viasat-introduces-direct-cloud-connect-a-new-service-providing-fast-secure-private-connections-to-business-critical-cloud-services)** | Supported | Supported | Washington DC2 |
| **[Vocus Group NZ](https://www.vocus.co.nz/business/cloud-data-centres)** | Supported | Supported | Auckland, Sydney | | **Vodacom** |Supported |Supported | Cape Town, Johannesburg| | **[Vodafone](https://www.vodafone.com/business/global-enterprise/global-connectivity/vodafone-ip-vpn-cloud-connect)** |Supported |Supported | Amsterdam2, Doha, London, Milan, Singapore |
If you're remote and don't have fiber connectivity, or you want to explore other
* Intelsat * [SES](https://www.ses.com/networks/signature-solutions/signature-cloud/ses-and-azure-expressroute)
-* [Viasat](http://www.directcloud.viasatbusiness.com/)
+* [Viasat](https://news.viasat.com/newsroom/press-releases/viasat-introduces-direct-cloud-connect-a-new-service-providing-fast-secure-private-connections-to-business-critical-cloud-services)
## Connectivity through additional service providers
If you're remote and don't have fiber connectivity, or you want to explore other
| **[Flexential](https://www.flexential.com/connectivity/cloud-connect-microsoft-azure-expressroute)** | IX Reach, Megaport, PacketFabric | | **[QTS Data Centers](https://www.qtsdatacenters.com/hybrid-solutions/connectivity/azure-cloud )** | Megaport, PacketFabric | | **[Stream Data Centers](https://www.streamdatacenters.com/products-services/network-cloud/)** | Megaport |
-| **[RagingWire Data Centers](https://www.ragingwire.com/wholesale/wholesale-data-centers-worldwide-nexcenters)** | IX Reach, Megaport, PacketFabric |
+| **RagingWire Data Centers** | IX Reach, Megaport, PacketFabric |
| **[T5 Datacenters](https://t5datacenters.com/)** | IX Reach | | **vXchnge** | IX Reach, Megaport |
expressroute Expressroute Monitoring Metrics Alerts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-monitoring-metrics-alerts.md
Metrics explorer supports SUM, MAX, MIN, AVG and COUNT as [aggregation types](..
| [Count of routes advertised to peer](#advertisedroutes) | Availability | Count | Maximum | Count Of Routes Advertised To Peer by ExpressRouteGateway | roleInstance | Yes | | [Count of routes learned from peer](#learnedroutes)| Availability | Count | Maximum | Count Of Routes Learned From Peer by ExpressRouteGateway | roleInstance | Yes | | [Frequency of routes changed](#frequency) | Availability | Count | Total | Frequency of Routes change in ExpressRoute Gateway | roleInstance | Yes |
-| [Number of VMs in virtual network](#vm) | Availability | Count | Maximum | Number of VMs in the Virtual Network | No Dimensions | Yes |
+| [Number of VMs in virtual network](#vm) | Availability | Count | Maximum | Number of VMs in the Virtual Network | No Dimensions | Yes |
+| [Active flows](#activeflows) | Scalability | Count | Average | Number of active flows on ExpressRoute Gateway | roleInstance | Yes |
+| [Max flows created per second](#maxflows) | Scalability | FlowsPerSecond | Maximum | Maximum number of flows created per second on ExpressRoute Gateway | roleInstance, direction | Yes |
### ExpressRoute Gateway connections
When you deploy an ExpressRoute gateway, Azure manages the compute and functions
* Count of routes advertised to peers * Count of routes learned from peers * Frequency of routes changed
-* Number of VMs in the virtual network
+* Number of VMs in the virtual network
+* Count of active flows
+* Max flows created per second
It's highly recommended you set alerts for each of these metrics so that you're aware of when your gateway could be seeing performance issues.
This metric shows the number of virtual machines that are using the ExpressRoute
> To maintain reliability of the service, Microsoft often performs platform or OS maintenance on the gateway service. During this time, this metric may fluctuate and report inaccurately. >
+## <a name = "activeflows"></a>Active flows
+
+Aggregation type: *Avg*
+
+Split by: Gateway Instance
++
+This metric displays a count of the total number of active flows on the ExpressRoute Gateway. By splitting at the instance level, you can see the active flow count per gateway instance. For more information, see [understand network flow limits](../virtual-network/virtual-machine-network-throughput.md#network-flow-limits).
++
+## <a name = "maxflows"></a>Max flows created per second
+
+Aggregation type: *Max*
+
+Split by: Gateway Instance and Direction (Inbound/Outbound)
+
+This metric displays the maximum number of flows created per second on the ExpressRoute Gateway. By splitting at the instance level and by direction, you can see the maximum flow creation rate per gateway instance and per inbound/outbound direction, respectively. For more information, see [understand network flow limits](../virtual-network/virtual-machine-network-throughput.md#network-flow-limits).
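To explore these metrics programmatically, you can list the gateway's metric definitions and then query the metric you need with the Azure CLI. This is a generic sketch; take the exact metric name from the `list-definitions` output rather than from this example:

```azurecli
# Discover the exact metric names exposed by the ExpressRoute gateway.
az monitor metrics list-definitions --resource <EXPRESSROUTE-GATEWAY-RESOURCE-ID> --output table

# Query a flow metric, splitting the results by gateway instance.
az monitor metrics list \
  --resource <EXPRESSROUTE-GATEWAY-RESOURCE-ID> \
  --metric "<FLOW-METRIC-NAME>" \
  --aggregation Maximum \
  --filter "roleInstance eq '*'"
```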
++ ## <a name = "connectionbandwidth"></a>ExpressRoute gateway connections in bits/seconds Aggregation type: *Avg*
governance Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/management-groups/overview.md
Title: Organize your resources with management groups - Azure Governance description: Learn about the management groups, how their permissions work, and how to use them. Previously updated : 01/24/2023 Last updated : 04/20/2023
details on moving items within the hierarchy.
## Azure custom role definition and assignment
-Azure custom role support for management groups is currently in preview with some
-[limitations](#limitations). You can define the management group scope in the Role Definition's
-assignable scope. That Azure custom role will then be available for assignment on that management
-group and any management group, subscription, resource group, or resource under it. This custom role
-will inherit down the hierarchy like any built-in role.
+You can define a management group as an assignable scope in an Azure custom role definition.
+The Azure custom role will then be available for assignment on that management
+group and any management group, subscription, resource group, or resource under it. The custom role
+will inherit down the hierarchy like any built-in role. For information about the limitations with custom roles and management groups, see [Limitations](#limitations).
### Example definition
There are limitations that exist when using custom roles on management groups.
definition's assignable scope. If there's a typo or an incorrect management group ID listed, the role definition is still created.
-> [!IMPORTANT]
-> Adding a management group to `AssignableScopes` is currently in preview. This preview version is
-> provided without a service-level agreement, and it's not recommended for production workloads.
-> Certain features might not be supported or might have constrained capabilities. For more
-> information, see
-> [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
- ## Moving management groups and subscriptions To move a management group or subscription to be a child of another management group, three rules
governance Policy Safe Deployment Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/how-to/policy-safe-deployment-practices.md
+
+ Title: Safe deployment of Azure Policy assignments
+description: Learn how to apply the safe deployment practices (SDP) framework to your Azure Policy assignments.
Last updated : 04/21/2023+++
+# Safe deployment of Azure Policy assignments
+
+As your environment expands, so does the demand for a controlled continuous deployment (CD)
+pipeline with progressive exposure control. Accordingly, Microsoft recommends DevOps teams follow
+the safe deployment practices (SDP) framework. The
+safe deployment of Azure Policy definitions and assignments helps limit the impact of
+unintended behaviors of policy resources.
+
+The high-level approach of implementing SDP with Azure Policy is to roll out policy assignments
+by rings to detect policy changes that affect the environment in early stages, before they
+affect critical cloud infrastructure.
+
+Deployment rings can be organized in diverse ways. In this how-to tutorial, rings are divided by
+different Azure regions with _Ring 0_ representing non-critical, low traffic locations
+and _Ring 5_ denoting the most critical, highest traffic locations.
+
+## Steps for safe deployment of Azure Policy assignments with deny or append effects
+
+Use the following flowchart as a reference as we work through how to apply the SDP framework to Azure
+Policy assignments that use the `deny` or `append` policy effects.
+
+> [!NOTE]
+> To learn more about Azure policy effects, see [Understand how effects work](../concepts/effects.md).
++
+Flowchart step numbers:
+
+1. Begin the release by creating a policy definition at the highest designated Azure management scope. We recommend storing Azure Policy definitions at the management group scope for maximum flexibility.
+
+2. Once you've created your policy definition, assign the policy at the highest-level scope inclusive
+of all deployment rings. Apply _resource selectors_ to narrow the applicability to the least
+critical ring by using the `"kind": "resourceLocation"` property. Configure the `audit` effect type
+by using _assignment overrides_. A sample selector with the `eastUS` location and the effect set to `audit`:
+
+ ```json
+ "resourceSelectors": [{
+ "name": "SDPRegions",
+ "selectors": [{
+ "kind": "resourceLocation",
+ "in": [ "eastUS" ]
+ }]
+ }],
+ "overrides":[{
+ "kind": "policyEffect",
+ "value": "Audit"
+ }]
+ ```
+
+3. Once the assignment is deployed and the initial compliance scan has completed,
+validate that the compliance result is as expected.
+
+ You should also configure automated tests that run compliance checks. A compliance check should
+ encompass the following logic:
+
+ - Gather compliance results
+ - If compliance results are as expected, the pipeline should continue
+ - If compliance results aren't as expected, the pipeline should fail and you should start debugging
+
+ For example, you can configure the compliance check by using other tools within
+ your particular continuous integration/continuous deployment (CI/CD) pipeline. An example Azure CLI compliance gate is shown after these steps.
+
+ At each rollout stage, the application health checks should confirm the stability of the service
+ and impact of the policy. If the results aren't as expected due to application configuration,
+ refactor the application as appropriate.
+
+4. Repeat by expanding the resource selector property values to include the next rings'
+locations and validating the expected compliance results and application health. Example selector with an added location value:
+
+ ```json
+ "resourceSelectors": [{
+ "name": "SDPRegions",
+ "selectors": [{
+ "kind": "resourceLocation",
+ "in": [ "eastUS", "westUS"]
+ }]
+ }]
+ ```
+
+5. Once you have successfully assigned the policy to all rings using `audit` mode,
+the pipeline should trigger a task that changes the policy effect to `deny` and resets
+the resource selectors to the location associated with _Ring 0_. An example selector with one region and the effect set to `deny`:
+
+ ```json
+ "resourceSelectors": [{
+ "name": "SDPRegions",
+ "selectors": [{
+ "kind": "resourceLocation",
+ "in": [ "eastUS" ]
+ }]
+ }],
+ "overrides":[{
+ "kind": "policyEffect",
+ "value": "Deny"
+ }]
+ ```
+
+6. Once the effect is changed, automated tests should check whether enforcement is taking place as
+expected.
+
+7. Repeat by including more rings in your resource selector configuration.
+
+8. Repeat this process for all production rings.
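Here's one way to script the compliance gate described in step 3 by using the Azure CLI. The assignment name is a placeholder, and the pass/fail logic is only a sketch to adapt to your pipeline:

```bash
# Summarize compliance for the assignment once the evaluation has completed.
az policy state summarize --policy-assignment "<ASSIGNMENT-NAME>"

# Fail the pipeline if the current ring reports any non-compliant resources.
nonCompliant=$(az policy state list \
  --policy-assignment "<ASSIGNMENT-NAME>" \
  --filter "complianceState eq 'NonCompliant'" \
  --query "length(@)" --output tsv)
if [ "$nonCompliant" -ne 0 ]; then
  echo "Found $nonCompliant non-compliant resources; stopping the rollout."
  exit 1
fi
```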
+
+## Steps for safe deployment of Azure Policy assignments with modify or deployIfNotExists effects
+
+Steps 1-4 for policies using the `modify` or `deployIfNotExists` effects are the same as steps previously explained.
+Review the following flowchart with modified steps 5-9:
++
+Flowchart step numbers:
+
+5. Once you've assigned the policy to all rings using `audit` mode, the pipeline should trigger
+a task that changes the policy effect to `modify` or `deployIfNotExists` and resets
+the resource selectors to _Ring 0_.
+
+6. Automated tests should then check whether the enforcement works as expected.
+
+7. The pipeline should trigger a remediation task that corrects existing resources in that given ring. An example remediation command is shown after the note that follows these steps.
+
+8. After the remediation task is complete, automated tests should verify the remediation works
+as expected using compliance and application health checks.
+
+9. Repeat by including more locations in your resource selector configuration. Then repeat all for production rings.
+
+> [!NOTE]
+> For more information on Azure policy remediation tasks, read [Remediate non-compliant resources with Azure Policy](./remediate-resources.md).
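For example, the remediation task in step 7 could be triggered from the pipeline with the Azure CLI. The names, scope, and location filter below are placeholders for illustration:

```azurecli
az policy remediation create \
  --name "sdp-ring0-remediation" \
  --policy-assignment "<ASSIGNMENT-NAME>" \
  --resource-group "<RESOURCE-GROUP>" \
  --location-filters eastus
```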
+
+## Next steps
+
+- Learn how to [programmatically create policies](./programmatically-create.md).
+- Review [Azure Policy as code workflows](../concepts/policy-as-code.md).
+- Study Microsoft's guidance concerning [safe deployment practices](/devops/operate/safe-deployment-practices).
iot-central Overview Iot Central Solution Builder https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/overview-iot-central-solution-builder.md
You can use the data export and rules capabilities in IoT Central to integrate w
- [Extend Azure IoT Central with custom rules using Stream Analytics, Azure Functions, and SendGrid](howto-create-custom-rules.md) - [Extend Azure IoT Central with custom analytics using Azure Databricks](howto-create-custom-analytics.md)
-You can use IoT Edge devices connected to your IoT Central application to integrate with [Azure Video Analyzer](../../azure-video-analyzer/video-analyzer-docs/overview.md).
+You can use IoT Edge devices connected to your IoT Central application to integrate with [Azure Video Analyzer](/previous-versions/azure/azure-video-analyzer/video-analyzer-docs/articles/azure-video-analyzer/video-analyzer-docs/overview).
## Integrate with companion applications
iot-edge Configure Device https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/configure-device.md
Title: Configure Azure IoT Edge device settings
description: This article shows you how to configure Azure IoT Edge device settings and options using the config.toml file. Previously updated : 3/6/2023 Last updated : 04/20/2023
This article shows settings and options for configuring the IoT Edge *config.toml* file of an IoT Edge device. IoT Edge uses the *config.toml* file to initialize settings for the device. Each of the sections of the *config.toml* file has several options. Not all options are mandatory, as they apply to specific scenarios.
-A template containing all options can be found in the *config.toml.edge.template* file within the */etc/aziot* directory on an IoT Edge device. You have the option to copy the contents of the whole template or sections of the template into your *config.toml* file. Uncomment the sections you need. Be aware not to copy over parameters you have already defined.
+A template containing all options can be found in the *config.toml.edge.template* file within the */etc/aziot* directory on an IoT Edge device. You can copy the contents of the whole template or sections of the template into your *config.toml* file. Uncomment the sections you need. Take care not to copy over parameters you've already defined.
## Global parameters
-The `hostname`, `parent_hostname`, `trust_bundle_cert`, and `allow_elevated_docker_permissions` parameters must be at the beginning of the configuration file before any other sections. Adding parameters before defined sections ensures they're applied correctly. For more information on valid syntax, see [toml.io ](https://toml.io/).
+The **hostname**, **parent_hostname**, **trust_bundle_cert**, **allow_elevated_docker_permissions**, and **auto_reprovisioning_mode** parameters must be at the beginning of the configuration file before any other sections. Adding these parameters before the first section ensures they're applied correctly. For more information on valid syntax, see [toml.io](https://toml.io/).
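For example, a minimal sketch of the top of a *config.toml* file (the values shown are illustrative):

```toml
# Global parameters must appear before any [section] headers.
hostname = "my-edge-device"
trust_bundle_cert = "file:///var/aziot/certs/trust-bundle.pem"
allow_elevated_docker_permissions = false
auto_reprovisioning_mode = "Dynamic"

[provisioning]
# Sections such as provisioning follow the global parameters.
```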
### Hostname
For more information about the IoT Edge trust bundle, see [Manage trusted root C
### Elevated Docker Permissions
-Some docker capabilities can be used to gain root access. By default, the **--privileged** flag and all capabilities listed in the **CapAdd** parameter of the docker **HostConfig** are allowed.
+Some docker capabilities can be used to gain root access. By default, the `--privileged` flag and all capabilities listed in the **CapAdd** parameter of the docker **HostConfig** are allowed.
If no modules require privileged or extra capabilities, use **allow_elevated_docker_permissions** to improve the security of the device.
If no modules require privileged or extra capabilities, use **allow_elevated_doc
allow_elevated_docker_permissions = false ```
+### Auto reprovisioning mode
+
+The optional **auto_reprovisioning_mode** parameter specifies the conditions that decide when a device attempts to automatically reprovision with Device Provisioning Service. Auto reprovisioning mode is ignored if the device has been provisioned manually. For more information about setting the DPS provisioning mode, see the [Provisioning](#provisioning) section of this article.
+
+One of the following values can be set:
+
+| Mode | Description |
+||-|
+| Dynamic | Reprovision when the device detects that it may have been moved from one IoT Hub to another. This mode is *the default*. |
+| AlwaysOnStartup | Reprovision when the device is rebooted or a crash causes the daemons to restart. |
+| OnErrorOnly | Never trigger device reprovisioning automatically. Device reprovisioning only occurs as a fallback if the device is unable to connect to IoT Hub during identity provisioning due to connectivity errors. This fallback behavior is implicit in Dynamic and AlwaysOnStartup modes as well. |
+
+For example:
+
+```toml
+auto_reprovisioning_mode = "Dynamic"
+```
+
+For more information about device reprovisioning, see [IoT Hub Device reprovisioning concepts](../iot-dps/concepts-device-reprovision.md).
+ ## Provisioning You can provision a single device or multiple devices at-scale, depending on the needs of your IoT Edge solution. The options available for authenticating communications between your IoT Edge devices and your IoT hubs depend on what provisioning method you choose.
cloud_timeout_sec = 10
cloud_retries = 1 ```
-### Optional auto reprovisioning mode
-
-The **auto_reprovisioning_mode** parameter specifies the conditions that decide when a device attempts to automatically reprovision with Device Provisioning Service. It's ignored if the device has been provisioned manually. One of the following values can be set:
-
-| Mode | Description |
-||-|
-| Dynamic | Reprovision when the device detects that it may have been moved from one IoT Hub to another. This mode is *the default*. |
-| AlwaysOnStartup | Reprovision when the device is rebooted or a crash causes the daemons to restart. |
-| OnErrorOnly | Never trigger device reprovisioning automatically. Device reprovisioning only occurs as fallback, if the device is unable to connect to IoT Hub during identity provisioning due to connectivity errors. This fallback behavior is implicit in Dynamic and AlwaysOnStartup modes as well. |
-
-For example:
-
-```toml
-auto_reprovisioning_mode = Dynamic
-```
-
-For more information about device reprovisioning, see [IoT Hub Device reprovisioning concepts](../iot-dps/concepts-device-reprovision.md).
- ## Certificate issuance If you configured any dynamically issued certs, choose your corresponding issuance method and replace the sample values with your own.
identity_pk = "pkcs11:slot-id=0;object=est-id?pin-value=1234" # PKCS#11 URI
### EST ID cert requested via EST bootstrap ID cert
-Authentication with a TLS client certificate which are used once to create the initial EST ID certificate. After the first certificate issuance, an `identity_cert` and `identity_pk` are automatically created and used for future authentication and renewals. The Subject Common Name (CN) of the generated EST ID certificate is always the same as the configured device ID under the provisioning section. These files must be readable by the users aziotcs and aziotks, respectively.
+Authentication with a TLS client certificate that is used once to create the initial EST ID certificate. After the first certificate issuance, an `identity_cert` and `identity_pk` are automatically created and used for future authentication and renewals. The Subject Common Name (CN) of the generated EST ID certificate is always the same as the configured device ID under the provisioning section. These files must be readable by the users *aziotcs* and *aziotks*, respectively.
```toml bootstrap_identity_cert = "file:///var/aziot/certs/est-bootstrap-id.pem"
The TPM index persists the DPS authentication key. The index is taken as an offs
auth_key_index = "0x00_01_00" ```
-Use authorization values for endorsement and owner hierarchies, if needed. By default, these are empty strings.
+Use authorization values for endorsement and owner hierarchies, if needed. By default, these values are empty strings.
```toml [tpm.hierarchy_authorization]
iot-edge How To Connect Downstream Iot Edge Device https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-connect-downstream-iot-edge-device.md
To enable secure connections, every IoT Edge parent device in a gateway scenario
```bash # Update the certificate store
- # For Ubuntu and Debian, use update-ca-certificates command
+ # For Ubuntu or Debian - use update-ca-certificates
sudo update-ca-certificates
- # For EFLOW, use update-ca-trust
+ # For EFLOW or RHEL - use update-ca-trust
sudo update-ca-trust ```
To enable secure connections, every IoT Edge downstream device in a gateway scen
```bash # Update the certificate store
- # For Ubuntu and Debian, use update-ca-certificates command
+ # For Ubuntu or Debian - use update-ca-certificates
sudo update-ca-certificates
- # For EFLOW, use update-ca-trust
+ # For EFLOW or RHEL - use update-ca-trust
sudo update-ca-trust ```
iot-edge How To Manage Device Certificates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-manage-device-certificates.md
Using a self-signed certificate authority (CA) certificate as a root of trust wi
Installing the certificate to the trust bundle file makes it available to container modules but not to host modules like Azure Device Update or Defender. If you use host level components or run into other TLS issues, also install the root CA certificate to the operating system certificate store:
-# [Linux](#tab/linux)
+# [Debian / Ubuntu](#tab/ubuntu)
```bash sudo cp /var/aziot/certs/my-root-ca.pem /usr/local/share/ca-certificates/my-root-ca.pem.crt
- sudo update-ca-trust
+ sudo update-ca-certificates
```
-# [IoT Edge for Linux on Windows (EFLOW)](#tab/windows)
+# [EFLOW / RHEL](#tab/windows)
```bash sudo cp /var/aziot/certs/my-root-ca.pem /etc/pki/ca-trust/source/anchors/my-root-ca.pem.crt
key-vault About Keys Secrets Certificates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/general/about-keys-secrets-certificates.md
tags: azure-resource-manager
Previously updated : 12/12/2022 Last updated : 04/18/2023 # Azure Key Vault keys, secrets and certificates overview
-Azure Key Vault enables Microsoft Azure applications and users to store and use several types of secret/key data. Key Vault resource provider supports two resource types: vaults and managed HSMs.
+Azure Key Vault enables Microsoft Azure applications and users to store and use several types of secret/key data: keys, secrets, and certificates. Keys, secrets, and certificates are collectively referred to as "objects".
-## DNS suffixes for base URL
- This table shows the base URL DNS suffix used by the data-plane endpoint for vaults and managed HSM pools in various cloud environments.
+## Object identifiers
+Objects are uniquely identified within Key Vault using a case-insensitive identifier called the object identifier. No two objects in the system have the same identifier, regardless of geo-location. The identifier consists of a prefix that identifies the key vault, object type, user provided object name, and an object version. Identifiers that don't include the object version are referred to as "base identifiers". Key Vault object identifiers are also valid URLs, but should always be compared as case-insensitive strings.
+
+For more information, see [Authentication, requests, and responses](authentication-requests-and-responses.md)
+
+An object identifier has the following general format (depending on container type):
+
+- **For Vaults**:
+`https://{vault-name}.vault.azure.net/{object-type}/{object-name}/{object-version}`
+
+- **For Managed HSM pools**:
+`https://{hsm-name}.managedhsm.azure.net/{object-type}/{object-name}/{object-version}`
+
+> [!NOTE]
+> See [Object type support](#object-types) for types of objects supported by each container type.
+
+Where:
+
+| Element | Description |
+|-|-|
+| `vault-name` or `hsm-name` | The name for a key vault or a Managed HSM pool in the Microsoft Azure Key Vault service.<br /><br />Vault names and Managed HSM pool names are selected by the user and are globally unique.<br /><br />Vault name and Managed HSM pool name must be a 3-24 character string, containing only 0-9, a-z, A-Z, and not consecutive -.|
+| `object-type` | The type of the object, "keys", "secrets", or "certificates".|
+| `object-name` | An `object-name` is a user-provided name for the object and must be unique within a key vault. The name must be a 1-127 character string, starting with a letter and containing only 0-9, a-z, A-Z, and -.|
+| `object-version `| An `object-version` is a system-generated, 32 character string identifier that is optionally used to address a unique version of an object. |
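
For example, a hypothetical secret named `db-password` in a vault named `contoso-vault2` (both names are made up, and the version string is a sample) would have identifiers similar to the following:

```
# Versioned object identifier
https://contoso-vault2.vault.azure.net/secrets/db-password/0123456789abcdef0123456789abcdef

# Base identifier (addresses the latest version)
https://contoso-vault2.vault.azure.net/secrets/db-password
```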
+
+## DNS suffixes for object identifiers
+The Azure Key Vault resource provider supports two resource types: vaults and managed HSMs. This table shows the DNS suffix used by the data-plane endpoint for vaults and managed HSM pools in various cloud environments.
Cloud environment | DNS suffix for vaults | DNS suffix for managed HSMs ||
Azure US Government | .vault.usgovcloudapi.net | Not supported
Azure German Cloud | .vault.microsoftazure.de | Not supported ## Object types
- This table shows object types and their suffixes in the base URL.
+ This table shows object types and their suffixes in the object identifier.
-Object type|URL Suffix|Vaults|Managed HSM Pools
+Object type|Identifier Suffix|Vaults|Managed HSM Pools
--|--|--|-- **Cryptographic keys**|| HSM-protected keys|/keys|Supported|Supported
Refer to the JOSE specifications for relevant data types for keys, encryption, a
## Objects, identifiers, and versioning
-Objects stored in Key Vault are versioned whenever a new instance of an object is created. Each version is assigned a unique identifier and URL. When an object is first created, it's given a unique version identifier and marked as the current version of the object. Creation of a new instance with the same object name gives the new object a unique version identifier, causing it to become the current version.
+Objects stored in Key Vault are versioned whenever a new instance of an object is created. Each version is assigned a unique object identifier. When an object is first created, it's given a unique version identifier and marked as the current version of the object. Creation of a new instance with the same object name gives the new object a unique version identifier, causing it to become the current version.
Objects in Key Vault can be retrieved by specifying a version or by omitting version to get latest version of the object. Performing operations on objects requires providing version to use specific version of the object. > [!NOTE] > The values you provide for Azure resources or object IDs may be copied globally for the purpose of running the service. The value provided should not include personally identifiable or sensitive information.
-### Vault-name and Object-name
-Objects are uniquely identified within Key Vault using a URL. No two objects in the system have the same URL, regardless of geo-location. The complete URL to an object is called the Object Identifier. The URL consists of a prefix that identifies the Key Vault, object type, user provided Object Name, and an Object Version. The Object Name is case-insensitive and immutable. Identifiers that don't include the Object Version are referred to as Base Identifiers.
-
-For more information, see [Authentication, requests, and responses](authentication-requests-and-responses.md)
-
-An object identifier has the following general format (depending on container type):
--- **For Vaults**:
-`https://{vault-name}.vault.azure.net/{object-type}/{object-name}/{object-version}`
--- **For Managed HSM pools**:
-`https://{hsm-name}.managedhsm.azure.net/{object-type}/{object-name}/{object-version}`
-
-> [!NOTE]
-> See [Object type support](#object-types) for types of objects supported by each container type.
-
-Where:
-
-| Element | Description |
-|-|-|
-|`vault-name` or `hsm-name`|The name for a vault or a Managed HSM pool in the Microsoft Azure Key Vault service.<br /><br />Vault names and Managed HSM pool names are selected by the user and are globally unique.<br /><br />Vault name and Managed HSM pool name must be a 3-24 character string, containing only 0-9, a-z, A-Z, and not consecutive -.|
-|`object-type`|The type of the object, "keys", "secrets", or 'certificates'.|
-|`object-name`|An `object-name` is a user provided name for and must be unique within a Key Vault. The name must be a 1-127 character string, starting with a letter and containing only 0-9, a-z, A-Z, and -.|
-|`object-version`|An `object-version` is a system-generated, 32 character string identifier that is optionally used to address a unique version of an object.|
- ## Next steps - [About keys](../keys/about-keys.md)
key-vault Overview Vnet Service Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/general/overview-vnet-service-endpoints.md
Here's a list of trusted services that are allowed to access a key vault if the
| Azure Import/Export| [Use customer-managed keys in Azure Key Vault for Import/Export service](../../import-export/storage-import-export-encryption-key-portal.md) | Azure Information Protection|Allow access to tenant key for [Azure Information Protection.](/azure/information-protection/what-is-information-protection)| | Azure Machine Learning|[Secure Azure Machine Learning in a virtual network](../../machine-learning/how-to-secure-workspace-vnet.md)|
-| Azure NetApps | [Configure customer-managed keys for Azure NetApp Files volume encryption](../../azure-netapp-files/configure-customer-managed-keys.md)
| Azure Resource Manager template deployment service|[Pass secure values during deployment](../../azure-resource-manager/templates/key-vault-parameter.md).| | Azure Service Bus|[Allow access to a key vault for customer-managed keys scenario](../../service-bus-messaging/configure-customer-managed-key.md)| | Azure SQL Database|[Transparent Data Encryption with Bring Your Own Key support for Azure SQL Database and Azure Synapse Analytics](/azure/azure-sql/database/transparent-data-encryption-byok-overview).|
load-balancer Load Balancer Outbound Connections https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/load-balancer-outbound-connections.md
For more information about outbound rules, see [Outbound rules](outbound-rules.m
:::image type="content" source="./media/load-balancer-outbound-connections/nat-gateway.png" alt-text="Diagram of a NAT gateway and public load balancer.":::
-Virtual Network NAT simplifies outbound-only Internet connectivity for virtual networks. When configured on a subnet, all outbound connectivity uses your specified static public IP addresses. Outbound connectivity is possible without load balancer or public IP addresses directly attached to virtual machines. NAT is fully managed and highly resilient.
+Azure NAT Gateway simplifies outbound-only Internet connectivity for virtual networks. When configured on a subnet, all outbound connectivity uses your specified static public IP addresses. Outbound connectivity is possible without load balancer or public IP addresses directly attached to virtual machines. NAT Gateway is fully managed and highly resilient.
Using a NAT gateway is the best method for outbound connectivity. A NAT gateway is highly extensible, reliable, and doesn't have the same concerns of SNAT port exhaustion.
-For more information about Azure Virtual Network NAT, see [What is Azure Virtual Network NAT](../virtual-network/nat-gateway/nat-overview.md).
+For more information about Azure NAT Gateway, see [What is Azure NAT Gateway](../virtual-network/nat-gateway/nat-overview.md).
## 3. Assign a public IP to the virtual machine
load-balancer Troubleshoot Outbound Connection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/troubleshoot-outbound-connection.md
It's important to optimize your Azure deployments for outbound connectivity. Opt
### Use a NAT gateway for outbound connectivity to the Internet
-Virtual network NAT gateway is a highly resilient and scalable Azure service that provides outbound connectivity to the internet from your virtual network. A NAT gatewayΓÇÖs unique method of consuming SNAT ports helps resolve common SNAT exhaustion and connection issues. For more information about Azure Virtual Network NAT, see [What is Virtual Network NAT?](../virtual-network/nat-gateway/nat-overview.md).
+Azure NAT Gateway is a highly resilient and scalable Azure service that provides outbound connectivity to the internet from your virtual network. A NAT gateway's unique method of consuming SNAT ports helps resolve common SNAT exhaustion and connection issues. For more information about Azure NAT Gateway, see [What is Azure NAT Gateway?](../virtual-network/nat-gateway/nat-overview.md).
* **How does a NAT gateway reduce the risk of SNAT port exhaustion?**
Virtual network NAT gateway is a highly resilient and scalable Azure service tha
    A NAT gateway makes its available SNAT ports accessible to every instance in a subnet. This dynamic allocation allows VM instances to use the number of SNAT ports each needs from the available pool of ports for new connections. The dynamic allocation reduces the risk of SNAT exhaustion.
- :::image type="content" source="./media/troubleshoot-outbound-connection/load-balancer-vs-nat.png" alt-text="Diagram of Azure Load Balancer vs. Azure Virtual Network NAT.":::
+ :::image type="content" source="./media/troubleshoot-outbound-connection/load-balancer-vs-nat.png" alt-text="Diagram of Azure Load Balancer vs. Azure NAT Gateway.":::
* **Port selection and reuse behavior.** A NAT gateway selects ports at random from the available pool of ports. If there aren't available ports, SNAT ports will be reused as long as there's no existing connection to the same destination public IP and port. This port selection and reuse behavior of a NAT gateway makes it less likely to experience connection timeouts.
- To learn more about how SNAT and port usage works for NAT gateway, see [SNAT fundamentals](../virtual-network/nat-gateway/nat-gateway-resource.md#fundamentals). There are a few conditions in which you won't be able to use NAT gateway for outbound connections. For more information on NAT gateway limitations, see [Virtual Network NAT limitations](../virtual-network/nat-gateway/nat-gateway-resource.md#limitations).
+ To learn more about how SNAT and port usage works for NAT gateway, see [SNAT fundamentals](../virtual-network/nat-gateway/nat-gateway-resource.md#fundamentals). There are a few conditions in which you won't be able to use NAT gateway for outbound connections. For more information on NAT gateway limitations, see [NAT Gateway limitations](../virtual-network/nat-gateway/nat-gateway-resource.md#limitations).
If you're unable to use a NAT gateway for outbound connectivity, refer to the other migration options described in this article.
load-balancer Upgrade Internalbasic To Publicstandard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/upgrade-internalbasic-to-publicstandard.md
This command also installs the required Az PowerShell module.
### Install with the script directly
-If you do have Az PowerShell module installed and can't uninstall them, or don't want to uninstall them,you can manually download the script using the **Manual Download** tab in the script download link. The script is downloaded as a raw **nupkg** file. To install the script from this **nupkg** file, see [Manual Package Download](/powershell/gallery/gallery/how-to/working-with-packages/manual-download).
+If you have the Az PowerShell module installed and can't or don't want to uninstall it, you can manually download the script using the **Manual Download** tab in the script download link. The script is downloaded as a raw **nupkg** file. To install the script from this **nupkg** file, see [Manual Package Download](/powershell/gallery/gallery/how-to/working-with-packages/manual-download).
To run the script:
The following scenarios explain how you add VMs to the backend pools of the newl
### Create a NAT gateway for outbound access
-The script creates an outbound rule that enables outbound connectivity. Azure Virtual Network NAT is the recommended service for outbound connectivity. For more information about Azure Virtual Network NAT, see [What is Azure Virtual Network NAT?](../virtual-network/nat-gateway/nat-overview.md).
+The script creates an outbound rule that enables outbound connectivity. Azure NAT Gateway is the recommended service for outbound connectivity. For more information about Azure NAT Gateway, see [What is Azure NAT Gateway?](../virtual-network/nat-gateway/nat-overview.md).
To create a NAT gateway resource and associate it with a subnet of your virtual network see, [Create NAT gateway](quickstart-load-balancer-standard-public-portal.md#create-nat-gateway).
machine-learning Concept Compute Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-compute-instance.md
You can also mount [datastores and datasets](v1/concept-azure-machine-learning-a
:::moniker-end ## Create
-Follow the steps in the [Quickstart: Create workspace resources you need to get started with Azure Machine Learning](quickstart-create-resources.md) to create a basic compute instance.
+Follow the steps in [Create resources you need to get started](quickstart-create-resources.md) to create a basic compute instance.
For more options, see [create a new compute instance](how-to-create-manage-compute-instance.md?tabs=azure-studio#create).
You can use compute instance as a local inferencing deployment target for test/d
## Next steps
-* [Quickstart: Create workspace resources you need to get started with Azure Machine Learning](quickstart-create-resources.md).
+* [Create resources you need to get started](quickstart-create-resources.md).
* [Tutorial: Train your first ML model](tutorial-1st-experiment-sdk-train.md) shows how to use a compute instance with an integrated notebook.
machine-learning Concept Workspace https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-workspace.md
To get started with Azure Machine Learning, see:
+ [What is Azure Machine Learning?](overview-what-is-azure-machine-learning.md) + [Create and manage a workspace](how-to-manage-workspace.md) + [Recover a workspace after deletion (soft-delete)](concept-soft-delete.md)
-+ [Tutorial: Get started with Azure Machine Learning](quickstart-create-resources.md)
++ [Get started with Azure Machine Learning](quickstart-create-resources.md) + [Tutorial: Create your first classification model with automated machine learning](tutorial-first-experiment-automated-ml.md) + [Tutorial: Predict automobile price with the designer](tutorial-designer-automobile-price-train-score.md)
machine-learning Ubuntu Upgrade https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/data-science-virtual-machine/ubuntu-upgrade.md
Previously updated : 10/04/2022+ Last updated : 04/19/2023 # Upgrade your Data Science Virtual Machine to Ubuntu 20.04
machine-learning How To Configure Environment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-configure-environment.md
The Azure Machine Learning [compute instance](concept-compute-instance.md) is a
There's nothing to install or configure for a compute instance.
-Create one anytime from within your Azure Machine Learning workspace. Provide just a name and specify an Azure VM type. Try it now with this [Tutorial: Setup environment and workspace](quickstart-create-resources.md).
+Create one anytime from within your Azure Machine Learning workspace. Provide just a name and specify an Azure VM type. Try it now with [Create resources to get started](quickstart-create-resources.md).
To learn more about compute instances, including how to install packages, see [Create and manage an Azure Machine Learning compute instance](how-to-create-manage-compute-instance.md).
machine-learning How To Create Attach Compute Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-create-attach-compute-cluster.md
The dedicated cores per region per VM family quota and total regional quota, whi
The compute autoscales down to zero nodes when it isn't used. Dedicated VMs are created to run your jobs as needed.
-The fastest way to create a compute cluster is to follow the [Quickstart: Create workspace resources you need to get started with Azure Machine Learning](quickstart-create-resources.md).
-Or use the following examples to create a compute cluster with more options:
+Use the following examples to create a compute cluster:
# [Python SDK](#tab/python)
machine-learning How To Create Component Pipeline Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-create-component-pipeline-python.md
If you don't have an Azure subscription, create a free account before you begin.
## Prerequisites
-* Complete the [Quickstart: Get started with Azure Machine Learning](quickstart-create-resources.md) if you don't already have an Azure Machine Learning workspace.
+* Complete the [Create resources to get started](quickstart-create-resources.md) if you don't already have an Azure Machine Learning workspace.
* A Python environment in which you've installed Azure Machine Learning Python SDK v2 - [install instructions](https://github.com/Azure/azureml-examples/tree/sdk-preview/sdk#getting-started) - check the getting started section. This environment is for defining and controlling your Azure Machine Learning resources and is separate from the environment used at runtime for training. * Clone examples repository
machine-learning How To Create Manage Compute Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-create-manage-compute-instance.md
Creating a compute instance is a one time process for your workspace. You can re
The dedicated cores per region per VM family quota and total regional quota, which applies to compute instance creation, is unified and shared with Azure Machine Learning training compute cluster quota. Stopping the compute instance doesn't release quota to ensure you'll be able to restart the compute instance. It isn't possible to change the virtual machine size of compute instance once it's created.
-The fastest way to create a compute instance is to follow the [Quickstart: Create workspace resources you need to get started with Azure Machine Learning](quickstart-create-resources.md).
+The fastest way to create a compute instance is to follow the [Create resources you need to get started](quickstart-create-resources.md).
Or use the following examples to create a compute instance with more options:
machine-learning How To Devops Machine Learning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-devops-machine-learning.md
This tutorial uses [Azure Machine Learning Python SDK v2](/python/api/overview/a
## Prerequisites
-Complete the [Quickstart: Get started with Azure Machine Learning](quickstart-create-resources.md) to:
-* Create a workspace
-* Create a cloud-based compute instance to use for your development environment
-* Create a cloud-based compute cluster to use for training your model
+* Complete the [Create resources to get started](quickstart-create-resources.md) to:
+ * Create a workspace
+ * Create a cloud-based compute instance to use for your development environment
+
+* [Create a cloud-based compute cluster](how-to-create-attach-compute-cluster.md#create) to use for training your model
## Step 1: Get the code
machine-learning How To Enable Preview Features https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-enable-preview-features.md
Some preview features provide access to entire new functionality while others ma
## Prerequisites
-* An Azure Machine Learning workspace. For more information, see [Quickstart: Create workspace resources](quickstart-create-resources.md).
+* An Azure Machine Learning workspace. For more information, see [Create resources to get started](quickstart-create-resources.md).
## How do I enable preview features?
machine-learning How To Manage Files https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-manage-files.md
For example, choose "Indent using spaces" if you want your editor to auto-indent
Your workspace contains a **Samples** folder with notebooks designed to help you explore the SDK and serve as examples for your own machine learning projects. Clone these notebooks into your own folder to run and edit them.
-For an example, see [Quickstart: Run Jupyter notebooks in studio](quickstart-run-notebooks.md#clone-tutorials-folder).
- ## Share files Copy and paste the URL to share a file. Only other users of the workspace can access this URL. Learn more about [granting access to your workspace](how-to-assign-roles.md).
machine-learning How To Manage Registries https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-manage-registries.md
Title: Create and manage registries (preview)
-description: Learn how create registries with the CLI, Azure portal and Azure Machine Learning Studio
+description: Learn how to create registries with the CLI, REST API, Azure portal and Azure Machine Learning studio
--++ Previously updated : 09/21/2022 Last updated : 04/12/2023
Azure Machine Learning entities can be grouped into two broad categories:
Assets lend themselves to being stored in a central repository and used in different workspaces, possibly in different regions. Resources are workspace specific.
-Azure Machine Learning registries (preview) enable you to create and use those assets in different workspaces. Registries support multi-region replication for low latency access to assets, so you can use assets in workspaces located in different Azure regions. Creating a registry will provision Azure resources required to facilitate replication. First, Azure blob storage accounts in each supported region. Second, a single Azure Container Registry with replication enabled to each supported region.
+Azure Machine Learning registries (preview) enable you to create and use those assets in different workspaces. Registries support multi-region replication for low-latency access to assets, so you can use assets in workspaces located in different Azure regions. Creating a registry provisions the Azure resources required to facilitate replication: first, an Azure Blob storage account in each supported region; second, a single Azure Container Registry with replication enabled to each supported region.
:::image type="content" source="./media/how-to-manage-registries/machine-learning-registry-block-diagram.png" alt-text="Diagram of the relationships between assets in workspace and registry.":::
Create the YAML definition and name it `registry.yml`.
```YAML
name: DemoRegistry1
-description: Basic registry with one primary region and to additional regions
tags:
+ description: Basic registry with one primary region and two additional regions
  foo: bar
location: eastus
replication_locations:
You can create registries in Azure Machine Learning studio using the following s
1. Review the information and select __Create__. ++
+# [REST API](#tab/rest)
+
+> [!TIP]
+> You need the **curl** utility to complete this step. The **curl** program is available in the [Windows Subsystem for Linux](/windows/wsl/install-win10) or any UNIX distribution. In PowerShell, **curl** is an alias for **Invoke-WebRequest** and `curl -d "key=val" -X POST uri` becomes `Invoke-WebRequest -Body "key=val" -Method POST -Uri uri`.
+
+To authenticate REST API calls, you need an authentication token for your Azure user account. You can use the following command to retrieve a token:
+
+```azurecli
+az account get-access-token
+```
+
+The response should provide an access token good for one hour. Make note of the token, as you use it to authenticate all administrative requests. The following JSON is a sample response:
+
+> [!TIP]
+> The value of the `access_token` field is the token.
+
+```json
+{
+ "access_token": "YOUR-ACCESS-TOKEN",
+ "expiresOn": "<expiration-time>",
+ "subscription": "<subscription-id>",
+ "tenant": "your-tenant-id",
+ "tokenType": "Bearer"
+}
+```
+
+To create a registry, use the following command. You can edit the JSON to change the inputs as needed. Replace the `<YOUR-ACCESS-TOKEN>` value with the access token retrieved previously:
+
+```bash
+curl -X PUT https://management.azure.com/subscriptions/<your-subscription-id>/resourceGroups/<your-resource-group>/providers/Microsoft.MachineLearningServices/registries/reg-from-rest?api-version=2022-12-01-preview -H "Authorization:Bearer <YOUR-ACCESS-TOKEN>" -H 'Content-Type: application/json' -d '
+{
+ "properties":
+ {
+ "regionDetails":
+ [
+ {
+ "location": "eastus",
+ "storageAccountDetails":
+ [
+ {
+ "systemCreatedStorageAccount":
+ {
+ "storageAccountType": "Standard_LRS"
+ }
+ }
+ ],
+ "acrDetails":
+ [
+ {
+ "systemCreatedAcrAccount":
+ {
+ "acrAccountSku": "Premium"
+ }
+ }
+ ]
+ }
+ ]
+ },
+ "identity": {
+ "type": "SystemAssigned"
+ },
+ "location": "eastus"
+}
+'
+```
+
+You should receive a `202 Accepted` response.
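If you prefer to check the provisioning status from Python instead of curl, the following is a minimal sketch. It assumes the `requests` package is installed and reuses the same resource URI, API version, and access-token placeholder as the `curl` call above; like most ARM resources, the registry reports its state in `properties.provisioningState`.

```python
# Minimal status-check sketch (assumptions: 'requests' is installed; same URI,
# token, and API version as the curl PUT above).
import requests

url = (
    "https://management.azure.com/subscriptions/<your-subscription-id>"
    "/resourceGroups/<your-resource-group>"
    "/providers/Microsoft.MachineLearningServices/registries/reg-from-rest"
    "?api-version=2022-12-01-preview"
)
response = requests.get(url, headers={"Authorization": "Bearer <YOUR-ACCESS-TOKEN>"})
response.raise_for_status()
print(response.json().get("properties", {}).get("provisioningState"))
```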
+ + ## Specify storage account type and SKU (optional) > [!TIP]
Next, decide if you want to use an [Azure Blob storage](../storage/blobs/storage
> [!NOTE] >The `hns` portion of `storage_account_hns` refers to the [hierarchical namespace](../storage/blobs/data-lake-storage-namespace.md) capability of Azure Data Lake Storage Gen2 accounts.
-Below is an example YAML that demonstrates this advanced storage configuration:
+The following example YAML file demonstrates this advanced storage configuration:
```YAML
name: DemoRegistry2
-description: Registry with additional configuration for storage accounts
tags:
+ description: Registry with additional configuration for storage accounts
  foo: bar
location: eastus
replication_locations:
Permission | Description
--|--
Microsoft.MachineLearningServices/registries/write | Allows the user to create or update registries
Microsoft.MachineLearningServices/registries/delete | Allows the user to delete registries

## Next steps
machine-learning How To Manage Rest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-manage-rest.md
In this article, you learn how to:
## Prerequisites - An **Azure subscription** for which you have administrative rights. If you don't have such a subscription, try the [free or paid personal subscription](https://azure.microsoft.com/free/)-- An [Azure Machine Learning Workspace](quickstart-create-resources.md).
+- An [Azure Machine Learning workspace](quickstart-create-resources.md).
- Administrative REST requests use service principal authentication. Follow the steps in [Set up authentication for Azure Machine Learning resources and workflows](./how-to-setup-authentication.md#service-principal-authentication) to create a service principal in your workspace - The **curl** utility. The **curl** program is available in the [Windows Subsystem for Linux](/windows/wsl/install-win10) or any UNIX distribution. In PowerShell, **curl** is an alias for **Invoke-WebRequest** and `curl -d "key=val" -X POST uri` becomes `Invoke-WebRequest -Body "key=val" -Method POST -Uri uri`.
machine-learning How To R Deploy R Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-r-deploy-r-model.md
These steps assume you have an Azure Container Registry associated with your wor
Once you have verified that you have at least one custom environment, use the following steps to build a container.
-1. Open a terminal window and sign in to Azure. If you're doing this from an [Azure Machine Learning compute instance](quickstart-create-resources.md#create-compute-instance), use:
+1. Open a terminal window and sign in to Azure. If you're doing this from an [Azure Machine Learning compute instance](quickstart-create-resources.md#create-a-compute-instance), use:
```azurecli az login --identity
machine-learning How To R Interactive Development https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-r-interactive-development.md
Many R users also use RStudio, a popular IDE. You can install RStudio or Posit W
- If you don't have an Azure subscription, create a free account before you begin. Try the [free or paid version of Azure Machine Learning](https://azure.microsoft.com/free/) today - An [Azure Machine Learning workspace and a compute instance](quickstart-create-resources.md)-- A basic understand of using Jupyter notebooks in Azure Machine Learning studio. For more information, see [Quickstart: Run Jupyter notebooks in studio](quickstart-run-notebooks.md)
+- A basic understanding of using Jupyter notebooks in Azure Machine Learning studio. For more information, see [Model development on a cloud workstation](tutorial-cloud-workstation.md).
## Run R in a notebook in studio
You'll use a notebook in your Azure Machine Learning workspace, on a compute ins
1. Create a new notebook, named **RunR.ipynb** > [!TIP]
- > If you're not sure how to create and work with notebooks in studio, review [Quickstart: Run Jupyter notebooks in studio](quickstart-run-notebooks.md).
+ > If you're not sure how to create and work with notebooks in studio, review [Run Jupyter notebooks in your workspace](how-to-run-jupyter-notebooks.md)
1. Select the notebook. 1. On the notebook toolbar, make sure your compute instance is running. If not, start it now.
machine-learning How To R Train Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-r-train-model.md
This article explains how to take the R script that you [adapted to run in produ
- An [Azure Machine Learning workspace](quickstart-create-resources.md). - [A registered data asset](how-to-create-data-assets.md) that your training job will use. - Azure [CLI and ml extension installed](how-to-configure-cli.md). Or use a [compute instance in your workspace](quickstart-create-resources.md), which has the CLI pre-installed.-- [A compute cluster](how-to-create-attach-compute-cluster.md) or [compute instance](quickstart-create-resources.md#create-compute-instance) to run your training job.
+- [A compute cluster](how-to-create-attach-compute-cluster.md) or [compute instance](quickstart-create-resources.md#create-a-compute-instance) to run your training job.
- [An R environment](how-to-r-modify-script-for-production.md#create-an-environment) for the compute cluster to use to run the job. ## Create a folder with this structure
To submit the job, run the following commands in a terminal window:
cd r-job-azureml ```
-1. Sign in to Azure. If you're doing this from an [Azure Machine Learning compute instance](quickstart-create-resources.md#create-compute-instance), use:
+1. Sign in to Azure. If you're doing this from an [Azure Machine Learning compute instance](quickstart-create-resources.md#create-a-compute-instance), use:
```azurecli az login --identity
machine-learning How To Train Keras https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-train-keras.md
To benefit from this article, you'll need to:
- Access an Azure subscription. If you don't have one already, [create a free account](https://azure.microsoft.com/free/). - Run the code in this article using either an Azure Machine Learning compute instance or your own Jupyter notebook. - Azure Machine Learning compute instance - no downloads or installation necessary
- - Complete the [Quickstart: Get started with Azure Machine Learning](quickstart-create-resources.md) to create a dedicated notebook server pre-loaded with the SDK and the sample repository.
+ - Complete [Create resources to get started](quickstart-create-resources.md) to create a dedicated notebook server pre-loaded with the SDK and the sample repository.
- In the samples deep learning folder on the notebook server, find a completed and expanded notebook by navigating to this directory: **v2 > sdk > python > jobs > single-step > tensorflow > train-hyperparameter-tune-deploy-with-keras**. - Your Jupyter notebook server - [Install the Azure Machine Learning SDK (v2)](https://aka.ms/sdk-v2-install).
machine-learning How To Train Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-train-model.md
Azure Machine Learning provides multiple ways to submit ML training jobs. In thi
## Prerequisites * An Azure subscription. If you don't have an Azure subscription, create a free account before you begin. Try the [free or paid version of Azure Machine Learning](https://azure.microsoft.com/free/).
-* An Azure Machine Learning workspace. If you don't have one, you can use the steps in the [Quickstart: Create Azure Machine Learning resources](quickstart-create-resources.md) article.
+* An Azure Machine Learning workspace. If you don't have one, you can use the steps in the [Create resources to get started](quickstart-create-resources.md) article.
# [Python SDK](#tab/python)
machine-learning How To Train Scikit Learn https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-train-scikit-learn.md
Whether you're training a machine learning scikit-learn model from the ground-up
You can run the code for this article in either an Azure Machine Learning compute instance, or your own Jupyter Notebook. - Azure Machine Learning compute instance
- - Complete the [Quickstart: Get started with Azure Machine Learning](quickstart-create-resources.md) to create a compute instance. Every compute instance includes a dedicated notebook server pre-loaded with the SDK and the notebooks sample repository.
+ - Complete [Create resources to get started](quickstart-create-resources.md) to create a compute instance. Every compute instance includes a dedicated notebook server pre-loaded with the SDK and the notebooks sample repository.
- Select the notebook tab in the Azure Machine Learning studio. In the samples training folder, find a completed and expanded notebook by navigating to this directory: **v2 > sdk > jobs > single-step > scikit-learn > train-hyperparameter-tune-deploy-with-sklearn**. - You can use the pre-populated code in the sample training folder to complete this tutorial.
machine-learning How To Train Tensorflow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-train-tensorflow.md
To benefit from this article, you'll need to:
- Access an Azure subscription. If you don't have one already, [create a free account](https://azure.microsoft.com/free/). - Run the code in this article using either an Azure Machine Learning compute instance or your own Jupyter notebook. - Azure Machine Learning compute instance - no downloads or installation necessary
- - Complete the [Quickstart: Get started with Azure Machine Learning](quickstart-create-resources.md) to create a dedicated notebook server pre-loaded with the SDK and the sample repository.
+ - Complete the [Create resources to get started](quickstart-create-resources.md) to create a dedicated notebook server pre-loaded with the SDK and the sample repository.
- In the samples deep learning folder on the notebook server, find a completed and expanded notebook by navigating to this directory: **v2 > sdk > python > jobs > single-step > tensorflow > train-hyperparameter-tune-deploy-with-tensorflow**. - Your Jupyter notebook server - [Install the Azure Machine Learning SDK (v2)](https://aka.ms/sdk-v2-install).
machine-learning How To Use Secrets In Runs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-secrets-in-runs.md
Before following the steps in this article, make sure you have the following pre
* An Azure subscription. If you don't have an Azure subscription, create a free account before you begin. Try the [free or paid version of Azure Machine Learning](https://azure.microsoft.com/free/).
-* An Azure Machine Learning workspace. If you don't have one, use the steps in the [Quickstart: Create workspace resources](quickstart-create-resources.md) article to create one.
+* An Azure Machine Learning workspace. If you don't have one, use the steps in the [Create resources to get started](quickstart-create-resources.md) article to create one.
-* An Azure Key Vault. If you used the [Quickstart: Create workspace resources](quickstart-create-resources.md) article to create your workspace, a key vault was created for you. You can also create a separate key vault instance using the information in the [Quickstart: Create a key vault](../key-vault/general/quick-create-portal.md) article.
+* An Azure Key Vault. If you used the [Create resources to get started](quickstart-create-resources.md) article to create your workspace, a key vault was created for you. You can also create a separate key vault instance using the information in the [Quickstart: Create a key vault](../key-vault/general/quick-create-portal.md) article.
> [!TIP] > You do not have to use the same key vault as the workspace.
machine-learning Quickstart Create Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/quickstart-create-resources.md
Title: "Quickstart: Create workspace resources"
+ Title: "Create workspace resources"
description: Create an Azure Machine Learning workspace and cloud resources that can be used to train machine learning models. -+ Previously updated : 08/26/2022 Last updated : 03/15/2023 adobe-target: true #Customer intent: As a data scientist, I want to create a workspace so that I can start to use Azure Machine Learning.
-# Quickstart: Create workspace resources you need to get started with Azure Machine Learning
+# Create resources you need to get started
-In this quickstart, you'll create a workspace and then add compute resources to the workspace. You'll then have everything you need to get started with Azure Machine Learning.
-
-The workspace is the top-level resource for your machine learning activities, providing a centralized place to view and manage the artifacts you create when you use Azure Machine Learning. The compute resources provide a pre-configured cloud-based environment you can use to train, deploy, automate, manage, and track machine learning models.
+In this article, you'll create the resources you need to start working with Azure Machine Learning.
+* A *workspace*. To use Azure Machine Learning, you'll first need a workspace. The workspace is the central place to view and manage all the artifacts and resources you create.
+* A *compute instance*. A compute instance is a pre-configured cloud-computing resource that you can use to train, automate, manage, and track machine learning models. A compute instance is the quickest way to start using the Azure Machine Learning SDKs and CLIs. You'll use it to run Jupyter notebooks and Python scripts in the rest of the tutorials.
## Prerequisites -- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/machine-learning).
## Create the workspace
-If you already have a workspace, skip this section and continue to [Create a compute instance](#create-compute-instance).
+The workspace is the top-level resource for your machine learning activities, providing a centralized place to view and manage the artifacts you create when you use Azure Machine Learning.
+
+If you already have a workspace, skip this section and continue to [Create a compute instance](#create-a-compute-instance).
If you don't yet have a workspace, create one now: + 1. Sign in to [Azure Machine Learning studio](https://ml.azure.com) 1. Select **Create workspace** 1. Provide the following information to configure your new workspace:
If you don't yet have a workspace, create one now:
> [!NOTE] > This creates a workspace along with all required resources. If you would like to reuse resources, such as Storage Account, Azure Container Registry, Azure KeyVault, or Application Insights, use the [Azure portal](https://ms.portal.azure.com/#create/Microsoft.MachineLearningServices) instead.
-## Create compute instance
+## Create a compute instance
-You could install Azure Machine Learning on your own computer. But in this quickstart, you'll create an online compute resource that has a development environment already installed and ready to go. You'll use this online machine, a *compute instance*, for your development environment to write and run code in Python scripts and Jupyter notebooks.
+You'll use the *compute instance* to run Jupyter notebooks and Python scripts in the rest of the tutorials. If you don't yet have a compute instance, create one now:
-Create a *compute instance* to use this development environment for the rest of the tutorials and quickstarts.
+1. On the left navigation, select **Notebooks**.
+1. Select **Create compute** in the middle of the page.
-1. If you didn't just create a workspace in the previous section, sign in to [Azure Machine Learning studio](https://ml.azure.com) now, and select your workspace.
-1. On the left side, select **Compute**.
+ :::image type="content" source="media/quickstart-create-resources/create-compute.png" alt-text="Screenshot shows create compute in the middle of the screen.":::
- :::image type="content" source="media/quickstart-create-resources/compute-section.png" alt-text="Screenshot: shows Compute section on left hand side of screen." lightbox="media/quickstart-create-resources/compute-section.png":::
+ > [!TIP]
+ > You'll only see this option if you don't yet have a compute instance in your workspace.
-1. Select **+New** to create a new compute instance.
-1. Supply a name, Keep all the defaults on the first page.
+1. Supply a name. Keep all the defaults on the first page.
+1. Keep the default values for the rest of the page.
1. Select **Create**.
-
-In about two minutes, you'll see the **State** of the compute instance change from *Creating* to *Running*. It's now ready to go.
-
-## Create compute clusters
-
-Next you'll create a compute cluster. You'll submit code to this cluster to distribute your training or batch inference processes across a cluster of CPU or GPU compute nodes in the cloud.
-
-Create a compute cluster that will autoscale between zero and four nodes:
-
-1. Still in the **Compute** section, in the top tab, select **Compute clusters**.
-1. Select **+New** to create a new compute cluster.
-1. Keep all the defaults on the first page, select **Next**. If you don't see any available compute, you'll need to request a quota increase. Learn more about [managing and increasing quotas](how-to-manage-quotas.md).
-1. Name the cluster **cpu-cluster**. If this name already exists, add your initials to the name to make it unique.
-1. Leave the **Minimum number of nodes** at 0.
-1. Change the **Maximum number of nodes** to 4 if possible. Depending on your settings, you may have a smaller limit.
-1. Change the **Idle seconds before scale down** to 2400.
-1. Leave the rest of the defaults, and select **Create**.
-
-In less than a minute, the **State** of the cluster will change from *Creating* to *Succeeded*. The list shows the provisioned compute cluster, along with the number of idle nodes, busy nodes, and unprovisioned nodes. Since you haven't used the cluster yet, all the nodes are currently unprovisioned.
-
-> [!NOTE]
-> When the cluster is created, it will have 0 nodes provisioned. The cluster *does not* incur costs until you submit a job. This cluster will scale down when it has been idle for 2,400 seconds (40 minutes). This will give you time to use it in a few tutorials if you wish without waiting for it to scale back up.
## Quick tour of the studio
Review the parts of the studio on the left-hand navigation bar:
* The **Assets** section of the studio helps you keep track of the assets you create as you run your jobs. If you have a new workspace, there's nothing in any of these sections yet.
-* You already used the **Manage** section of the studio to create your compute resources. This section also lets you create and manage data and external services you link to your workspace.
+* The **Manage** section of the studio lets you create and manage compute and external services you link to your workspace. It's also where you can create and manage a **Data labeling** project.
+
-### Workspace diagnostics
+## Learn from sample notebooks
+Use the sample notebooks available in studio to help you learn about how to train and deploy models. They're referenced in many of the other articles and tutorials.
+
+1. On the left navigation, select **Notebooks**.
+1. At the top, select **Samples**.
++
+* Use notebooks in the **SDK v2** folder for examples that show the current version of the SDK, v2.
+* These notebooks are read-only, and are updated periodically.
+* When you open a notebook, select the **Clone this notebook** button at the top to add your copy of the notebook and any associated files into your own files. A new folder with the notebook is created for you in the **Files** section.
+
+## Create a new notebook
+
+When you clone a notebook from **Samples**, a copy is added to your files and you can start running or modifying it. Many of the tutorials will mirror these sample notebooks.
+
+But you could also create a new, empty notebook, then copy/paste code from a tutorial into the notebook. To do so:
+
+1. Still in the **Notebooks** section, select **Files** to go back to your files.
+1. Select **+** to add files.
+1. Select **Create new file**.
+
+ :::image type="content" source="media/quickstart-create-resources/create-new-file.png" alt-text="Screenshot shows how to create a new file.":::
+
## Clean up resources
-If you plan to continue now to the next tutorial, skip to [Next steps](#next-steps).
+If you plan to continue now to other tutorials, skip to [Next steps](#next-steps).
### Stop compute instance
If you're not going to use it now, stop the compute instance:
## Next steps
-You now have an Azure Machine Learning workspace that contains:
--- A compute instance to use for your development environment.-- A compute cluster to use for submitting training runs.
+You now have an Azure Machine Learning workspace, which contains a compute instance to use for your development environment.
-Use these resources to learn more about Azure Machine Learning and train a model with Python scripts.
+Continue on to learn how to use the compute instance to run notebooks and scripts in the Azure Machine Learning cloud.
> [!div class="nextstepaction"]
-> [Quickstart: Run Jupyter notebook in Azure Machine Learning studio](quickstart-run-notebooks.md)
->
+> [Quickstart: Get to know Azure Machine Learning](tutorial-azure-ml-in-a-day.md)
+
+Use your compute instance with the following tutorials to train and deploy a model.
+
+|Tutorial |Description |
+|||
+| [Upload, access and explore your data in Azure Machine Learning](tutorial-explore-data.md) | Store large data in the cloud and retrieve it from notebooks and scripts |
+| [Model development on a cloud workstation](tutorial-cloud-workstation.md) | Start prototyping and developing machine learning models |
+| [Train a model in Azure Machine Learning](tutorial-train-model.md) | Dive in to the details of training a model |
+| [Deploy a model as an online endpoint](tutorial-deploy-model.md) | Dive in to the details of deploying a model |
+| [Create production machine learning pipelines](tutorial-pipeline-python-sdk.md) | Split a complete machine learning task into a multistep workflow. |
machine-learning Quickstart Run Notebooks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/quickstart-run-notebooks.md
- Title: "Quickstart: Run notebooks"-
-description: Learn to run Jupyter notebooks in studio, and find sample notebooks to learn more about Azure Machine Learning.
-------- Previously updated : 09/28/2022
-adobe-target: true
-#Customer intent: As a data scientist, I want to run notebooks and explore sample notebooks in Azure Machine Learning.
--
-# Quickstart: Run Jupyter notebooks in studio
-
-Get started with Azure Machine Learning by using Jupyter notebooks to learn more about the Python SDK.
-
-In this quickstart, you'll learn how to run notebooks on a *compute instance* in Azure Machine Learning studio. A compute instance is an online compute resource that has a development environment already installed and ready to go.
-
-You'll also learn where to find sample notebooks to help jump-start your path to training and deploying models with Azure Machine Learning.
-
-## Prerequisites
--- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).-- Run the [Quickstart: Create workspace resources you need to get started with Azure Machine Learning](quickstart-create-resources.md) to create a workspace and a compute instance.-
-## Create a new notebook
-
-Create a new notebook in studio.
-
-1. Sign into [Azure Machine Learning studio](https://ml.azure.com).
-1. Select your workspace, if it isn't already open.
-1. On the left, select **Notebooks**.
-1. Select **Create new file**.
-
- :::image type="content" source="media/quickstart-run-notebooks/create-new-file.png" alt-text="Screenshot: create a new notebook file.":::
-
-1. Name your new notebook **my-new-notebook.ipynb**.
--
-## Create a markdown cell
-
-1. On the upper right of each notebook cell is a toolbar of actions you can use for that cell. Select the **Convert to markdown cell** tool to change the cell to markdown.
-
- :::image type="content" source="media/quickstart-run-notebooks/convert-to-markdown.png" alt-text="Screenshot: Convert to markdown.":::
-
-1. Double-click on the cell to open it.
-1. Inside the cell, type:
-
- ```markdown
- # Testing a new notebook
- Use markdown cells to add nicely formatted content to the notebook.
- ```
-
-## Create a code cell
-
-1. Just below the cell, select **+ Code** to create a new code cell.
-1. Inside this cell, add:
-
- ```python
- print("Hello, world!")
- ```
-
-## Run the code
-
-1. If you stopped your compute instance at the end of the [Quickstart: Create workspace resources you need to get started with Azure Machine Learning](quickstart-create-resources.md), start it again now:
-
- :::image type="content" source="media/quickstart-run-notebooks/start-compute.png" alt-text="Screenshot: Start a compute instance.":::
-
-1. Wait until the compute instance is "Running". When it is running, the **Compute instance** dot is green. You can also see the status after the compute instance name. You may have to select the arrow to see the full name.
-
- :::image type="content" source="media/quickstart-run-notebooks/compute-running.png" alt-text="Screenshot: Compute is running.":::
-
-1. You can run code cells either by using **Shift + Enter**, or by selecting the **Run cell** tool to the right of the cell. Use one of these methods to run the cell now.
-
- :::image type="content" source="media/quickstart-run-notebooks/run-cell.png" alt-text="Screenshot: run cell tool.":::
-
-1. The brackets to the left of the cell now have a number inside. The number represents the order in which cells were run. Since this is the first cell you've run, you'll see `[1]` next to the cell. You also see the output of the cell, `Hello, world!`.
-
-1. Run the cell again. You'll see the same output (since you didn't change the code), but now the brackets contain `[2]`. As your notebook gets larger, these numbers help you understand what code was run, and in what order.
-
-## Run a second code cell
-
-1. Add a second code cell:
-
- ```python
- two = 1 + 1
- print("One plus one is ",two)
- ```
-
-1. Run the new cell.
-1. Your notebook now looks like:
-
- :::image type="content" source="media/quickstart-run-notebooks/notebook.png" alt-text="Screenshot: Notebook contents.":::
-
-## See your variables
-
-Use the **Variable explorer** to see the variables that are defined in your session.
-
-1. Select the **"..."** in the notebook toolbar.
-1. Select **Variable explorer**.
-
- :::image type="content" source="media/quickstart-run-notebooks/variable-explorer.png" alt-text="Screenshot: Variable explorer tool.":::":::
-
- The explorer appears at the bottom. You currently have one variable, `two`, assigned.
-
-1. Add another code cell:
-
- ```python
- three = 1+two
- ```
-
-1. Run this cell to see the variable `three` appear in the variable explorer.
-
-## Learn from sample notebooks
-
-There are sample notebooks available in studio to help you learn more about Azure Machine Learning. To find these samples:
-
-1. Still in the **Notebooks** section, select **Samples** at the top.
-
- :::image type="content" source="media/quickstart-run-notebooks/samples.png" alt-text="Screenshot: Sample notebooks.":::
-
-1. The **SDK v1** folder can be used with the previous, v1 version of the SDK. If you're just starting, you won't need these samples.
-1. Use notebooks in the **SDK v2** folder for examples that show the current version of the SDK, v2.
-1. Select the notebook **SDK v2/tutorials/azureml-in-a-day/azureml-in-a-day.ipynb**. You'll see a read-only version of the notebook.
-1. To get your own copy, you can select **Clone this notebook**. This action will also copy the rest of the folder's content for that notebook. No need to do that now, though, as you're going to instead clone the whole folder.
-
-## Clone tutorials folder
-
-You can also clone an entire folder. The **tutorials** folder is a good place to start learning more about how Azure Machine Learning works.
-
-1. Open the **SDK v2** folder.
-1. Select the **"..."** at the right of **tutorials** folder to get the menu, then select **Clone**.
-
- :::image type="content" source="media/quickstart-run-notebooks/clone-folder.png" alt-text="Screenshot: clone v2 tutorials folder.":::
-
-1. Your new folder is now displayed in the **Files** section.
-1. Run the notebooks in this folder to learn more about using the Python SDK v2 to train and deploy models.
-
-## Clean up resources
-
-If you plan to continue now to the next tutorial, skip to [Next steps](#next-steps).
-
-### Delete all resources
--
-## Next steps
-
-> [!div class="nextstepaction"]
-> [Tutorial: Azure Machine Learning in a day](tutorial-azure-ml-in-a-day.md)
->
machine-learning Samples Notebooks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/samples-notebooks.md
This article shows you how to access the repository from the following environme
## Option 1: Access on Azure Machine Learning compute instance (recommended)
-The easiest way to get started with the samples is to complete the [Quickstart: Get started with Azure Machine Learning](quickstart-create-resources.md). Once completed, you'll have a dedicated notebook server pre-loaded with the SDK and the Azure Machine Learning Notebooks repository. No downloads or installation necessary.
+The easiest way to get started with the samples is to complete the [Create resources to get started](quickstart-create-resources.md). Once completed, you'll have a dedicated notebook server pre-loaded with the SDK and the Azure Machine Learning Notebooks repository. No downloads or installation necessary.
To view example notebooks: 1. Sign in to [studio](https://ml.azure.com) and select your workspace if necessary.
machine-learning Tutorial Auto Train Image Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/tutorial-auto-train-image-models.md
You'll write code using the Python SDK in this tutorial and learn the following
* Python 3.6 or 3.7 are supported for this feature
-* Complete the [Quickstart: Get started with Azure Machine Learning](quickstart-create-resources.md#create-the-workspace) if you don't already have an Azure Machine Learning workspace.
+* Complete [Create resources to get started](quickstart-create-resources.md#create-the-workspace) if you don't already have an Azure Machine Learning workspace.
* Download and unzip the [**odFridgeObjects.zip*](https://cvbp-secondary.z19.web.core.windows.net/datasets/object_detection/odFridgeObjects.zip) data file. The dataset is annotated in Pascal VOC format, where each image corresponds to an xml file. Each xml file contains information on where its corresponding image file is located and also contains information about the bounding boxes and the object labels. In order to use this data, you first need to convert it to the required JSONL format as seen in the [Convert the downloaded data to JSONL](https://github.com/Azure/azureml-examples/blob/main/sdk/python/jobs/automl-standalone-jobs/automl-image-object-detection-task-fridge-items/automl-image-object-detection-task-fridge-items.ipynb) section of the notebook.
machine-learning Tutorial Azure Ml In A Day https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/tutorial-azure-ml-in-a-day.md
Title: "Tutorial: Azure Machine Learning in a day"
+ Title: "Quickstart: Get started with Azure Machine Learning"
description: Use Azure Machine Learning to train and deploy a model in a cloud-based Python Jupyter Notebook. -+ Previously updated : 02/02/2023 Last updated : 03/15/2023 #Customer intent: As a professional data scientist, I want to know how to build and deploy a model with Azure Machine Learning by using Python in a Jupyter Notebook.
-# Tutorial: Azure Machine Learning in a day
+# Quickstart: Get started with Azure Machine Learning
[!INCLUDE [sdk v2](../../includes/machine-learning-sdk-v2.md)]
-Learn how a data scientist uses Azure Machine Learning to train a model, then use the model for prediction. This tutorial will help you become familiar with the core concepts of Azure Machine Learning and their most common usage.
+This tutorial is an introduction to some of the most used features of the Azure Machine Learning service. In it, you will create, register and deploy a model. This tutorial will help you become familiar with the core concepts of Azure Machine Learning and their most common usage.
-You'll learn how to submit a *command job* to run your *training script* on a specified *compute resource*, configured with the *job environment* necessary to run the script.
+You'll learn how to run a training job on a scalable compute resource, deploy the resulting model, and finally test the deployment.
-The *training script* handles the data preparation, then trains and registers a model. Once you have the model, you'll deploy it as an *endpoint*, then call the endpoint for inferencing.
+You'll create a training script to handle the data preparation, train and register a model. Once you train the model, you'll *deploy* it as an *endpoint*, then call the endpoint for *inferencing*.
The steps you'll take are: > [!div class="checklist"]
-> * Connect to your Azure Machine Learning workspace
-> * Create your compute resource and job environment
+> * Set up a handle to your Azure Machine Learning workspace
> * Create your training script
-> * Create and run your command job to run the training script on the compute resource, configured with the appropriate job environment
+> * Create a scalable compute resource, a compute cluster
+> * Create and run a command job that will run the training script on the compute cluster, configured with the appropriate job environment
> * View the output of your training script > * Deploy the newly-trained model as an endpoint > * Call the Azure Machine Learning endpoint for inferencing - ## Prerequisites
-* Complete the [Quickstart: Get started with Azure Machine Learning](quickstart-create-resources.md) to:
- * Create a workspace.
- * Create a cloud-based compute instance to use for your development environment.
-
-* Create a new notebook or copy our notebook.
- * Follow the [Quickstart: Run Juypter notebook in Azure Machine Learning studio](quickstart-run-notebooks.md) steps to create a new notebook.
- * Or use the steps in the quickstart to [clone the v2 tutorials folder](quickstart-run-notebooks.md#learn-from-sample-notebooks), then open the notebook from the **tutorials/azureml-in-a-day/azureml-in-a-day.ipynb** folder in your **File** section.
-
-## Run your notebook
-
-1. On the top bar, select the compute instance you created during the [Quickstart: Get started with Azure Machine Learning](quickstart-create-resources.md) to use for running the notebook.
+1. [!INCLUDE [workspace](includes/prereq-workspace.md)]
-2. Make sure that the kernel, found on the top right, is `Python 3.10 - SDK v2`. If not, use the dropdown to select this kernel.
+1. [!INCLUDE [sign in](includes/prereq-sign-in.md)]
+1. [!INCLUDE [open or create notebook](includes/prereq-open-or-create.md)]
+ * [!INCLUDE [new notebook](includes/prereq-new-notebook.md)]
+ * Or, open **tutorials/get-started-notebooks/quickstart.ipynb** from the **Samples** section of studio. [!INCLUDE [clone notebook](includes/prereq-clone-notebook.md)]
-> [!Important]
-> The rest of this tutorial contains cells of the tutorial notebook. Copy/paste them into your new notebook, or switch to the notebook now if you cloned it.
->
-> To run a single code cell in a notebook, click the code cell and hit **Shift+Enter**. Or, run the entire notebook by choosing **Run all** from the top toolbar.
+<!-- nbstart https://raw.githubusercontent.com/Azure/azureml-examples/main/tutorials/get-started-notebooks/quickstart.ipynb -->
-## Connect to the workspace
-Before you dive in the code, you'll need to connect to your Azure Machine Learning workspace. The workspace is the top-level resource for Azure Machine Learning, providing a centralized place to work with all the artifacts you create when you use Azure Machine Learning.
+## Create handle to workspace
-We're using `DefaultAzureCredential` to get access to workspace.
-`DefaultAzureCredential` is used to handle most Azure SDK authentication scenarios.
+Before you dive into the code, you need a way to reference your workspace. The workspace is the top-level resource for Azure Machine Learning, providing a centralized place to work with all the artifacts you create when you use Azure Machine Learning.
-[!notebook-python[](~/azureml-examples-main/tutorials/azureml-in-a-day/azureml-in-a-day.ipynb?name=credential)]
+You'll create `ml_client` for a handle to the workspace. You'll then use `ml_client` to manage resources and jobs.
In the next cell, enter your Subscription ID, Resource Group name and Workspace name. To find these values:
In the next cell, enter your Subscription ID, Resource Group name and Workspace
:::image type="content" source="media/tutorial-azure-ml-in-a-day/find-credentials.png" alt-text="Screenshot: find the credentials for your code in the upper right of the toolbar.":::
-[!notebook-python[](~/azureml-examples-main/tutorials/azureml-in-a-day/azureml-in-a-day.ipynb?name=ml_client)]
-
-The result is a handler to the workspace that you'll use to manage other resources and jobs.
-
-> [!IMPORTANT]
-> Creating MLClient will not connect to the workspace. The client initialization is lazy, it will wait for the first time it needs to make a call (in the notebook below, that will happen during compute creation).
-
-## Create a compute resource to run your job
-
-You'll need a compute resource for running a job. It can be single or multi-node machines with Linux or Windows OS, or a specific compute fabric like Spark.
-
-You'll provision a Linux compute cluster. See the [full list on VM sizes and prices](https://azure.microsoft.com/pricing/details/machine-learning/) .
-
-For this example, you only need a basic cluster, so you'll use a Standard_DS3_v2 model with 2 vCPU cores, 7-GB RAM and create an Azure Machine Learning Compute.
-
-[!notebook-python[](~/azureml-examples-main/tutorials/azureml-in-a-day/azureml-in-a-day.ipynb?name=cpu_compute_target)]
-
-## Create a job environment
-
-To run your Azure Machine Learning job on your compute resource, you'll need an [environment](concept-environments.md). An environment lists the software runtime and libraries that you want installed on the compute where you'll be training. It's similar to your Python environment on your local machine.
+```python
+from azure.ai.ml import MLClient
+from azure.identity import DefaultAzureCredential
-Azure Machine Learning provides many curated or ready-made environments, which are useful for common training and inference scenarios. You can also create your own custom environments using a docker image, or a conda configuration.
+# authenticate
+credential = DefaultAzureCredential()
-In this example, you'll create a custom conda environment for your jobs, using a conda yaml file.
+# Get a handle to the workspace
+ml_client = MLClient(
+ credential=credential,
+ subscription_id="<SUBSCRIPTION_ID>",
+ resource_group_name="<RESOURCE_GROUP>",
+ workspace_name="<AML_WORKSPACE_NAME>",
+)
+```
-First, create a directory to store the file in.
-
-[!notebook-python[](~/azureml-examples-main/tutorials/azureml-in-a-day/azureml-in-a-day.ipynb?name=dependencies_dir)]
-
-Now, create the file in the dependencies directory. The cell below uses IPython magic to write the file into the directory you just created.
-
-[!notebook-python[](~/azureml-examples-main/tutorials/azureml-in-a-day/azureml-in-a-day.ipynb?name=write_model)]
+> [!NOTE]
+> Creating MLClient will not connect to the workspace. The client initialization is lazy, it will wait for the first time it needs to make a call (in this notebook, that will happen in the cell that creates the compute cluster).
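Because the client is lazy, you can optionally force an initial call to confirm that the handle works. The following is a minimal sketch, assuming `ml_client` was created as shown above:

```python
# Optional sanity check (assumption: ml_client was created as above). This forces
# the first service call, so it also verifies your credentials and workspace details.
ws = ml_client.workspaces.get(ml_client.workspace_name)
print(ws.name, ws.location, ws.resource_group)
```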
-The specification contains some usual packages, that you'll use in your job (numpy, pip).
+## Create training script
-Reference this *yaml* file to create and register this custom environment in your workspace:
+Let's start by creating the training script - the *main.py* Python file.
-[!notebook-python[](~/azureml-examples-main/tutorials/azureml-in-a-day/azureml-in-a-day.ipynb?name=custom_env_name)]
+First create a source folder for the script:
-## What is a command job?
-You'll create an Azure Machine Learning *command job* to train a model for credit default prediction. The command job is used to run a *training script* in a specified environment on a specified compute resource. You've already created the environment and the compute resource. Next you'll create the training script.
+```python
+import os
-The *training script* handles the data preparation, training and registering of the trained model. In this tutorial, you'll create a Python training script.
+train_src_dir = "./src"
+os.makedirs(train_src_dir, exist_ok=True)
+```
-Command jobs can be run from CLI, Python SDK, or studio interface. In this tutorial, you'll use the Azure Machine Learning Python SDK v2 to create and run the command job.
+This script handles the preprocessing of the data, splitting it into test and train data. It then uses this data to train a tree-based model and returns the output model.
-After running the training job, you'll deploy the model, then use it to produce a prediction.
+[MLFlow](how-to-log-mlflow-models.md) will be used to log the parameters and metrics during our pipeline run.
+The cell below uses IPython magic to write the training script into the directory you just created.
-## Create training script
-Let's start by creating the training script - the *main.py* Python file.
+```python
+%%writefile {train_src_dir}/main.py
+import os
+import argparse
+import pandas as pd
+import mlflow
+import mlflow.sklearn
+from sklearn.ensemble import GradientBoostingClassifier
+from sklearn.metrics import classification_report
+from sklearn.model_selection import train_test_split
+
+def main():
+ """Main function of the script."""
+
+ # input and output arguments
+ parser = argparse.ArgumentParser()
+ parser.add_argument("--data", type=str, help="path to input data")
+ parser.add_argument("--test_train_ratio", type=float, required=False, default=0.25)
+ parser.add_argument("--n_estimators", required=False, default=100, type=int)
+ parser.add_argument("--learning_rate", required=False, default=0.1, type=float)
+ parser.add_argument("--registered_model_name", type=str, help="model name")
+ args = parser.parse_args()
+
+ # Start Logging
+ mlflow.start_run()
+
+ # enable autologging
+ mlflow.sklearn.autolog()
+
+ ###################
+ #<prepare the data>
+ ###################
+ print(" ".join(f"{k}={v}" for k, v in vars(args).items()))
+
+ print("input data:", args.data)
+
+ credit_df = pd.read_csv(args.data, header=1, index_col=0)
+
+ mlflow.log_metric("num_samples", credit_df.shape[0])
+ mlflow.log_metric("num_features", credit_df.shape[1] - 1)
+
+ train_df, test_df = train_test_split(
+ credit_df,
+ test_size=args.test_train_ratio,
+ )
+ ####################
+ #</prepare the data>
+ ####################
+
+ ##################
+ #<train the model>
+ ##################
+ # Extracting the label column
+ y_train = train_df.pop("default payment next month")
+
+ # convert the dataframe values to array
+ X_train = train_df.values
+
+ # Extracting the label column
+ y_test = test_df.pop("default payment next month")
+
+ # convert the dataframe values to array
+ X_test = test_df.values
+
+ print(f"Training with data of shape {X_train.shape}")
+
+ clf = GradientBoostingClassifier(
+ n_estimators=args.n_estimators, learning_rate=args.learning_rate
+ )
+ clf.fit(X_train, y_train)
+
+ y_pred = clf.predict(X_test)
+
+ print(classification_report(y_test, y_pred))
+ ###################
+ #</train the model>
+ ###################
+
+ ##########################
+ #<save and register model>
+ ##########################
+ # Registering the model to the workspace
+ print("Registering the model via MLFlow")
+ mlflow.sklearn.log_model(
+ sk_model=clf,
+ registered_model_name=args.registered_model_name,
+ artifact_path=args.registered_model_name,
+ )
+
+ # Saving the model to a file
+ mlflow.sklearn.save_model(
+ sk_model=clf,
+ path=os.path.join(args.registered_model_name, "trained_model"),
+ )
+ ###########################
+ #</save and register model>
+ ###########################
+
+ # Stop Logging
+ mlflow.end_run()
+
+if __name__ == "__main__":
+ main()
+```
-First create a source folder for the script:
+As you can see in this script, once the model is trained, the model file is saved and registered to the workspace. Now you can use the registered model in inferencing endpoints.
-[!notebook-python[](~/azureml-examples-main/tutorials/azureml-in-a-day/azureml-in-a-day.ipynb?name=train_src_dir)]
+You might need to select **Refresh** to see the new folder and script in your **Files**.
-This script handles the preprocessing of the data, splitting it into test and train data. It then consumes this data to train a tree based model and return the output model.
-[MLFlow](https://mlflow.org/docs/latest/tracking.html) will be used to log the parameters and metrics during our pipeline run.
+## Create a compute cluster, a scalable way to run a training job
-The cell below uses IPython magic to write the training script into the directory you just created.
+You already have a compute instance, which you're using to run the notebook. Now you'll add a second type of compute, a **compute cluster** that you'll use to run your training job. While a compute instance is a single node machine, a compute cluster can be single or multi-node machines with Linux or Windows OS, or a specific compute fabric like Spark.
-[!notebook-python[](~/azureml-examples-main/tutorials/azureml-in-a-day/azureml-in-a-day.ipynb?name=write_main)]
+You'll provision a Linux compute cluster. See the [full list of VM sizes and prices](https://azure.microsoft.com/pricing/details/machine-learning/).
-As you can see in this script, once the model is trained, the model file is saved and registered to the workspace. Now you can use the registered model in inferencing endpoints.
+For this example, you only need a basic cluster, so you'll use a Standard_DS3_v2 VM with 2 vCPU cores and 7 GB of RAM.
++
+```python
+from azure.ai.ml.entities import AmlCompute
+
+# Name assigned to the compute cluster
+cpu_compute_target = "cpu-cluster"
+
+try:
+ # let's see if the compute target already exists
+ cpu_cluster = ml_client.compute.get(cpu_compute_target)
+ print(
+ f"You already have a cluster named {cpu_compute_target}, we'll reuse it as is."
+ )
+
+except Exception:
+ print("Creating a new cpu compute target...")
+
+ # Let's create the Azure Machine Learning compute object with the intended parameters
+ cpu_cluster = AmlCompute(
+ name=cpu_compute_target,
+ # Azure Machine Learning Compute is the on-demand VM service
+ type="amlcompute",
+ # VM Family
+ size="STANDARD_DS3_V2",
+ # Minimum running nodes when there is no job running
+ min_instances=0,
+ # Nodes in cluster
+ max_instances=4,
+        # How many seconds the node will keep running after the job terminates
+ idle_time_before_scale_down=180,
+ # Dedicated or LowPriority. The latter is cheaper but there is a chance of job termination
+ tier="Dedicated",
+ )
+ print(
+ f"AMLCompute with name {cpu_cluster.name} will be created, with compute size {cpu_cluster.size}"
+ )
+ # Now, we pass the object to MLClient's create_or_update method
+ cpu_cluster = ml_client.compute.begin_create_or_update(cpu_cluster)
+```
## Configure the command
-Now that you have a script that can perform the desired tasks, you'll use the general purpose **command** that can run command line actions. This command line action can be directly calling system commands or by running a script.
+Now that you have a script that can perform the desired tasks, and a compute cluster to run the script, you'll use a general purpose **command** that can run command line actions. This command line action can directly call system commands or run a script.
Here, you'll create input variables to specify the input data, split ratio, learning rate and registered model name. The command script will:
-* Use the compute created earlier to run this command.
-* Use the environment created earlier - you can use the `@latest` notation to indicate the latest version of the environment when the command is run.
-* Configure some metadata like display name, experiment name etc. An *experiment* is a container for all the iterations you do on a certain project. All the jobs submitted under the same experiment name would be listed next to each other in Azure Machine Learning studio.
+* Use the compute cluster to run the command.
+* Use an *environment* that defines software and runtime libraries needed for the training script. Azure Machine Learning provides many curated or ready-made environments, which are useful for common training and inference scenarios. You'll use one of those environments here. In the [Train a model](tutorial-train-model.md) tutorial, you'll learn how to create a custom environment.
* Configure the command line action itself - `python main.py` in this case. The inputs/outputs are accessible in the command via the `${{ ... }}` notation.--
-[!notebook-python[](~/azureml-examples-main/tutorials/azureml-in-a-day/azureml-in-a-day.ipynb?name=registered_model_name)]
+* In this sample, we access the data from a file on the internet.
++
+```python
+from azure.ai.ml import command
+from azure.ai.ml import Input
+
+registered_model_name = "credit_defaults_model"
+
+job = command(
+ inputs=dict(
+ data=Input(
+ type="uri_file",
+ path="https://azuremlexamples.blob.core.windows.net/datasets/credit_card/default_of_credit_card_clients.csv",
+ ),
+ test_train_ratio=0.2,
+ learning_rate=0.25,
+ registered_model_name=registered_model_name,
+ ),
+ code="./src/", # location of source code
+ command="python main.py --data ${{inputs.data}} --test_train_ratio ${{inputs.test_train_ratio}} --learning_rate ${{inputs.learning_rate}} --registered_model_name ${{inputs.registered_model_name}}",
+ environment="AzureML-sklearn-1.0-ubuntu20.04-py38-cpu@latest",
+ compute="cpu-cluster",
+ display_name="credit_default_prediction",
+)
+```
## Submit the job
-It's now time to submit the job to run in Azure Machine Learning. This time you'll use `create_or_update` on `ml_client.jobs`.
+It's now time to submit the job to run in Azure Machine Learning. This time you'll use `create_or_update` on `ml_client`.
+
-[!notebook-python[](~/azureml-examples-main/tutorials/azureml-in-a-day/azureml-in-a-day.ipynb?name=create_job)]
+```python
+ml_client.create_or_update(job)
+```
## View job output and wait for job completion
-View the job in Azure Machine Learning studio by selecting the link in the output of the previous cell.
+View the job in Azure Machine Learning studio by selecting the link in the output of the previous cell.
The output of this job will look like this in the Azure Machine Learning studio. Explore the tabs for various details like metrics, outputs etc. Once completed, the job will register a model in your workspace as a result of training.
-![Screenshot that shows the job overview](media/tutorial-azure-ml-in-a-day/view-job.gif "Overview of the job.")
> [!IMPORTANT] > Wait until the status of the job is complete before returning to this notebook to continue. The job will take 2 to 3 minutes to run. It could take longer (up to 10 minutes) if the compute cluster has been scaled down to zero nodes and custom environment is still building.
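If you'd rather wait in the notebook than in the studio, here's a minimal sketch; it assumes you capture the submitted job when you call `create_or_update` (for example, `returned_job = ml_client.create_or_update(job)`):

```python
# Minimal wait sketch (assumption: returned_job = ml_client.create_or_update(job)).
# Streams the job logs and blocks until the job reaches a terminal state.
ml_client.jobs.stream(returned_job.name)
print(ml_client.jobs.get(returned_job.name).status)
```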
The output of this job will look like this in the Azure Machine Learning studio.
Now deploy your machine learning model as a web service in the Azure cloud, an [`online endpoint`](concept-endpoints.md).
-To deploy a machine learning service, you usually need:
+To deploy a machine learning service, you'll use the model you registered.
-* The model assets (file, metadata) that you want to deploy. You've already registered these assets in your training job.
-* Some code to run as a service. The code executes the model on a given input request. This entry script receives data submitted to a deployed web service and passes it to the model, then returns the model's response to the client. The script is specific to your model. The entry script must understand the data that the model expects and returns. With an MLFlow model, as in this tutorial, this script is automatically created for you. Samples of scoring scripts can be found [here](https://github.com/Azure/azureml-examples/tree/sdk-preview/sdk/endpoints/online).
+## Create a new online endpoint
+Now that you have a registered model, it's time to create your online endpoint. The endpoint name needs to be unique in the entire Azure region. For this tutorial, you'll create a unique name using [`UUID`](https://en.wikipedia.org/wiki/Universally_unique_identifier).
-## Create a new online endpoint
-Now that you have a registered model and an inference script, it's time to create your online endpoint. The endpoint name needs to be unique in the entire Azure region. For this tutorial, you'll create a unique name using [`UUID`](https://en.wikipedia.org/wiki/Universally_unique_identifier).
+```python
+import uuid
+# Creating a unique name for the endpoint
+online_endpoint_name = "credit-endpoint-" + str(uuid.uuid4())[:8]
+```
-[!notebook-python[](~/azureml-examples-main/tutorials/azureml-in-a-day/azureml-in-a-day.ipynb?name=online_endpoint_name)]
+Create the endpoint:
++
+```python
+# Expect the endpoint creation to take a few minutes
+from azure.ai.ml.entities import (
+ ManagedOnlineEndpoint,
+ ManagedOnlineDeployment,
+ Model,
+ Environment,
+)
+
+# create an online endpoint
+endpoint = ManagedOnlineEndpoint(
+ name=online_endpoint_name,
+ description="this is an online endpoint",
+ auth_mode="key",
+ tags={
+ "training_dataset": "credit_defaults",
+ "model_type": "sklearn.GradientBoostingClassifier",
+ },
+)
+
+endpoint = ml_client.online_endpoints.begin_create_or_update(endpoint).result()
+
+print(f"Endpoint {endpoint.name} provisioning state: {endpoint.provisioning_state}")
+```
> [!NOTE]
-> Expect the endpoint creation to take approximately 6 to 8 minutes.
+> Expect the endpoint creation to take a few minutes.
+
+Once the endpoint has been created, you can retrieve it as follows:
-[!notebook-python[](~/azureml-examples-main/tutorials/azureml-in-a-day/azureml-in-a-day.ipynb?name=endpoint)]
-Once you've created an endpoint, you can retrieve it as below:
+```python
+endpoint = ml_client.online_endpoints.get(name=online_endpoint_name)
-[!notebook-python[](~/azureml-examples-main/tutorials/azureml-in-a-day/azureml-in-a-day.ipynb?name=retrieve_endpoint)]
+print(
+ f'Endpoint "{endpoint.name}" with provisioning state "{endpoint.provisioning_state}" is retrieved'
+)
+```
## Deploy the model to the endpoint

Once the endpoint is created, deploy the model with the entry script. Each endpoint can have multiple deployments. Direct traffic to these deployments can be specified using rules. Here you'll create a single deployment that handles 100% of the incoming traffic. We've chosen an arbitrary color name for the deployment, such as *blue*, *green*, or *red*.
-You can check the **Models** page on the Azure Machine Learning studio, to identify the latest version of your registered model. Alternatively, the code below will retrieve the latest version number for you to use.
+You can check the **Models** page in Azure Machine Learning studio to identify the latest version of your registered model. Alternatively, the code below will retrieve the latest version number for you to use.
-[!notebook-python[](~/azureml-examples-main/tutorials/azureml-in-a-day/azureml-in-a-day.ipynb?name=latest_model_version)]
+```python
+# Let's pick the latest version of the model
+latest_model_version = max(
+ [int(m.version) for m in ml_client.models.list(name=registered_model_name)]
+)
+print(f'Latest model is version "{latest_model_version}" ')
+```
Deploy the latest version of the model.
-> [!NOTE]
-> Expect this deployment to take approximately 6 to 8 minutes.
+```python
+# picking the model to deploy. Here we use the latest version of our registered model
+model = ml_client.models.get(name=registered_model_name, version=latest_model_version)
-[!notebook-python[](~/azureml-examples-main/tutorials/azureml-in-a-day/azureml-in-a-day.ipynb?name=blue_deployment)]
+# Expect this deployment to take approximately 6 to 8 minutes.
+# create an online deployment.
+blue_deployment = ManagedOnlineDeployment(
+ name="blue",
+ endpoint_name=online_endpoint_name,
+ model=model,
+ instance_type="Standard_DS3_v2",
+ instance_count=1,
+)
+blue_deployment = ml_client.begin_create_or_update(blue_deployment).result()
+```
-### Test with a sample query
+> [!NOTE]
+> Expect this deployment to take approximately 6 to 8 minutes.
-Now that the model is deployed to the endpoint, you can run inference with it.
+When the deployment is done, you're ready to test it.
-Create a sample request file following the design expected in the run method in the score script.
+### Test with a sample query
-[!notebook-python[](~/azureml-examples-main/tutorials/azureml-in-a-day/azureml-in-a-day.ipynb?name=deploy_dir)]
+Once the model is deployed to the endpoint, you can run inference with it.
+Create a sample request file following the design expected in the run method in the score script.
-[!notebook-python[](~/azureml-examples-main/tutorials/azureml-in-a-day/azureml-in-a-day.ipynb?name=write_sample)]
-[!notebook-python[](~/azureml-examples-main/tutorials/azureml-in-a-day/azureml-in-a-day.ipynb?name=test)]
+```python
+import os
+
+deploy_dir = "./deploy"
+os.makedirs(deploy_dir, exist_ok=True)
+```
++
+```python
+%%writefile {deploy_dir}/sample-request.json
+{
+ "input_data": {
+ "columns": [0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22],
+ "index": [0, 1],
+ "data": [
+ [20000,2,2,1,24,2,2,-1,-1,-2,-2,3913,3102,689,0,0,0,0,689,0,0,0,0],
+ [10, 9, 8, 7, 6, 5, 4, 3, 2, 1, 10, 9, 8, 7, 6, 5, 4, 3, 2, 1, 10, 9, 8]
+ ]
+ }
+}
+```
++
+```python
+# test the blue deployment with some sample data
+ml_client.online_endpoints.invoke(
+ endpoint_name=online_endpoint_name,
+ request_file="./deploy/sample-request.json",
+ deployment_name="blue",
+)
+```
## Clean up resources

If you're not going to use the endpoint, delete it to stop using the resource. Make sure no other deployments are using the endpoint before you delete it.

> [!NOTE]
-> Expect this step to take approximately 6 to 8 minutes.
+> Expect the complete deletion to take approximately 20 minutes.
-[!notebook-python[](~/azureml-examples-main/tutorials/azureml-in-a-day/azureml-in-a-day.ipynb?name=delete_endpoint)]
+```python
+ml_client.online_endpoints.begin_delete(name=online_endpoint_name)
+```
-### Delete everything
+<!-- nbend -->
-Use these steps to delete your Azure Machine Learning workspace and all compute resources.
+### Stop compute instance
+
+If you're not going to use it now, stop the compute instance:
+
+1. In the studio, in the left navigation area, select **Compute**.
1. In the top tabs, select **Compute instances**.
+1. Select the compute instance in the list.
+1. On the top toolbar, select **Stop**.
+### Delete all resources
+ ## Next steps
-+ Convert this tutorial into a production ready [pipeline with reusable components](tutorial-pipeline-python-sdk.md).
-+ Learn about all of the [deployment options](how-to-deploy-online-endpoints.md) for Azure Machine Learning.
-+ Learn how to [authenticate to the deployed model](how-to-authenticate-online-endpoint.md).
+Now that you have an idea of what's involved in training and deploying a model, learn more about the process in these tutorials:
+
+|Tutorial |Description |
+|||
+| [Upload, access and explore your data in Azure Machine Learning](tutorial-explore-data.md) | Store large data in the cloud and retrieve it from notebooks and scripts |
+| [Model development on a cloud workstation](tutorial-cloud-workstation.md) | Start prototyping and developing machine learning models |
+| [Train a model in Azure Machine Learning](tutorial-train-model.md) | Dive into the details of training a model |
+| [Deploy a model as an online endpoint](tutorial-deploy-model.md) | Dive into the details of deploying a model |
+| [Create production machine learning pipelines](tutorial-pipeline-python-sdk.md) | Split a complete machine learning task into a multistep workflow. |
machine-learning Tutorial Cloud Workstation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/tutorial-cloud-workstation.md
+
+ Title: "Tutorial: Model development on a cloud workstation"
+
+description: Learn how to get started prototyping and developing machine learning models on an Azure Machine Learning cloud workstation.
+++++++ Last updated : 03/15/2023
+#Customer intent: As a data scientist, I want to know how to prototype and develop machine learning models on a cloud workstation.
++
+# Tutorial: Model development on a cloud workstation
+
+Learn how to develop a training script with a notebook on an Azure Machine Learning cloud workstation. This tutorial covers the basics you need to get started:
+
+> [!div class="checklist"]
> * Set up and configure the cloud workstation. Your cloud workstation is powered by an Azure Machine Learning compute instance, which is pre-configured with environments to support your various model development needs.
+> * Use cloud-based development environments.
+> * Use MLflow to track your model metrics, all from within a notebook.
+
+## Prerequisites
++
+## Start with Notebooks
+
+The Notebooks section in your workspace is a good place to start learning about Azure Machine Learning and its capabilities. Here you can connect to compute resources, work with a terminal, and edit and run Jupyter Notebooks and scripts.
+
+1. Sign in to [Azure Machine Learning studio](https://ml.azure.com).
+1. Select your workspace if it isn't already open.
+1. On the left navigation, select **Notebooks**.
+1. If you don't have a compute instance, you'll see **Create compute** in the middle of the screen. Select **Create compute** and fill out the form. You can use all the defaults. (If you already have a compute instance, you'll instead see **Terminal** in that spot. You'll use **Terminal** later in this tutorial.)
+
+ :::image type="content" source="media/tutorial-cloud-workstation/create-compute.png" alt-text="Screenshot shows how to create a compute instance.":::
+
+## Set up a new environment for prototyping (optional)
+
+In order for your script to run, you need to be working in an environment configured with the dependencies and libraries the code expects. This section helps you create an environment tailored to your code. To create the new Jupyter kernel your notebook connects to, you'll use a YAML file that defines the dependencies.
+
+* **Upload a file.**
+
+ Files you upload are stored in an Azure file share, and these files are mounted to each compute instance and shared within the workspace.
+
+ 1. Download this conda environment file, [*workstation_env.yml*](https://azuremlexampledata.blob.core.windows.net/datasets/workstation_env.yml) to your computer.
+ 1. Select **Add files**, then select **Upload files** to upload it to your workspace.
+
+ :::image type="content" source="media/tutorial-cloud-workstation/upload-files.png" alt-text="Screenshot shows how to upload files to your workspace.":::
+
+ 1. Select **Browse and select file(s)**.
+ 1. Select the **workstation_env.yml** file you downloaded.
+ 1. Select **Upload**.
+
+ You'll see the *workstation_env.yml* file under your username folder in the **Files** tab. Select this file to preview it, and see what dependencies it specifies.
+
+ :::image type="content" source="media/tutorial-cloud-workstation/view-yml.png" alt-text="Screenshot shows the yml file that you uploaded.":::
++
+* **Create a kernel.**
+
+ Now use the Azure Machine Learning terminal to create a new Jupyter kernel, based on the *workstation_env.yml* file.
+
+ 1. Select **Terminal** to open a terminal window. You can also open the terminal from the left command bar:
+
+ :::image type="content" source="media/tutorial-cloud-workstation/open-terminal.png" alt-text="Screenshot shows open terminal tool in notebook toolbar.":::
+
+ 1. If the compute instance is stopped, select **Start compute** and wait until it's running.
+
+ :::image type="content" source="media/tutorial-azure-ml-in-a-day/start-compute.png" alt-text="Screenshot shows how to start compute if it's stopped." lightbox="media/tutorial-azure-ml-in-a-day/start-compute.png":::
+
+ 1. Once the compute is running, you see a welcome message in the terminal, and you can start typing commands.
+ 1. View your current conda environments. The active environment is marked with a *.
+
+ ```bash
+ conda env list
+ ```
+
+ 1. If you created a subfolder for this tutorial, `cd` to that folder now.
+
+ 1. Create the environment based on the conda file provided. It takes a few minutes to build this environment.
+
+ ```bash
+ conda env create -f workstation_env.yml
+
+ ```
+
+ 1. Activate the new environment.
+
+ ```bash
+ conda activate workstation_env
+ ```
+
+ 1. Validate the correct environment is active, again looking for the environment marked with a *.
+
+ ```bash
+ conda env list
+ ```
+
+ 1. Create a new Jupyter kernel based on your active environment.
+
+ ```bash
+ python -m ipykernel install --user --name workstation_env --display-name "Tutorial Workstation Env"
+ ```
+
+ 1. Close the terminal window.
+
+You now have a new kernel. Next you'll open a notebook and use this kernel.
+
+## Create a notebook
+
+1. Select **Add files**, and choose **Create new file**.
+
+ :::image type="content" source="media/tutorial-cloud-workstation/create-new-file.png" alt-text="Screenshot: Create new file.":::
+
+1. Name your new notebook **develop-tutorial.ipynb** (or enter your preferred name).
+
+1. If the compute instance is stopped, select **Start compute** and wait until it's running.
+
+ :::image type="content" source="media/tutorial-azure-ml-in-a-day/start-compute.png" alt-text="Screenshot shows how to start compute if it's stopped." lightbox="media/tutorial-azure-ml-in-a-day/start-compute.png":::
+
+1. You'll see the notebook is connected to the default kernel in the top right. Switch to use the **Tutorial Workstation Env** kernel.
+
+## Develop a training script
+
+In this section, you develop a Python training script that predicts credit card default payments, using the prepared test and training datasets from the [UCI dataset](https://archive.ics.uci.edu/ml/datasets/default+of+credit+card+clients).
+
+This code uses `sklearn` for training and MLflow for logging the metrics.
+
+1. Start with code that imports the packages and libraries you'll use in the training script.
+
+ [!notebook-python[] (~/azureml-examples-main/tutorials/get-started-notebooks/cloud-workstation.ipynb?name=import)]
+
+1. Next, load and process the data for this experiment. In this tutorial, you read the data from a file on the internet.
+
+ [!notebook-python[] (~/azureml-examples-main/tutorials/get-started-notebooks/cloud-workstation.ipynb?name=load)]
+
+1. Get the data ready for training:
+
+ [!notebook-python[] (~/azureml-examples-main/tutorials/get-started-notebooks/cloud-workstation.ipynb?name=extract)]
+
+1. Add code to start autologging with `MLflow`, so that you can track the metrics and results. With the iterative nature of model development, `MLflow` helps you log model parameters and results. Refer back to those runs to compare and understand how your model performs. The logs also provide context for when you're ready to move from the development phase to the training phase of your workflows within Azure Machine Learning.
+
+ [!notebook-python[] (~/azureml-examples-main/tutorials/get-started-notebooks/cloud-workstation.ipynb?name=mlflow)]
+
+1. Train a model. (An illustrative, consolidated sketch of these steps follows this list.)
+
+ [!notebook-python[] (~/azureml-examples-main/tutorials/get-started-notebooks/cloud-workstation.ipynb?name=gbt)]
+
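As a rough, consolidated sketch of the steps above (the preprocessing details and hyperparameters here are illustrative assumptions; the referenced sample notebook is the authoritative version):

```python
# Illustrative sketch of the training steps above; the sample notebook is the
# authoritative version.
import mlflow
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

# Load the credit card default data from the public URL (the file has an extra
# header row, so use the second row as the header and the ID column as the index).
df = pd.read_csv(
    "https://azuremlexamples.blob.core.windows.net/datasets/credit_card/default_of_credit_card_clients.csv",
    header=1,
    index_col=0,
)

# Separate the label from the features and create train/test splits.
y = df["default payment next month"]
X = df.drop(columns=["default payment next month"])
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25)

# Track this work under a named experiment and autolog sklearn parameters and metrics.
mlflow.set_experiment("Develop on cloud tutorial")
mlflow.sklearn.autolog()

with mlflow.start_run():
    clf = GradientBoostingClassifier(n_estimators=100, learning_rate=0.1)
    clf.fit(X_train, y_train)
    print(classification_report(y_test, clf.predict(X_test)))
```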
+## Iterate
+
+Now that you have model results, you may want to change something and try again. For example, try a different classifier technique:
+
+[!notebook-python[] (~/azureml-examples-main/tutorials/get-started-notebooks/cloud-workstation.ipynb?name=ada)]
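Continuing the hedged sketch above, a second run with a different classifier might look like the following. `AdaBoostClassifier` is an assumption based on the sample's cell name; any other scikit-learn classifier works the same way:

```python
# Illustrative sketch: train a second model in its own MLflow run for comparison.
from sklearn.ensemble import AdaBoostClassifier

with mlflow.start_run():
    ada = AdaBoostClassifier(n_estimators=100, learning_rate=0.5)
    ada.fit(X_train, y_train)
    print(classification_report(y_test, ada.predict(X_test)))
```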
+
+## Examine results
+
+Now that you've tried two different models, use the results tracked by `MLflow` to decide which model is better. You can reference metrics like accuracy, or other indicators that matter most for your scenarios. You can dive into these results in more detail by looking at the jobs created by `MLflow`.
+
+1. On the left navigation, select **Jobs**.
+
+ :::image type="content" source="media/tutorial-cloud-workstation/jobs.png" alt-text="Screenshot shows how to select Jobs in the navigation.":::
+
+1. Select the link for **Develop on cloud tutorial**.
+1. There are two different jobs shown, one for each of the models you tried. These names are autogenerated. As you hover over a name, use the pencil tool next to the name if you want to rename it.
+1. Select the link for the first job. The name appears at the top. You can also rename it here with the pencil tool.
+1. The page shows details of the job, such as properties, outputs, tags, and parameters. Under **Tags**, you'll see the estimator_name, which describes the type of model.
+
+1. Select the **Metrics** tab to view the metrics that were logged by `MLflow`. (Expect your results to differ, as you have a different training set.)
+
+ :::image type="content" source="media/tutorial-cloud-workstation/metrics.png" alt-text="Screenshot shows metrics for a job.":::
+
+1. Select the **Images** tab to view the images generated by `MLflow`.
+
+ :::image type="content" source="media/tutorial-cloud-workstation/images.png" alt-text="Screenshot shows images for a job.":::
+
+1. Go back and review the metrics and images for the other model.
+
+## Create a Python script
+
+Now create a Python script from your notebook for model training.
+
+1. On the notebook toolbar, select the menu.
+1. Select **Export as > Python**.
+
+ :::image type="content" source="media/tutorial-cloud-workstation/export-python-file.png" alt-text="Screenshot shows exporting a Python file from the notebook.":::
+
+1. Name the file **train.py**.
+1. Look through this file and delete the code you don't want in the training script. For example, keep the code for the model you wish to use, and delete code for the model you don't want.
+ * Make sure you keep the code that starts autologging (`mlflow.sklearn.autolog()`).
+ * You may wish to delete the autogenerated comments and add in more of your own comments.
+ * When you run the Python script interactively (in a terminal or notebook), you can keep the line that defines the experiment name (`mlflow.set_experiment("Develop on cloud tutorial")`). Or even give it a different name to see it as a different entry in the **Jobs** section. But when you prepare the script for a training job, that line won't work and should be omitted - the job definition includes the experiment name.
+ * When you train a single model, the lines to start and end a run (`mlflow.start_run()` and `mlflow.end_run()`) are also not necessary (they'll have no effect), but can be left in if you wish.
+
+1. When you're finished with your edits, save the file.
+
+You now have a Python script to use for training your preferred model.
+
+## Run the Python script
+
+For now, you're running this code on your compute instance, which is your Azure Machine Learning development environment. [Tutorial: Train a model](tutorial-train-model.md) shows you how to run a training script in a more scalable way on more powerful compute resources.
+
+1. On the left, select **Open terminal** to open a terminal window.
+
+ :::image type="content" source="media/tutorial-cloud-workstation/open-terminal.png" alt-text="Screenshot shows how to open a terminal window.":::
+
+1. View your current conda environments. The active environment is marked with a *.
+
+ ```bash
+ conda env list
+ ```
+
+1. Activate your kernel:
+
+ ```bash
+ conda activate workstation_env
+ ```
+
+1. If you created a subfolder for this tutorial, `cd` to that folder now.
+1. Run your training script.
+
+ ```bash
+ python train.py
+ ```
+
+## Examine script results
+
+Go back to **Jobs** to see the results of your training script. Keep in mind that the training data changes with each split, so the results differ between runs as well.
+
+## Clean up resources
+
+If you plan to continue now to other tutorials, skip to [Next steps](#next-steps).
+
+### Stop compute instance
+
+If you're not going to use it now, stop the compute instance:
+
+1. In the studio, in the left navigation area, select **Compute**.
+1. In the top tabs, select **Compute instances**.
+1. Select the compute instance in the list.
+1. On the top toolbar, select **Stop**.
+
+### Delete all resources
++
+## Next steps
+
+Learn more about:
+
+* [From artifacts to models in MLflow](concept-mlflow-models.md)
+* [Using Git with Azure Machine Learning](concept-train-model-git-integration.md)
+* [Running Jupyter notebooks in your workspace](how-to-run-jupyter-notebooks.md)
+* [Working with a compute instance terminal in your workspace](how-to-access-terminal.md)
+* [Manage notebook and terminal sessions](how-to-manage-compute-sessions.md)
+
+This tutorial showed you the early steps of creating a model, prototyping on the same machine where the code resides. For your production training, learn how to use that training script on more powerful remote compute resources:
+
+> [!div class="nextstepaction"]
+> [Train a model](tutorial-train-model.md)
+>
machine-learning Tutorial Create Secure Workspace Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/tutorial-create-secure-workspace-template.md
Title: Use a template to create a secure workspace
+ Title: "Use a template to create a secure workspace"
description: Use a template to create an Azure Machine Learning workspace and required Azure services inside a secure virtual network.
Last updated 12/02/2021
monikerRange: 'azureml-api-2 || azureml-api-1'
-# How to create a secure workspace by using template
+# Tutorial: How to create a secure workspace by using a template
Templates provide a convenient way to create reproducible service deployments. The template defines what will be created, with some information provided by you when you use the template. For example, you provide a unique name for the Azure Machine Learning workspace.
machine-learning Tutorial Create Secure Workspace https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/tutorial-create-secure-workspace.md
monikerRange: 'azureml-api-2 || azureml-api-1'
-# How to create a secure workspace
+# Tutorial: How to create a secure workspace
In this article, learn how to create and connect to a secure Azure Machine Learning workspace. A secure workspace uses Azure Virtual Network to create a security boundary around resources used by Azure Machine Learning.
machine-learning Tutorial Deploy Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/tutorial-deploy-model.md
+
+ Title: "Tutorial: Deploy a model"
+
+description: This tutorial covers how to deploy a model to production using Azure Machine Learning Python SDK v2.
+++++++ Last updated : 03/15/2023+
+#Customer intent: This tutorial is intended to show users what is needed for deployment and present a high-level overview of how Azure Machine Learning handles deployment. Deployment isn't typically done by a data scientist, so the tutorial won't use Azure CLI examples. We will link to existing articles that use Azure CLI as needed. The code in the tutorial will use SDK v2. The tutorial will continue where the "Create reusable pipelines" tutorial stops.
++
+# Deploy a model as an online endpoint
+
+Learn to deploy a model to an online endpoint, using Azure Machine Learning Python SDK v2.
+
+In this tutorial, we use a model trained to predict the likelihood of defaulting on a credit card payment. The goal is to deploy this model and show its use.
+
+The steps you'll take are:
+
+> [!div class="checklist"]
+> * Register your model
+> * Create an endpoint and a first deployment
+> * Deploy a trial run
+> * Manually send test data to the deployment
+> * Get details of the deployment
+> * Create a second deployment
+> * Manually scale the second deployment
+> * Update allocation of production traffic between both deployments
+> * Get details of the second deployment
+> * Roll out the new deployment and delete the first one
+
+## Prerequisites
+
+1. [!INCLUDE [workspace](includes/prereq-workspace.md)]
+
+1. [!INCLUDE [sign in](includes/prereq-sign-in.md)]
+
+1. [!INCLUDE [open or create notebook](includes/prereq-open-or-create.md)]
+ * [!INCLUDE [new notebook](includes/prereq-new-notebook.md)]
+ * Or, open **tutorials/get-started-notebooks/deploy-model.ipynb** from the **Samples** section of studio. [!INCLUDE [clone notebook](includes/prereq-clone-notebook.md)]
+
+1. View your VM quota and ensure you have enough quota available to create online deployments. In this tutorial, you will need at least 8 cores of `STANDARD_DS3_v2` and 12 cores of `STANDARD_F4s_v2`. To view your VM quota usage and request quota increases, see [Manage resource quotas](how-to-manage-quotas.md#view-your-usage-and-quotas-in-the-azure-portal).
++
+<!-- nbstart https://raw.githubusercontent.com/Azure/azureml-examples/main/tutorials/get-started-notebooks/deploy-model.ipynb -->
++
+## Create handle to workspace
+
+Before we dive into the code, you need a way to reference your workspace. You'll create `ml_client` for a handle to the workspace. You'll then use `ml_client` to manage resources and jobs.
+
+In the next cell, enter your Subscription ID, Resource Group name and Workspace name. To find these values:
+
+1. In the upper right Azure Machine Learning studio toolbar, select your workspace name.
+1. Copy the value for workspace, resource group and subscription ID into the code.
+1. You'll need to copy one value at a time: close the area, paste the value, then come back for the next one.
++
+```python
+from azure.ai.ml import MLClient
+from azure.identity import DefaultAzureCredential
+
+# authenticate
+credential = DefaultAzureCredential()
+
+# Get a handle to the workspace
+ml_client = MLClient(
+ credential=credential,
+ subscription_id="<SUBSCRIPTION_ID>",
+ resource_group_name="<RESOURCE_GROUP>",
+ workspace_name="<AML_WORKSPACE_NAME>",
+)
+```
+
+> [!NOTE]
+> Creating `MLClient` will not connect to the workspace. The client initialization is lazy and will wait for the first time it needs to make a call (this will happen in the next code cell).
++
+## Register the model
+
+If you already completed the earlier training tutorial, [Train a model](tutorial-train-model.md), you've registered an MLflow model as part of the training script and can skip to the next section.
+
+If you didn't complete the training tutorial, you'll need to register the model. Registering your model before deployment is a recommended best practice.
+
+In this example, we specify the `path` (where to upload files from) inline. If you [cloned the tutorials folder](quickstart-create-resources.md#learn-from-sample-notebooks), then run the following code as-is. Otherwise, [download the files and metadata for the model to deploy](https://azuremlexampledata.blob.core.windows.net/datasets/credit_defaults_model.zip). Update the path to the location on your local computer where you've unzipped the model's files.
+
+The SDK automatically uploads the files and registers the model.
+
+For more information on registering your model as an asset, see [Register your model as an asset in Machine Learning by using the SDK](how-to-manage-models.md#register-your-model-as-an-asset-in-machine-learning-by-using-the-sdk).
++
+```python
+# Import the necessary libraries
+from azure.ai.ml.entities import Model
+from azure.ai.ml.constants import AssetTypes
+
+# Provide the model details, including the
+# path to the model files, if you've stored them locally.
+mlflow_model = Model(
+ path="./deploy/credit_defaults_model/",
+ type=AssetTypes.MLFLOW_MODEL,
+ name="credit_defaults_model",
+ description="MLflow Model created from local files.",
+)
+
+# Register the model
+ml_client.models.create_or_update(mlflow_model)
+```
+
+## Confirm that the model is registered
+
+You can check the **Models** page in [Azure Machine Learning studio](https://ml.azure.com/) to identify the latest version of your registered model.
++
+Alternatively, the code below will retrieve the latest version number for you to use.
++
+```python
+registered_model_name = "credit_defaults_model"
+
+# Let's pick the latest version of the model
+latest_model_version = max(
+ [int(m.version) for m in ml_client.models.list(name=registered_model_name)]
+)
+
+print(latest_model_version)
+```
+
+Now that you have a registered model, you can create an endpoint and deployment. The next section will briefly cover some key details about these topics.
+
+## Endpoints and deployments
+
+After you train a machine learning model, you need to deploy it so that others can use it for inferencing. For this purpose, Azure Machine Learning allows you to create **endpoints** and add **deployments** to them.
+
+An **endpoint**, in this context, is an HTTPS path that provides an interface for clients to send requests (input data) to a trained model and receive the inferencing (scoring) results back from the model. An endpoint provides:
+
+- Authentication using key-based or token-based auth
+- [TLS(SSL)](https://simple.wikipedia.org/wiki/Transport_Layer_Security) termination
+- A stable scoring URI (endpoint-name.region.inference.ml.azure.com)
++
+A **deployment** is a set of resources required for hosting the model that does the actual inferencing.
+
+A single endpoint can contain multiple deployments. Endpoints and deployments are independent Azure Resource Manager resources that appear in the Azure portal.
+
+Azure Machine Learning allows you to implement [online endpoints](concept-endpoints.md#what-are-online-endpoints) for real-time inferencing on client data, and [batch endpoints](concept-endpoints.md#what-are-batch-endpoints) for inferencing on large volumes of data over a period of time.
+
+In this tutorial, we'll walk you through the steps of implementing a _managed online endpoint_. Managed online endpoints work with powerful CPU and GPU machines in Azure in a scalable, fully managed way that frees you from the overhead of setting up and managing the underlying deployment infrastructure.
+
+## Create an online endpoint
+
+Now that you have a registered model, it's time to create your online endpoint. The endpoint name needs to be unique in the entire Azure region. For this tutorial, you'll create a unique name using a universally unique identifier [`UUID`](https://en.wikipedia.org/wiki/Universally_unique_identifier). For more information on the endpoint naming rules, see [managed online endpoint limits](how-to-manage-quotas.md#azure-machine-learning-managed-online-endpoints).
++
+```python
+import uuid
+
+# Create a unique name for the endpoint
+online_endpoint_name = "credit-endpoint-" + str(uuid.uuid4())[:8]
+```
+
+First, we'll define the endpoint, using the `ManagedOnlineEndpoint` class.
+++
+> [!TIP]
+> * `auth_mode` : Use `key` for key-based authentication. Use `aml_token` for Azure Machine Learning token-based authentication. A `key` doesn't expire, but `aml_token` does expire. For more information on authenticating, see [Authenticate to an online endpoint](how-to-authenticate-online-endpoint.md).
+>
+> * Optionally, you can add a description and tags to your endpoint.
++
+```python
+from azure.ai.ml.entities import ManagedOnlineEndpoint
+
+# define an online endpoint
+endpoint = ManagedOnlineEndpoint(
+ name=online_endpoint_name,
+ description="this is an online endpoint",
+ auth_mode="key",
+ tags={
+ "training_dataset": "credit_defaults",
+ },
+)
+```
+
+Using the `MLClient` created earlier, we'll now create the endpoint in the workspace. This command will start the endpoint creation and return a confirmation response while the endpoint creation continues.
+
+> [!NOTE]
+> Expect the endpoint creation to take approximately 2 minutes.
++
+```python
+# create the online endpoint
+# expect the endpoint to take approximately 2 minutes.
+
+endpoint = ml_client.online_endpoints.begin_create_or_update(endpoint).result()
+```
+
+Once you've created the endpoint, you can retrieve it as follows:
++
+```python
+endpoint = ml_client.online_endpoints.get(name=online_endpoint_name)
+
+print(
+ f'Endpoint "{endpoint.name}" with provisioning state "{endpoint.provisioning_state}" is retrieved'
+)
+```
+
+## Understanding online deployments
+
+The key aspects of a deployment include:
+
+- `name` - Name of the deployment.
+- `endpoint_name` - Name of the endpoint that will contain the deployment.
+- `model` - The model to use for the deployment. This value can be either a reference to an existing versioned model in the workspace or an inline model specification.
+- `environment` - The environment to use for the deployment (or to run the model). This value can be either a reference to an existing versioned environment in the workspace or an inline environment specification. The environment can be a Docker image with Conda dependencies or a Dockerfile.
+- `code_configuration` - the configuration for the source code and scoring script.
+ - `path`- Path to the source code directory for scoring the model.
+ - `scoring_script` - Relative path to the scoring file in the source code directory. This script executes the model on a given input request. For an example of a scoring script, see [Understand the scoring script](how-to-deploy-online-endpoints.md#understand-the-scoring-script) in the "Deploy an ML model with an online endpoint" article.
+- `instance_type` - The VM size to use for the deployment. For the list of supported sizes, see [Managed online endpoints SKU list](reference-managed-online-endpoints-vm-sku-list.md).
+- `instance_count` - The number of instances to use for the deployment.
+
+### Deployment using an MLflow model
+
+Azure Machine Learning supports no-code deployment of a model created and logged with MLflow. This means that you don't have to provide a scoring script or an environment during model deployment, as the scoring script and environment are automatically generated when training an MLflow model. If you were using a custom model, though, you'd have to specify the environment and scoring script during deployment.
+
+> [!IMPORTANT]
+> If you typically deploy models using scoring scripts and custom environments and want to achieve the same functionality using MLflow models, we recommend reading [Using MLflow models for no-code deployment](how-to-deploy-mlflow-models.md).
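For contrast, here's a hedged sketch of what a deployment for a custom (non-MLflow) model might look like, with an explicit environment and scoring script. The file paths, environment name, and base image below are illustrative assumptions, not part of this tutorial:

```python
from azure.ai.ml.entities import (
    CodeConfiguration,
    Environment,
    ManagedOnlineDeployment,
)

# Illustrative only: a custom model needs an environment and a scoring script.
custom_env = Environment(
    name="credit-scoring-env",  # assumed name
    conda_file="./environment/conda.yaml",  # assumed path to a conda file
    image="mcr.microsoft.com/azureml/openmpi4.1.0-ubuntu20.04:latest",
)

custom_deployment = ManagedOnlineDeployment(
    name="custom-blue",
    endpoint_name=online_endpoint_name,
    model=model,
    environment=custom_env,
    code_configuration=CodeConfiguration(
        code="./src",  # assumed source directory
        scoring_script="score.py",  # assumed scoring script
    ),
    instance_type="Standard_DS3_v2",
    instance_count=1,
)
```

With an MLflow model, as in this tutorial, you skip the `environment` and `code_configuration` arguments entirely.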
+
+## Deploy the model to the endpoint
+
+You'll begin by creating a single deployment that handles 100% of the incoming traffic. We've chosen an arbitrary color name (*blue*) for the deployment. To create the deployment for our endpoint, we'll use the `ManagedOnlineDeployment` class.
+
+> [!NOTE]
+> No need to specify an environment or scoring script as the model to deploy is an MLflow model.
++
+```python
+from azure.ai.ml.entities import ManagedOnlineDeployment
+
+# Choose the latest version of our registered model for deployment
+model = ml_client.models.get(name=registered_model_name, version=latest_model_version)
+
+# define an online deployment
+blue_deployment = ManagedOnlineDeployment(
+ name="blue",
+ endpoint_name=online_endpoint_name,
+ model=model,
+ instance_type="Standard_DS3_v2",
+ instance_count=1,
+)
+```
+
+Using the `MLClient` created earlier, we'll now create the deployment in the workspace. This command will start the deployment creation and return a confirmation response while the deployment creation continues.
++
+```python
+# create the online deployment
+blue_deployment = ml_client.online_deployments.begin_create_or_update(
+ blue_deployment
+).result()
+
+# blue deployment takes 100% traffic
+# expect the deployment to take approximately 8 to 10 minutes.
+endpoint.traffic = {"blue": 100}
+ml_client.online_endpoints.begin_create_or_update(endpoint).result()
+```
+
+## Check the status of the endpoint
+You can check the status of the endpoint to see whether the model was deployed without error:
++
+```python
+# return an object that contains metadata for the endpoint
+endpoint = ml_client.online_endpoints.get(name=online_endpoint_name)
+
+# print a selection of the endpoint's metadata
+print(
+ f"Name: {endpoint.name}\nStatus: {endpoint.provisioning_state}\nDescription: {endpoint.description}"
+)
+```
++
+```python
+# existing traffic details
+print(endpoint.traffic)
+
+# Get the scoring URI
+print(endpoint.scoring_uri)
+```
+
+## Test the endpoint with sample data
+
+Now that the model is deployed to the endpoint, you can run inference with it. Let's create a sample request file following the design expected in the run method in the scoring script.
++
+```python
+import os
+
+# Create a directory to store the sample request file.
+deploy_dir = "./deploy"
+os.makedirs(deploy_dir, exist_ok=True)
+```
+
+Now, create the file in the deploy directory. The cell below uses IPython magic to write the file into the directory you just created.
++
+```python
+%%writefile {deploy_dir}/sample-request.json
+{
+ "input_data": {
+ "columns": [0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22],
+ "index": [0, 1],
+ "data": [
+ [20000,2,2,1,24,2,2,-1,-1,-2,-2,3913,3102,689,0,0,0,0,689,0,0,0,0],
+ [10, 9, 8, 7, 6, 5, 4, 3, 2, 1, 10, 9, 8, 7, 6, 5, 4, 3, 2, 1, 10, 9, 8]
+ ]
+ }
+}
+```
+
+Using the `MLClient` created earlier, we'll get a handle to the endpoint. The endpoint can be invoked using the `invoke` command with the following parameters:
+
+* `endpoint_name` - Name of the endpoint
+* `request_file` - File with request data
+* `deployment_name` - Name of the specific deployment to test in an endpoint
+
+We'll test the blue deployment with the sample data.
++
+```python
+# test the blue deployment with the sample data
+ml_client.online_endpoints.invoke(
+ endpoint_name=online_endpoint_name,
+ deployment_name="blue",
+ request_file="./deploy/sample-request.json",
+)
+```
+
+## Get logs of the deployment
+Check the logs to see whether the endpoint/deployment were invoked successfully.
+If you face errors, see [Troubleshooting online endpoints deployment](how-to-troubleshoot-online-endpoints.md).
++
+```python
+logs = ml_client.online_deployments.get_logs(
+ name="blue", endpoint_name=online_endpoint_name, lines=50
+)
+print(logs)
+```
+
+## Create a second deployment
+Deploy the model as a second deployment called `green`. In practice, you can create several deployments and compare their performance. These deployments could use a different version of the same model, a completely different model, or a more powerful compute instance. In our example, you'll deploy the same model version using a more powerful compute instance that could potentially improve performance.
++
+```python
+# picking the model to deploy. Here we use the latest version of our registered model
+model = ml_client.models.get(name=registered_model_name, version=latest_model_version)
+
+# define an online deployment using a more powerful instance type
+green_deployment = ManagedOnlineDeployment(
+ name="green",
+ endpoint_name=online_endpoint_name,
+ model=model,
+ instance_type="Standard_F4s_v2",
+ instance_count=1,
+)
+
+# create the online deployment
+# expect the deployment to take approximately 8 to 10 minutes
+green_deployment = ml_client.online_deployments.begin_create_or_update(
+ green_deployment
+).result()
+```
+
+## Scale deployment to handle more traffic
+
+Using the `MLClient` created earlier, we'll get a handle to the `green` deployment. The deployment can be scaled by increasing or decreasing the `instance_count`.
+
+In the following code, you'll increase the VM instance manually. However, note that it is also possible to autoscale online endpoints. Autoscale automatically runs the right amount of resources to handle the load on your application. Managed online endpoints support autoscaling through integration with the Azure monitor autoscale feature. To configure autoscaling, see [autoscale online endpoints](how-to-autoscale-endpoints.md).
++
+```python
+# update definition of the deployment
+green_deployment.instance_count = 2
+
+# update the deployment
+# expect the deployment to take approximately 8 to 10 minutes
+ml_client.online_deployments.begin_create_or_update(green_deployment).result()
+```
+
+## Update traffic allocation for deployments
+You can split production traffic between deployments. You may first want to test the `green` deployment with sample data, just like you did for the `blue` deployment. Once you've tested your green deployment, allocate a small percentage of traffic to it.
++
+```python
+endpoint.traffic = {"blue": 80, "green": 20}
+ml_client.online_endpoints.begin_create_or_update(endpoint).result()
+```
+
+You can test traffic allocation by invoking the endpoint several times:
++
+```python
+# You can invoke the endpoint several times
+for i in range(30):
+ ml_client.online_endpoints.invoke(
+ endpoint_name=online_endpoint_name,
+ request_file="./deploy/sample-request.json",
+ )
+```
+
+Show logs from the `green` deployment to check that there were incoming requests and the model was scored successfully.
++
+```python
+logs = ml_client.online_deployments.get_logs(
+ name="green", endpoint_name=online_endpoint_name, lines=50
+)
+print(logs)
+```
+
+## View metrics using Azure Monitor
+You can view various metrics (request numbers, request latency, network bytes, CPU/GPU/Disk/Memory utilization, and more) for an online endpoint and its deployments by following links from the endpoint's **Details** page in the studio. Following these links will take you to the exact metrics page in the Azure portal for the endpoint or deployment.
++
+If you open the metrics for the online endpoint, you can set up the page to see metrics such as the average request latency as shown in the following figure.
++
+For more information on how to view online endpoint metrics, see [Monitor online endpoints](how-to-monitor-online-endpoints.md#metrics).
+
+## Send all traffic to the new deployment
+Once you're fully satisfied with your `green` deployment, switch all traffic to it.
++
+```python
+endpoint.traffic = {"blue": 0, "green": 100}
+ml_client.begin_create_or_update(endpoint).result()
+```
+
+## Delete the old deployment
+Remove the old (blue) deployment:
++
+```python
+ml_client.online_deployments.begin_delete(
+ name="blue", endpoint_name=online_endpoint_name
+).result()
+```
+
+## Clean up resources
+
+If you aren't going to use the endpoint and deployment after completing this tutorial, you should delete them.
+
+> [!NOTE]
+> Expect the complete deletion to take approximately 20 minutes.
++
+```python
+ml_client.online_endpoints.begin_delete(name=online_endpoint_name).result()
+```
+
+<!-- nbend -->
+++
+### Delete everything
+
+Use these steps to delete your Azure Machine Learning workspace and all compute resources.
++
+## Next steps
+
+- [Deploy and score a machine learning model by using an online endpoint](how-to-deploy-online-endpoints.md).
+- [Test the deployment with mirrored traffic (preview)](how-to-safely-rollout-online-endpoints.md#test-the-deployment-with-mirrored-traffic-preview)
+- [Monitor online endpoints](how-to-monitor-online-endpoints.md)
+- [Autoscale an online endpoint](how-to-autoscale-endpoints.md)
+- [Customize MLflow model deployments with scoring script](how-to-deploy-mlflow-models-online-endpoints.md#customizing-mlflow-model-deployments)
+- [View costs for an Azure Machine Learning managed online endpoint](how-to-view-online-endpoints-costs.md)
machine-learning Tutorial Explore Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/tutorial-explore-data.md
+
+ Title: "Tutorial: Upload, access and explore your data"
+
+description: Upload data to cloud storage, create an Azure Machine Learning data asset, create new versions for data assets, use the data for interactive development
+++++++ Last updated : 03/15/2023
+#Customer intent: As a data scientist, I want to know how to prototype and develop machine learning models on a cloud workstation.
++
+# Tutorial: Upload, access and explore your data in Azure Machine Learning
++
+In this tutorial you learn how to:
+
+> [!div class="checklist"]
+> * Upload your data to cloud storage
+> * Create an Azure Machine Learning data asset
+> * Access your data in a notebook for interactive development
+> * Create new versions of data assets
+
+The start of a machine learning project typically involves exploratory data analysis (EDA), data-preprocessing (cleaning, feature engineering), and the building of Machine Learning model prototypes to validate hypotheses. This _prototyping_ project phase is highly interactive. It lends itself to development in an IDE or a Jupyter notebook, with a _Python interactive console_. This tutorial describes these ideas.
+
+## Prerequisites
+
+1. [!INCLUDE [workspace](includes/prereq-workspace.md)]
+
+1. [!INCLUDE [sign in](includes/prereq-sign-in.md)]
+
+1. [!INCLUDE [open or create notebook](includes/prereq-open-or-create.md)]
+ * [!INCLUDE [new notebook](includes/prereq-new-notebook.md)]
+ * Or, open **tutorials/get-started-notebooks/explore-data.ipynb** from the **Samples** section of studio. [!INCLUDE [clone notebook](includes/prereq-clone-notebook.md)]
++
+<!-- nbstart https://raw.githubusercontent.com/Azure/azureml-examples/main/tutorials/get-started-notebooks/explore-data.ipynb -->
++
+## Download the data used in this tutorial
+
+For data ingestion, Azure Data Explorer handles raw data in [these formats](/azure/data-explorer/ingestion-supported-formats). This tutorial uses this [CSV-format credit card client data sample](https://azuremlexamples.blob.core.windows.net/datasets/credit_card/default_of_credit_card_clients.csv). The steps run in an Azure Machine Learning resource. In that resource, we'll create a local folder with the suggested name of **data** directly under the folder where this notebook is located.
+
+> [!NOTE]
+> This tutorial depends on data placed in an Azure Machine Learning resource folder location. For this tutorial, 'local' means a folder location in that Azure Machine Learning resource.
+
+1. Select **Open terminal** below the three dots, as shown in this image:
+
+ :::image type="content" source="media/tutorial-cloud-workstation/open-terminal.png" alt-text="Screenshot shows open terminal tool in notebook toolbar.":::
+
+1. The terminal window opens in a new tab.
+1. Make sure you `cd` to the same folder where this notebook is located. For example, if the notebook is in a folder named **get-started-notebooks**:
+
+ ```
+ cd get-started-notebooks # modify this to the path where your notebook is located
+ ```
+
+1. Enter these commands in the terminal window to copy the data to your compute instance:
+
+ ```
+ mkdir data
+ cd data # the sub-folder where you'll store the data
+ wget https://azuremlexamples.blob.core.windows.net/datasets/credit_card/default_of_credit_card_clients.csv
+ ```
+1. You can now close the terminal window.
++
+[Learn more about this data on the UCI Machine Learning Repository.](https://archive.ics.uci.edu/ml/datasets/default+of+credit+card+clients)
+
+## Create handle to workspace
+
+Before we dive into the code, you need a way to reference your workspace. You'll create `ml_client` for a handle to the workspace. You'll then use `ml_client` to manage resources and jobs.
+
+In the next cell, enter your Subscription ID, Resource Group name and Workspace name. To find these values:
+
+1. In the upper right Azure Machine Learning studio toolbar, select your workspace name.
+1. Copy the value for workspace, resource group and subscription ID into the code.
+1. You'll need to copy one value at a time: close the area, paste the value, then come back for the next one.
++
+```python
+from azure.ai.ml import MLClient
+from azure.identity import DefaultAzureCredential
+from azure.ai.ml.entities import Data
+from azure.ai.ml.constants import AssetTypes
+
+# authenticate
+credential = DefaultAzureCredential()
+
+# Get a handle to the workspace
+ml_client = MLClient(
+ credential=credential,
+ subscription_id="<SUBSCRIPTION_ID>",
+ resource_group_name="<RESOURCE_GROUP>",
+ workspace_name="<AML_WORKSPACE_NAME>",
+)
+```
+
+> [!NOTE]
+> Creating MLClient will not connect to the workspace. The client initialization is lazy and will wait for the first time it needs to make a call (this will happen in the next code cell).
++
+## Upload data to cloud storage
+
+Azure Machine Learning uses Uniform Resource Identifiers (URIs), which point to storage locations in the cloud. A URI makes it easy to access data in notebooks and jobs. Data URI formats look similar to the web URLs that you use in your web browser to access web pages. For example:
+
+* Access data from a public https server: `https://<account_name>.blob.core.windows.net/<container_name>/<folder>/<file>`
+* Access data from Azure Data Lake Gen 2: `abfss://<file_system>@<account_name>.dfs.core.windows.net/<folder>/<file>`
+
+An Azure Machine Learning data asset is similar to web browser bookmarks (favorites). Instead of remembering long storage paths (URIs) that point to your most frequently used data, you can create a data asset, and then access that asset with a friendly name.
+
+Data asset creation also creates a *reference* to the data source location, along with a copy of its metadata. Because the data remains in its existing location, you incur no extra storage cost, and don't risk data source integrity. You can create Data assets from Azure Machine Learning datastores, Azure Storage, public URLs, and local files.
+
+> [!TIP]
+> For smaller-size data uploads, Azure Machine Learning data asset creation works well for data uploads from local machine resources to cloud storage. This approach avoids the need for extra tools or utilities. However, a larger-size data upload might require a dedicated tool or utility - for example, **azcopy**. The azcopy command-line tool moves data to and from Azure Storage. Learn more about [azcopy](../storage/common/storage-use-azcopy-v10.md).
+
+The next notebook cell creates the data asset. The code sample uploads the raw data file to the designated cloud storage resource.
+
+Each time you create a data asset, you need a unique version for it. If the version already exists, you'll get an error. In this code, we're using time to generate a unique version each time the cell is run.
+
+You can also omit the **version** parameter, and a version number is generated for you, starting with 1 and then incrementing from there. In this tutorial, we want to refer to specific version numbers, so we create a version number instead.
++
+```python
+from azure.ai.ml.entities import Data
+from azure.ai.ml.constants import AssetTypes
+import time
+
+# update the 'my_path' variable to match the location of where you downloaded the data on your
+# local filesystem
+
+my_path = "./data/default_of_credit_card_clients.csv"
+# set the version number of the data asset to the current UTC time
+v1 = time.strftime("%Y.%m.%d.%H%M%S", time.gmtime())
+
+my_data = Data(
+ name="credit-card",
+ version=v1,
+ description="Credit card data",
+ path=my_path,
+ type=AssetTypes.URI_FILE,
+)
+
+# create data asset
+ml_client.data.create_or_update(my_data)
+
+print(f"Data asset created. Name: {my_data.name}, version: {my_data.version}")
+```
+
+You can see the uploaded data by selecting **Data** on the left. You'll see the data is uploaded and a data asset is created:
++
+This data is named **credit-card**, and in the **Data assets** tab, we can see it in the **Name** column. This data was uploaded to your workspace's default datastore named **workspaceblobstore**, as seen in the **Data source** column.
+
+An Azure Machine Learning datastore is a *reference* to an *existing* storage account on Azure. A datastore offers these benefits:
+
+1. A common and easy-to-use API, to interact with different storage types (Blob/Files/Azure Data Lake Storage) and authentication methods.
+1. An easier way to discover useful datastores, when working as a team.
+1. In your scripts, a way to hide connection information for credential-based data access (service principal/SAS/key).
++
+## Access your data in a notebook
+
+Pandas directly supports URIs - this example shows how to read a CSV file from an Azure Machine Learning datastore:
+
+```
+import pandas as pd
+
+df = pd.read_csv("azureml://subscriptions/<subid>/resourcegroups/<rgname>/workspaces/<workspace_name>/datastores/<datastore_name>/paths/<folder>/<filename>.csv")
+```
+
+However, as mentioned previously, it can become hard to remember these URIs. Additionally, you must manually substitute all **<_substring_>** values in the **pd.read_csv** command with the real values for your resources.
+
+You'll want to create data assets for frequently accessed data. Here's an easier way to access the CSV file in Pandas:
+
+> [!IMPORTANT]
+> In a notebook cell, execute this code to install the `azureml-fsspec` Python library in your Jupyter kernel:
++
+```python
+%pip install -U azureml-fsspec
+```
++
+```python
+import pandas as pd
+
+# get a handle of the data asset and print the URI
+data_asset = ml_client.data.get(name="credit-card", version=v1)
+print(f"Data asset URI: {data_asset.path}")
+
+# read into pandas - note that you will see 2 headers in your data frame - that is ok, for now
+
+df = pd.read_csv(data_asset.path)
+df.head()
+```
+
+Read [Access data from Azure cloud storage during interactive development](how-to-access-data-interactive.md) to learn more about data access in a notebook.
+
+## Create a new version of the data asset
+
+You might have noticed that the data needs a little light cleaning, to make it fit to train a machine learning model. It has:
+
+* two headers
+* a client ID column; we wouldn't use this feature in Machine Learning
+* spaces in the response variable name
+
+Also, compared to the CSV format, the Parquet file format is a better way to store this data. Parquet offers compression, and it maintains schema. Therefore, to clean the data and store it in Parquet, use:
++
+```python
+# read in data again, this time using the 2nd row as the header
+df = pd.read_csv(data_asset.path, header=1)
+# rename column
+df.rename(columns={"default payment next month": "default"}, inplace=True)
+# remove ID column
+df.drop("ID", axis=1, inplace=True)
+
+# write file to filesystem
+df.to_parquet("./data/cleaned-credit-card.parquet")
+```
+
+This table shows the structure of the data in the original **default_of_credit_card_clients.csv** file downloaded in an earlier step. The uploaded data contains 23 explanatory variables and 1 response variable, as shown here:
+
+|Column Name(s) | Variable Type |Description |
+||||
+|X1 | Explanatory | Amount of the given credit (NT dollar): it includes both the individual consumer credit and their family (supplementary) credit. |
+|X2 | Explanatory | Gender (1 = male; 2 = female). |
+|X3 | Explanatory | Education (1 = graduate school; 2 = university; 3 = high school; 4 = others). |
+|X4 | Explanatory | Marital status (1 = married; 2 = single; 3 = others). |
+|X5 | Explanatory | Age (years). |
+|X6-X11 | Explanatory | History of past payment. We tracked the past monthly payment records (from April to September 2005). -1 = pay duly; 1 = payment delay for one month; 2 = payment delay for two months; . . .; 8 = payment delay for eight months; 9 = payment delay for nine months and above. |
+|X12-17 | Explanatory | Amount of bill statement (NT dollar) from April to September 2005. |
+|X18-23 | Explanatory | Amount of previous payment (NT dollar) from April to September 2005. |
+|Y | Response | Default payment (Yes = 1, No = 0) |
+
+Next, create a new _version_ of the data asset (the data automatically uploads to cloud storage):
+
+> [!NOTE]
+>
+> This Python code cell sets **name** and **version** values for the data asset it creates. As a result, the code in this cell will fail if executed more than once, without a change to these values. Fixed **name** and **version** values offer a way to pass values that work for specific situations, without concern for auto-generated or randomly-generated values.
+++
+```python
+from azure.ai.ml.entities import Data
+from azure.ai.ml.constants import AssetTypes
+import time
+
+# Next, create a new *version* of the data asset (the data is automatically uploaded to cloud storage):
+v2 = v1 + "_cleaned"
+my_path = "./data/cleaned-credit-card.parquet"
+
+# Define the data asset, and use tags to make it clear the asset can be used in training
+
+my_data = Data(
+ name="credit-card",
+ version=v2,
+ description="Default of credit card clients data.",
+ tags={"training_data": "true", "format": "parquet"},
+ path=my_path,
+ type=AssetTypes.URI_FILE,
+)
+
+## create the data asset
+
+my_data = ml_client.data.create_or_update(my_data)
+
+print(f"Data asset created. Name: {my_data.name}, version: {my_data.version}")
+```
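+
+If you need to run the registration cell more than once, one option (a sketch, not part of the original notebook) is to derive the version from a timestamp so each run registers a distinct version; `v2_unique` and `my_data_unique` are illustrative names:
+
+```python
+from azure.ai.ml.entities import Data
+from azure.ai.ml.constants import AssetTypes
+import time
+
+# Hypothetical alternative: derive the version from a timestamp so this cell
+# can be re-executed without hitting a name/version conflict.
+v2_unique = "cleaned_" + time.strftime("%Y.%m.%d.%H%M%S", time.gmtime())
+
+my_data_unique = Data(
+    name="credit-card",
+    version=v2_unique,
+    description="Default of credit card clients data (cleaned).",
+    tags={"training_data": "true", "format": "parquet"},
+    path="./data/cleaned-credit-card.parquet",
+    type=AssetTypes.URI_FILE,
+)
+
+# Uncomment to register this additional version:
+# ml_client.data.create_or_update(my_data_unique)
+```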
+
+The cleaned Parquet file is now the latest version of the data asset. This code shows the CSV version result set first, then the Parquet version:
++
+```python
+import pandas as pd
+
+# get a handle of the data asset and print the URI
+data_asset_v1 = ml_client.data.get(name="credit-card", version=v1)
+data_asset_v2 = ml_client.data.get(name="credit-card", version=v2)
+
+# print the v1 data
+print(f"V1 Data asset URI: {data_asset_v1.path}")
+v1df = pd.read_csv(data_asset_v1.path)
+print(v1df.head(5))
+
+# print the v2 data
+print(
+ "_____________________________________________________________________________________________________________\n"
+)
+print(f"V2 Data asset URI: {data_asset_v2.path}")
+v2df = pd.read_parquet(data_asset_v2.path)
+print(v2df.head(5))
+```
+
+<!-- nbend -->
++++
+## Clean up resources
+
+If you plan to continue now to other tutorials, skip to [Next steps](#next-steps).
+
+### Stop compute instance
+
+If you're not going to use it now, stop the compute instance:
+
+1. In the studio, in the left navigation area, select **Compute**.
+1. In the top tabs, select **Compute instances**
+1. Select the compute instance in the list.
+1. On the top toolbar, select **Stop**.
+
+### Delete all resources
++
+## Next steps
+
+Read [Create data assets](how-to-create-data-assets.md) for more information about data assets.
+
+Read [Create datastores](how-to-datastore.md) to learn more about datastores.
+
+Continue with tutorials to learn how to develop a training script.
+
+> [!div class="nextstepaction"]
+> [Model development on a cloud workstation](tutorial-cloud-workstation.md)
+>
machine-learning Tutorial Pipeline Python Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/tutorial-pipeline-python-sdk.md
Title: "Tutorial: ML pipelines with Python SDK v2"
-description: Use Azure Machine Learning to create your production-ready ML project in a cloud-based Python Jupyter Notebook using Azure Machine Learning Python SDK v2.
+description: Use Azure Machine Learning to create your production-ready ML project in a cloud-based Python Jupyter Notebook using Azure Machine Learning Python SDK v2.
- Previously updated : 11/21/2022
+ Last updated : 03/15/2023
 #Customer intent: This tutorial is intended to introduce Azure Machine Learning to data scientists who want to scale up or publish their ML projects. By completing a familiar end-to-end project, which starts by loading the data and ends by creating and calling an online inference endpoint, the user should become familiar with the core concepts of Azure Machine Learning and their most common usage. Each step of this tutorial can be modified or performed in other ways that might have security or scalability advantages. We will cover some of those in the Part II of this tutorial, however, we suggest the reader use the provide links in each section to learn more on each topic.
-# Tutorial: Create production ML pipelines with Python SDK v2 in a Jupyter notebook
+# Tutorial: Create production machine learning pipelines
[!INCLUDE [sdk v2](../../includes/machine-learning-sdk-v2.md)]

> [!NOTE]
> For a tutorial that uses SDK v1 to build a pipeline, see [Tutorial: Build an Azure Machine Learning pipeline for image classification](v1/tutorial-pipeline-python-sdk.md)
->
-In this tutorial, you'll use Azure Machine Learning to create a production ready machine learning (ML) project, using Azure Machine Learning Python SDK v2.
+The core of a machine learning pipeline is to split a complete machine learning task into a multistep workflow. Each step is a manageable component that can be developed, optimized, configured, and automated individually. Steps are connected through well-defined interfaces. The Azure Machine Learning pipeline service automatically orchestrates all the dependencies between pipeline steps. The benefits of using a pipeline include a standardized MLOps practice, scalable team collaboration, training efficiency, and cost reduction. To learn more about the benefits of pipelines, see [What are Azure Machine Learning pipelines](concept-ml-pipelines.md).
-You'll learn how to use the Azure Machine Learning Python SDK v2 to:
+In this tutorial, you use Azure Machine Learning to create a production-ready machine learning project, using the Azure Machine Learning Python SDK v2.
+
+You use the Azure Machine Learning Python SDK v2 to:
> [!div class="checklist"]
->
-> * Connect to your Azure Machine Learning workspace
-> * Create Azure Machine Learning data assets
-> * Create reusable Azure Machine Learning components
-> * Create, validate and run Azure Machine Learning pipelines
-> * Deploy the newly-trained model as an endpoint
-> * Call the Azure Machine Learning endpoint for inferencing
+> - Get a handle to your Azure Machine Learning workspace
+> - Create Azure Machine Learning data assets
+> - Create reusable Azure Machine Learning components
+> - Create, validate and run Azure Machine Learning pipelines
-## Prerequisites
+During this tutorial, you create an Azure Machine Learning pipeline to train a model for credit default prediction. The pipeline handles two steps:
-* Complete the [Quickstart: Get started with Azure Machine Learning](quickstart-create-resources.md) to:
- * Create a workspace.
- * Create a cloud-based compute instance to use for your development environment.
- * Create a cloud-based compute cluster to use for training your model.
-* Complete the [Quickstart: Run Jupyter notebooks in studio](quickstart-run-notebooks.md) to clone the **SDK v2/tutorials** folder.
+1. Data preparation
+1. Training and registering the trained model
+The next image shows a simple pipeline as you'll see it in Azure Machine Learning studio once submitted.
-## Open the notebook
+The first step is data preparation and the second step is training.
-1. Open the **tutorials** folder that was cloned into your **Files** section from the [Quickstart: Run Jupyter notebooks in studio](quickstart-run-notebooks.md).
-
-1. Select the **e2e-ml-workflow.ipynb** file from your **tutorials/azureml-examples/tutorials/e2e-ds-experience/** folder.
- :::image type="content" source="media/tutorial-pipeline-python-sdk/expand-folder.png" alt-text="Screenshot shows the open tutorials folder.":::
+## Prerequisites
-1. On the top bar, select the compute instance you created during the [Quickstart: Get started with Azure Machine Learning](quickstart-create-resources.md) to use for running the notebook.
-> [!Important]
-> The rest of this article contains the same content as you see in the notebook.
->
-> Switch to the Jupyter Notebook now if you want to run the code while you read along.
-> To run a single code cell in a notebook, click the code cell and hit **Shift+Enter**. Or, run the entire notebook by choosing **Run all** from the top toolbar
+1. [!INCLUDE [workspace](includes/prereq-workspace.md)]
-## Introduction
+1. [!INCLUDE [sign in](includes/prereq-sign-in.md)]
-In this tutorial, you'll create an Azure Machine Learning pipeline to train a model for credit default prediction. The pipeline handles the data preparation, training and registering the trained model. You'll then run the pipeline, deploy the model and use it.
+1. [!INCLUDE [open or create notebook](includes/prereq-open-or-create.md)]
+ * [!INCLUDE [new notebook](includes/prereq-new-notebook.md)]
+ * Or, open **tutorials/get-started-notebooks/pipeline.ipynb** from the **Samples** section of studio. [!INCLUDE [clone notebook](includes/prereq-clone-notebook.md)]
-The image below shows the pipeline as you'll see it in the Azure Machine Learning portal once submitted. It's a rather simple pipeline we'll use to walk you through the Azure Machine Learning SDK v2.
-The two steps are first data preparation and second training.
+<!-- nbstart https://raw.githubusercontent.com/Azure/azureml-examples/main/tutorials/get-started-notebooks/pipeline.ipynb -->
## Set up the pipeline resources
-The Azure Machine Learning framework can be used from CLI, Python SDK, or studio interface. In this example, you'll use the Azure Machine Learning Python SDK v2 to create a pipeline.
+The Azure Machine Learning framework can be used from CLI, Python SDK, or studio interface. In this example, you use the Azure Machine Learning Python SDK v2 to create a pipeline.
-Before creating the pipeline, you'll set up the resources the pipeline will use:
+Before creating the pipeline, you need the following resources:
* The data asset for training
* The software environment to run the pipeline
-* A compute resource to where the job will run
+* A compute resource where the job runs
-## Connect to the workspace
+## Create handle to workspace
-Before we dive in the code, you'll need to connect to your Azure Machine Learning workspace. The workspace is the top-level resource for Azure Machine Learning, providing a centralized place to work with all the artifacts you create when you use Azure Machine Learning.
+Before we dive into the code, you need a way to reference your workspace. You'll create `ml_client` as a handle to the workspace, and then use `ml_client` to manage resources and jobs.
-[!Notebook-python[] (~/azureml-examples-main/tutorials/e2e-ds-experience/e2e-ml-workflow.ipynb?name=import-mlclient)]
+In the next cell, enter your Subscription ID, Resource Group name and Workspace name. To find these values:
-In the next cell, enter your Subscription ID, Resource Group name and Workspace name. To find your Subscription ID:
1. In the upper right Azure Machine Learning studio toolbar, select your workspace name.
-1. You'll see the values you need for **<SUBSCRIPTION_ID>**, **<RESOURCE_GROUP>**, and **<AML_WORKSPACE_NAME>**.
-1. Copy a value, then close the window and paste that into your code. Open the tool again to get the next value.
+1. Copy the values for workspace name, resource group, and subscription ID into the code.
+1. You'll need to copy one value, close the area and paste it, then come back for the next one.
-[!Notebook-python[] (~/azureml-examples-main/tutorials/e2e-ds-experience/e2e-ml-workflow.ipynb?name=ml_client)]
-The result is a handler to the workspace that you'll use to manage other resources and jobs.
+```python
+from azure.ai.ml import MLClient
+from azure.identity import DefaultAzureCredential
-> [!IMPORTANT]
-> Creating MLClient will not connect to the workspace. The client initialization is lazy, it will wait for the first time it needs to make a call (in the notebook below, that will happen during dataset registration).
+# authenticate
+credential = DefaultAzureCredential()
+# Get a handle to the workspace
+ml_client = MLClient(
+ credential=credential,
+ subscription_id="<SUBSCRIPTION_ID>",
+ resource_group_name="<RESOURCE_GROUP>",
+ workspace_name="<AML_WORKSPACE_NAME>",
+)
+```
+
+> [!NOTE]
+> Creating MLClient won't connect to the workspace. The client initialization is lazy; it waits until the first time it needs to make a call (this happens when creating the `credit_data` data asset, two code cells from here).
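+
+If you prefer to verify the handle right away rather than waiting for that first call, a small optional check (not part of the original notebook, using the workspace operations on the client) looks like this:
+
+```python
+# Optional check: force an early call so any authentication or
+# workspace-name issues surface immediately.
+ws = ml_client.workspaces.get(ml_client.workspace_name)
+print(f"Connected to workspace: {ws.name} in {ws.location}")
+```
+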
## Register data from an external url
-The data you use for training is usually in one of the locations below:
+If you have been following along with the other tutorials in this series and already registered the data, you can fetch the same dataset from the workspace using `credit_dataset = ml_client.data.get("<DATA ASSET NAME>", version='<VERSION>')` and skip this section. To learn more about data, or if you would rather complete the data tutorial first, see [Upload, access and explore your data in Azure Machine Learning](tutorial-explore-data.md).
+
+* Azure Machine Learning uses a `Data` object to register a reusable definition of data, and consume data within a pipeline. In the next section, you consume some data from a web URL as one example. `Data` assets from other sources can be created as well.
++
-* Local machine
-* Web
-* Big Data Storage services (for example, Azure Blob, Azure Data Lake Storage, SQL)
-
-Azure Machine Learning uses a `Data` object to register a reusable definition of data, and consume data within a pipeline. In the section below, you'll consume some data from web url as one example. Data from other sources can be created as well. `Data` assets from other sources can be created as well.
-[!Notebook-python[] (~/azureml-examples-main/tutorials/e2e-ds-experience/e2e-ml-workflow.ipynb?name=credit_data)]
+```python
+from azure.ai.ml.entities import Data
+from azure.ai.ml.constants import AssetTypes
+
+web_path = "https://archive.ics.uci.edu/ml/machine-learning-databases/00350/default%20of%20credit%20card%20clients.xls"
+
+credit_data = Data(
+ name="creditcard_defaults",
+ path=web_path,
+ type=AssetTypes.URI_FILE,
+ description="Dataset for credit card defaults",
+ tags={"source_type": "web", "source": "UCI ML Repo"},
+ version="1.0.0",
+)
+```
This code just created a `Data` asset, ready to be consumed as an input by the pipeline that you'll define in the next sections. In addition, you can register the data to your workspace so it becomes reusable across pipelines.
-Registering the data asset will enable you to:
+Since this is the first time that you're making a call to the workspace, you may be asked to authenticate. Once the authentication is complete, you then see the dataset registration completion message.
+
-* Reuse and share the data asset in future pipelines
-* Use versions to track the modification to the data asset
-* Use the data asset from Azure Machine Learning designer, which is Azure Machine Learning's GUI for pipeline authoring
-Since this is the first time that you're making a call to the workspace, you may be asked to authenticate. Once the authentication is complete, you'll then see the dataset registration completion message.
-[!Notebook-python[] (~/azureml-examples-main/tutorials/e2e-ds-experience/e2e-ml-workflow.ipynb?name=update-credit_data)]
+```python
+credit_data = ml_client.data.create_or_update(credit_data)
+print(
+ f"Dataset with name {credit_data.name} was registered to workspace, the dataset version is {credit_data.version}"
+)
+```
In the future, you can fetch the same dataset from the workspace using `credit_dataset = ml_client.data.get("<DATA ASSET NAME>", version='<VERSION>')`.
In the future, you can fetch the same dataset from the workspace using `credit_d
Each step of an Azure Machine Learning pipeline can use a different compute resource for running the specific job of that step. It can be single or multi-node machines with Linux or Windows OS, or a specific compute fabric like Spark.
-In this section, you'll provision a Linux [compute cluster](how-to-create-attach-compute-cluster.md?tabs=python). See the [full list on VM sizes and prices](https://azure.microsoft.com/pricing/details/machine-learning/) .
-
-For this tutorial you only need a basic cluster, so we'll use a Standard_DS3_v2 model with 2 vCPU cores, 7 GB RAM and create an Azure Machine Learning Compute.
+In this section, you provision a Linux [compute cluster](how-to-create-attach-compute-cluster.md?tabs=python). See the [full list on VM sizes and prices](https://azure.microsoft.com/pricing/details/machine-learning/).
+For this tutorial, you only need a basic cluster, so use a Standard_DS3_v2 model with 2 vCPU cores and 7 GB of RAM to create an Azure Machine Learning compute.
> [!TIP]
-> If you already have a compute cluster, replace "cpu-cluster" in the code below with the name of your cluster. This will keep you from creating another one.
-
-[!Notebook-python[] (~/azureml-examples-main/tutorials/e2e-ds-experience/e2e-ml-workflow.ipynb?name=cpu_cluster)]
+> If you already have a compute cluster, replace "cpu-cluster" in the next code block with the name of your cluster. This will keep you from creating another one.
+++
+```python
+from azure.ai.ml.entities import AmlCompute
+
+# Name assigned to the compute cluster
+cpu_compute_target = "cpu-cluster"
+
+try:
+ # let's see if the compute target already exists
+ cpu_cluster = ml_client.compute.get(cpu_compute_target)
+ print(
+ f"You already have a cluster named {cpu_compute_target}, we'll reuse it as is."
+ )
+
+except Exception:
+ print("Creating a new cpu compute target...")
+
+ # Let's create the Azure Machine Learning compute object with the intended parameters
+ cpu_cluster = AmlCompute(
+ name=cpu_compute_target,
+ # Azure Machine Learning Compute is the on-demand VM service
+ type="amlcompute",
+ # VM Family
+ size="STANDARD_DS3_V2",
+ # Minimum running nodes when there is no job running
+ min_instances=0,
+ # Nodes in cluster
+ max_instances=4,
+    # How many seconds the node will run after the job terminates
+ idle_time_before_scale_down=180,
+ # Dedicated or LowPriority. The latter is cheaper but there is a chance of job termination
+ tier="Dedicated",
+ )
+ print(
+ f"AMLCompute with name {cpu_cluster.name} will be created, with compute size {cpu_cluster.size}"
+ )
+ # Now, we pass the object to MLClient's create_or_update method
+ cpu_cluster = ml_client.compute.begin_create_or_update(cpu_cluster)
+```
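+
+Note that `begin_create_or_update` returns a poller rather than the finished cluster. If you want the notebook to wait until provisioning completes, a small optional follow-up (a sketch, not part of the original notebook) is:
+
+```python
+# Optional: wait for provisioning to finish. When a new cluster is created,
+# begin_create_or_update returns a long-running operation poller; calling
+# .result() blocks until the cluster is ready. An existing cluster returned
+# by ml_client.compute.get doesn't need this step.
+if hasattr(cpu_cluster, "result"):
+    cpu_cluster = cpu_cluster.result()
+print(f"Compute target '{cpu_cluster.name}' is ready to use.")
+```
+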
## Create a job environment for pipeline steps
-So far, you've created a development environment on the compute instance, your development machine. You'll also need an environment to use for each step of the pipeline. Each step can have its own environment, or you can use some common environments for multiple steps.
+So far, you've created a development environment on the compute instance, your development machine. You also need an environment to use for each step of the pipeline. Each step can have its own environment, or you can use some common environments for multiple steps.
-In this example, you'll create a conda environment for your jobs, using a conda yaml file.
+In this example, you create a conda environment for your jobs, using a conda yaml file.
First, create a directory to store the file in.
-[!Notebook-python[] (~/azureml-examples-main/tutorials/e2e-ds-experience/e2e-ml-workflow.ipynb?name=dependencies_dir)]
-Now, create the file in the dependencies directory.
+```python
+import os
-[!Notebook-python[] (~/azureml-examples-main/tutorials/e2e-ds-experience/e2e-ml-workflow.ipynb?name=conda.yaml)]
+dependencies_dir = "./dependencies"
+os.makedirs(dependencies_dir, exist_ok=True)
+```
+
+Now, create the file in the dependencies directory.
-The specification contains some usual packages, that you'll use in your pipeline (numpy, pip), together with some Azure Machine Learning specific packages (azureml-defaults, azureml-mlflow).
-The Azure Machine Learning packages aren't mandatory to run Azure Machine Learning jobs. However, adding these packages will let you interact with Azure Machine Learning for logging metrics and registering models, all inside the Azure Machine Learning job. You'll use them in the training script later in this tutorial.
+```python
+%%writefile {dependencies_dir}/conda.yml
+name: model-env
+channels:
+ - conda-forge
+dependencies:
+ - python=3.8
+ - numpy=1.21.2
+ - pip=21.2.4
+ - scikit-learn=0.24.2
+ - scipy=1.7.1
+ - pandas>=1.1,<1.2
+ - pip:
+ - inference-schema[numpy-support]==1.3.0
+ - xlrd==2.0.1
+ - mlflow== 1.26.1
+ - azureml-mlflow==1.42.0
+```
+
+The specification contains some usual packages that you use in your pipeline (numpy, pip), together with some Azure Machine Learning-specific packages (azureml-mlflow).
+
+The Azure Machine Learning packages aren't mandatory to run Azure Machine Learning jobs. However, adding these packages lets you interact with Azure Machine Learning for logging metrics and registering models, all inside the Azure Machine Learning job. You use them in the training script later in this tutorial.
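+
+As a quick illustration of what these packages enable (a sketch with made-up values; the real logging calls appear inside the pipeline steps later in this tutorial, and this assumes `mlflow` is installed in your kernel):
+
+```python
+import mlflow
+
+# Illustrative values only: log one parameter and one metric against a run.
+mlflow.start_run()
+mlflow.log_param("learning_rate", 0.1)
+mlflow.log_metric("accuracy", 0.83)
+mlflow.end_run()
+```
+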
Use the *yaml* file to create and register this custom environment in your workspace:
-[!Notebook-python[] (~/azureml-examples-main/tutorials/e2e-ds-experience/e2e-ml-workflow.ipynb?name=custom_env_name)]
++
+```python
+from azure.ai.ml.entities import Environment
+
+custom_env_name = "aml-scikit-learn"
+
+pipeline_job_env = Environment(
+ name=custom_env_name,
+ description="Custom environment for Credit Card Defaults pipeline",
+ tags={"scikit-learn": "0.24.2"},
+ conda_file=os.path.join(dependencies_dir, "conda.yml"),
+ image="mcr.microsoft.com/azureml/openmpi4.1.0-ubuntu20.04:latest",
+ version="0.1.0",
+)
+pipeline_job_env = ml_client.environments.create_or_update(pipeline_job_env)
+
+print(
+ f"Environment with name {pipeline_job_env.name} is registered to workspace, the environment version is {pipeline_job_env.version}"
+)
+```
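+
+If you want to confirm the registration, you can fetch the environment back by name and version (an optional check, reusing the variables defined above):
+
+```python
+# Sanity check: fetch the environment you just registered by name and version.
+env_check = ml_client.environments.get(
+    name=custom_env_name, version=pipeline_job_env.version
+)
+print(f"Found environment {env_check.name}, version {env_check.version}")
+```
+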
## Build the training pipeline
-Now that you have all assets required to run your pipeline, it's time to build the pipeline itself, using the Azure Machine Learning Python SDK v2.
+Now that you have all assets required to run your pipeline, it's time to build the pipeline itself.
Azure Machine Learning pipelines are reusable ML workflows that usually consist of several components. The typical life of a component is:
-* Write the yaml specification of the component, or create it programmatically using `ComponentMethod`.
-* Optionally, register the component with a name and version in your workspace, to make it reusable and shareable.
-* Load that component from the pipeline code.
-* Implement the pipeline using the component's inputs, outputs and parameters
-* Submit the pipeline.
+- Write the yaml specification of the component, or create it programmatically using `ComponentMethod`.
+- Optionally, register the component with a name and version in your workspace, to make it reusable and shareable.
+- Load that component from the pipeline code.
+- Implement the pipeline using the component's inputs, outputs and parameters.
+- Submit the pipeline.
+
+There are two ways to create a component: programmatically and with a yaml definition. The next two sections walk you through creating a component both ways. You can create the two components by trying both options, or pick your preferred method.
-## Create component 1: data prep (using programmatic definition)
+> [!NOTE]
+> In this tutorial, for simplicity, we use the same compute for all components. However, you can set a different compute for each component, for example by adding a line like `train_step.compute = "cpu-cluster"`. To view an example of building a pipeline with a different compute for each component, see the [Basic pipeline job section in the cifar-10 pipeline tutorial](https://github.com/Azure/azureml-examples/blob/main/sdk/python/jobs/pipelines/2b_train_cifar_10_with_pytorch/train_cifar_10_with_pytorch.ipynb).
+
+### Create component 1: data prep (using programmatic definition)
Let's start by creating the first component. This component handles the preprocessing of the data. The preprocessing task is performed in the *data_prep.py* Python file. First create a source folder for the data_prep component:
-[!Notebook-python[] (~/azureml-examples-main/tutorials/e2e-ds-experience/e2e-ml-workflow.ipynb?name=data_prep_src_dir)]
-This script performs the simple task of splitting the data into train and test datasets.
-Azure Machine Learning mounts datasets as folders to the computes, therefore, we created an auxiliary `select_first_file` function to access the data file inside the mounted input folder.
+```python
+import os
+
+data_prep_src_dir = "./components/data_prep"
+os.makedirs(data_prep_src_dir, exist_ok=True)
+```
+
+This script performs the simple task of splitting the data into train and test datasets. Azure Machine Learning mounts datasets as folders on the compute targets, so we created an auxiliary `select_first_file` function to access the data file inside the mounted input folder.
+
+[MLFlow](concept-mlflow.md) is used to log the parameters and metrics during our pipeline run.
++
+```python
+%%writefile {data_prep_src_dir}/data_prep.py
+import os
+import argparse
+import pandas as pd
+from sklearn.model_selection import train_test_split
+import logging
+import mlflow
++
+def main():
+ """Main function of the script."""
+
+ # input and output arguments
+ parser = argparse.ArgumentParser()
+ parser.add_argument("--data", type=str, help="path to input data")
+ parser.add_argument("--test_train_ratio", type=float, required=False, default=0.25)
+ parser.add_argument("--train_data", type=str, help="path to train data")
+ parser.add_argument("--test_data", type=str, help="path to test data")
+ args = parser.parse_args()
+
+ # Start Logging
+ mlflow.start_run()
+
+ print(" ".join(f"{k}={v}" for k, v in vars(args).items()))
+
+ print("input data:", args.data)
+
+ credit_df = pd.read_excel(args.data, header=1, index_col=0)
+
+ mlflow.log_metric("num_samples", credit_df.shape[0])
+ mlflow.log_metric("num_features", credit_df.shape[1] - 1)
+
+ credit_train_df, credit_test_df = train_test_split(
+ credit_df,
+ test_size=args.test_train_ratio,
+ )
+
+ # output paths are mounted as folder, therefore, we are adding a filename to the path
+ credit_train_df.to_csv(os.path.join(args.train_data, "data.csv"), index=False)
+
+ credit_test_df.to_csv(os.path.join(args.test_data, "data.csv"), index=False)
-[MLFlow](https://mlflow.org/docs/latest/tracking.html) will be used to log the parameters and metrics during our pipeline run.
+ # Stop Logging
+ mlflow.end_run()
-[!Notebook-python[] (~/azureml-examples-main/tutorials/e2e-ds-experience/e2e-ml-workflow.ipynb?name=def-main)]
-Now that you have a script that can perform the desired task, create an Azure Machine Learning Component from it.
+if __name__ == "__main__":
+ main()
+```
-You'll use the general purpose **CommandComponent** that can run command line actions. This command line action can directly call system commands or run a script. The inputs/outputs are specified on the command line via the `${{ ... }}` notation.
+Now that you have a script that can perform the desired task, create an Azure Machine Learning Component from it.
-[!Notebook-python[] (~/azureml-examples-main/tutorials/e2e-ds-experience/e2e-ml-workflow.ipynb?name=data_prep_component)]
+Use the general purpose `CommandComponent` that can run command line actions. This command line action can directly call system commands or run a script. The inputs/outputs are specified on the command line via the `${{ ... }}` notation.
-Optionally, register the component in the workspace for future re-use.
-## Create component 2: training (using yaml definition)
+```python
+from azure.ai.ml import command
+from azure.ai.ml import Input, Output
-The second component that you'll create will consume the training and test data, train a tree based model and return the output model. You'll use Azure Machine Learning logging capabilities to record and visualize the learning progress.
+data_prep_component = command(
+ name="data_prep_credit_defaults",
+ display_name="Data preparation for training",
+    description="reads a .xls input, splits the input into train and test",
+ inputs={
+ "data": Input(type="uri_folder"),
+ "test_train_ratio": Input(type="number"),
+ },
+ outputs=dict(
+ train_data=Output(type="uri_folder", mode="rw_mount"),
+ test_data=Output(type="uri_folder", mode="rw_mount"),
+ ),
+ # The source folder of the component
+ code=data_prep_src_dir,
+ command="""python data_prep.py \
+ --data ${{inputs.data}} --test_train_ratio ${{inputs.test_train_ratio}} \
+ --train_data ${{outputs.train_data}} --test_data ${{outputs.test_data}} \
+ """,
+ environment=f"{pipeline_job_env.name}:{pipeline_job_env.version}",
+)
+```
-You used the `CommandComponent` class to create your first component. This time you'll use the yaml definition to define the second component. Each method has its own advantages. A yaml definition can actually be checked-in along the code, and would provide a readable history tracking. The programmatic method using `CommandComponent` can be easier with built-in class documentation and code completion.
+Optionally, register the component in the workspace for future reuse.
+
+```python
+# Now we register the component to the workspace
+data_prep_component = ml_client.create_or_update(data_prep_component.component)
+
+# Create (register) the component in your workspace
+print(
+ f"Component {data_prep_component.name} with Version {data_prep_component.version} is registered"
+)
+```
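+
+Once registered, the component can be loaded back from the workspace by name in this or any other pipeline (an optional check; `registered_data_prep` is an illustrative variable name):
+
+```python
+# Optional: load the registered component back from the workspace by name.
+registered_data_prep = ml_client.components.get(
+    name=data_prep_component.name, version=data_prep_component.version
+)
+print(
+    f"Loaded component {registered_data_prep.name}, version {registered_data_prep.version}"
+)
+```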
+
+### Create component 2: training (using yaml definition)
+
+The second component that you create consumes the training and test data, trains a tree-based model, and returns the output model. Use Azure Machine Learning logging capabilities to record and visualize the learning progress.
+
+You used the `CommandComponent` class to create your first component. This time you use a yaml definition to define the second component. Each method has its own advantages: a yaml definition can be checked in alongside the code and provides readable history tracking, while the programmatic method can be easier thanks to built-in class documentation and code completion.
+ Create the directory for this component:
-[!Notebook-python[] (~/azureml-examples-main/tutorials/e2e-ds-experience/e2e-ml-workflow.ipynb?name=train_src_dir)]
+
+```python
+import os
+
+train_src_dir = "./components/train"
+os.makedirs(train_src_dir, exist_ok=True)
+```
Create the training script in the directory:
-[!Notebook-python[] (~/azureml-examples-main/tutorials/e2e-ds-experience/e2e-ml-workflow.ipynb?name=train.py)]
-As you can see in this training script, once the model is trained, the model file is saved and registered to the workspace. Now you can use the registered model in inferencing endpoints.
+```python
+%%writefile {train_src_dir}/train.py
+import argparse
+from sklearn.ensemble import GradientBoostingClassifier
+from sklearn.metrics import classification_report
+import os
+import pandas as pd
+import mlflow
-For the environment of this step, you'll use one of the built-in (curated) Azure Machine Learning environments. The tag `azureml`, tells the system to use look for the name in curated environments.
-First, create the *yaml* file describing the component:
+def select_first_file(path):
+ """Selects first file in folder, use under assumption there is only one file in folder
+ Args:
+ path (str): path to directory or file to choose
+ Returns:
+ str: full path of selected file
+ """
+ files = os.listdir(path)
+ return os.path.join(path, files[0])
-[!Notebook-python[] (~/azureml-examples-main/tutorials/e2e-ds-experience/e2e-ml-workflow.ipynb?name=train.yml)]
-Now create and register the component:
+# Start Logging
+mlflow.start_run()
-[!Notebook-python[] (~/azureml-examples-main/tutorials/e2e-ds-experience/e2e-ml-workflow.ipynb?name=train_component)]
+# enable autologging
+mlflow.sklearn.autolog()
-[!Notebook-python[] (~/azureml-examples-main/tutorials/e2e-ds-experience/e2e-ml-workflow.ipynb?name=update-train_component)]
+os.makedirs("./outputs", exist_ok=True)
-## Create the pipeline from components
-Now that both your components are defined and registered, you can start implementing the pipeline.
+def main():
+ """Main function of the script."""
-Here, you'll use *input data*, *split ratio* and *registered model name* as input variables. Then call the components and connect them via their inputs/outputs identifiers. The outputs of each step can be accessed via the `.outputs` property.
+ # input and output arguments
+ parser = argparse.ArgumentParser()
+ parser.add_argument("--train_data", type=str, help="path to train data")
+ parser.add_argument("--test_data", type=str, help="path to test data")
+ parser.add_argument("--n_estimators", required=False, default=100, type=int)
+ parser.add_argument("--learning_rate", required=False, default=0.1, type=float)
+ parser.add_argument("--registered_model_name", type=str, help="model name")
+ parser.add_argument("--model", type=str, help="path to model file")
+ args = parser.parse_args()
-The Python functions returned by `load_component()` work as any regular Python function that we'll use within a pipeline to call each step.
+ # paths are mounted as folder, therefore, we are selecting the file from folder
+ train_df = pd.read_csv(select_first_file(args.train_data))
-To code the pipeline, you use a specific `@dsl.pipeline` decorator that identifies the Azure Machine Learning pipelines. In the decorator, we can specify the pipeline description and default resources like compute and storage. Like a Python function, pipelines can have inputs. You can then create multiple instances of a single pipeline with different inputs.
+ # Extracting the label column
+ y_train = train_df.pop("default payment next month")
-Here, we used *input data*, *split ratio* and *registered model name* as input variables. We then call the components and connect them via their inputs/outputs identifiers. The outputs of each step can be accessed via the `.outputs` property.
+ # convert the dataframe values to array
+ X_train = train_df.values
-[!Notebook-python[] (~/azureml-examples-main/tutorials/e2e-ds-experience/e2e-ml-workflow.ipynb?name=pipeline)]
+ # paths are mounted as folder, therefore, we are selecting the file from folder
+ test_df = pd.read_csv(select_first_file(args.test_data))
-Now use your pipeline definition to instantiate a pipeline with your dataset, split rate of choice and the name you picked for your model.
+ # Extracting the label column
+ y_test = test_df.pop("default payment next month")
-[!Notebook-python[] (~/azureml-examples-main/tutorials/e2e-ds-experience/e2e-ml-workflow.ipynb?name=registered_model_name)]
+ # convert the dataframe values to array
+ X_test = test_df.values
-## Submit the job
+ print(f"Training with data of shape {X_train.shape}")
-It's now time to submit the job to run in Azure Machine Learning. This time you'll use `create_or_update` on `ml_client.jobs`.
+ clf = GradientBoostingClassifier(
+ n_estimators=args.n_estimators, learning_rate=args.learning_rate
+ )
+ clf.fit(X_train, y_train)
-Here you'll also pass an experiment name. An experiment is a container for all the iterations one does on a certain project. All the jobs submitted under the same experiment name would be listed next to each other in Azure Machine Learning studio.
+ y_pred = clf.predict(X_test)
-Once completed, the pipeline will register a model in your workspace as a result of training.
+ print(classification_report(y_test, y_pred))
-[!Notebook-python[] (~/azureml-examples-main/tutorials/e2e-ds-experience/e2e-ml-workflow.ipynb?name=returned_job)]
+ # Registering the model to the workspace
+ print("Registering the model via MLFlow")
+ mlflow.sklearn.log_model(
+ sk_model=clf,
+ registered_model_name=args.registered_model_name,
+ artifact_path=args.registered_model_name,
+ )
-An output of "False" is expected from the above cell. You can track the progress of your pipeline, by using the link generated in the cell above.
+ # Saving the model to a file
+ mlflow.sklearn.save_model(
+ sk_model=clf,
+ path=os.path.join(args.model, "trained_model"),
+ )
-When you select on each component, you'll see more information about the results of that component.
-There are two important parts to look for at this stage:
-* `Outputs+logs` > `user_logs` > `std_log.txt`
-This section shows the script run sdtout.
+ # Stop Logging
+ mlflow.end_run()
- :::image type="content" source="media/tutorial-pipeline-python-sdk/user-logs.jpg" alt-text="Screenshot of std_log.txt." lightbox="media/tutorial-pipeline-python-sdk/user-logs.jpg":::
+if __name__ == "__main__":
+ main()
+```
-* `Outputs+logs` > `Metric`
-This section shows different logged metrics. In this example. mlflow `autologging`, has automatically logged the training metrics.
+As you can see in this training script, once the model is trained, the model file is saved and registered to the workspace. Now you can use the registered model in inferencing endpoints.
- :::image type="content" source="media/tutorial-pipeline-python-sdk/metrics.jpg" alt-text="Screenshot shows logged metrics.txt." lightbox="media/tutorial-pipeline-python-sdk/metrics.jpg":::
+For the environment of this step, you use one of the built-in (curated) Azure Machine Learning environments. The `azureml` tag tells the system to look for the name in curated environments.
+First, create the *yaml* file describing the component:
-## Deploy the model as an online endpoint
-Now deploy your machine learning model as a web service in the Azure cloud, an [`online endpoint`](concept-endpoints.md).
-To deploy a machine learning service, you usually need:
+```python
+%%writefile {train_src_dir}/train.yml
+# <component>
+name: train_credit_defaults_model
+display_name: Train Credit Defaults Model
+# version: 1 # Not specifying a version will automatically update the version
+type: command
+inputs:
+ train_data:
+ type: uri_folder
+ test_data:
+ type: uri_folder
+ learning_rate:
+ type: number
+ registered_model_name:
+ type: string
+outputs:
+ model:
+ type: uri_folder
+code: .
+environment:
+  # for this step, we'll use an AzureML curated environment
+ azureml:AzureML-sklearn-1.0-ubuntu20.04-py38-cpu:1
+command: >-
+ python train.py
+ --train_data ${{inputs.train_data}}
+ --test_data ${{inputs.test_data}}
+ --learning_rate ${{inputs.learning_rate}}
+ --registered_model_name ${{inputs.registered_model_name}}
+ --model ${{outputs.model}}
+# </component>
+
+```
+
+Now create and register the component. Registering it allows you to re-use it in other pipelines. Also, anyone else with access to your workspace can use the registered component.
++
+```python
+# importing the Component Package
+from azure.ai.ml import load_component
+
+# Loading the component from the yml file
+train_component = load_component(source=os.path.join(train_src_dir, "train.yml"))
+
+# Now we register the component to the workspace
+train_component = ml_client.create_or_update(train_component)
+
+# Create (register) the component in your workspace
+print(
+ f"Component {train_component.name} with Version {train_component.version} is registered"
+)
+```
+
+### Create the pipeline from components
-* The model assets (filed, metadata) that you want to deploy. You've already registered these assets in your training component.
-* Some code to run as a service. The code executes the model on a given input request. This entry script receives data submitted to a deployed web service and passes it to the model, then returns the model's response to the client. The script is specific to your model. The entry script must understand the data that the model expects and returns. When using a MLFlow model, as in this tutorial, this script is automatically created for you
+Now that both your components are defined and registered, you can start implementing the pipeline.
-## Create a new online endpoint
-Now that you have a registered model and an inference script, it's time to create your online endpoint. The endpoint name needs to be unique in the entire Azure region. For this tutorial, you'll create a unique name using [`UUID`](https://en.wikipedia.org/wiki/Universally_unique_identifier).
+Here, you use *input data*, *split ratio* and *registered model name* as input variables. Then call the components and connect them via their inputs/outputs identifiers. The outputs of each step can be accessed via the `.outputs` property.
-[!Notebook-python[] (~/azureml-examples-main/tutorials/e2e-ds-experience/e2e-ml-workflow.ipynb?name=online_endpoint_name)]
-[!Notebook-python[] (~/azureml-examples-main/tutorials/e2e-ds-experience/e2e-ml-workflow.ipynb?name=endpoint)]
+The Python functions returned by `load_component()` work as any regular Python function that we use within a pipeline to call each step.
-Once you've created an endpoint, you can retrieve it as below:
+To code the pipeline, you use a specific `@dsl.pipeline` decorator that identifies the Azure Machine Learning pipelines. In the decorator, we can specify the pipeline description and default resources like compute and storage. Like a Python function, pipelines can have inputs. You can then create multiple instances of a single pipeline with different inputs.
-[!Notebook-python[] (~/azureml-examples-main/tutorials/e2e-ds-experience/e2e-ml-workflow.ipynb?name=update-endpoint)]
+The pipeline definition in the next cell applies this pattern: it uses the pipeline inputs, calls the two components, and connects them via their inputs/outputs identifiers.
-## Deploy the model to the endpoint
-Once the endpoint is created, deploy the model with the entry script. Each endpoint can have multiple deployments and direct traffic to these deployments can be specified using rules. Here you'll create a single deployment that handles 100% of the incoming traffic. We have chosen a color name for the deployment, for example, *blue*, *green*, *red* deployments, which is arbitrary.
+```python
+# the dsl decorator tells the sdk that we are defining an Azure Machine Learning pipeline
+from azure.ai.ml import dsl, Input, Output
++
+@dsl.pipeline(
+ compute=cpu_compute_target,
+ description="E2E data_perp-train pipeline",
+)
+def credit_defaults_pipeline(
+ pipeline_job_data_input,
+ pipeline_job_test_train_ratio,
+ pipeline_job_learning_rate,
+ pipeline_job_registered_model_name,
+):
+ # using data_prep_function like a python call with its own inputs
+ data_prep_job = data_prep_component(
+ data=pipeline_job_data_input,
+ test_train_ratio=pipeline_job_test_train_ratio,
+ )
+
+ # using train_func like a python call with its own inputs
+ train_job = train_component(
+ train_data=data_prep_job.outputs.train_data, # note: using outputs from previous step
+ test_data=data_prep_job.outputs.test_data, # note: using outputs from previous step
+ learning_rate=pipeline_job_learning_rate, # note: using a pipeline input as parameter
+ registered_model_name=pipeline_job_registered_model_name,
+ )
+
+ # a pipeline returns a dictionary of outputs
+ # keys will code for the pipeline output identifier
+ return {
+ "pipeline_job_train_data": data_prep_job.outputs.train_data,
+ "pipeline_job_test_data": data_prep_job.outputs.test_data,
+ }
+```
-You can check the *Models* page on the Azure Machine Learning studio, to identify the latest version of your registered model. Alternatively, the code below will retrieve the latest version number for you to use.
+Now use your pipeline definition to instantiate a pipeline with your dataset, split rate of choice and the name you picked for your model.
-[!Notebook-python[] (~/azureml-examples-main/tutorials/e2e-ds-experience/e2e-ml-workflow.ipynb?name=latest_model_version)]
-Deploy the latest version of the model.
+```python
+registered_model_name = "credit_defaults_model"
-> [!NOTE]
-> Expect this deployment to take approximately 6 to 8 minutes.
+# Let's instantiate the pipeline with the parameters of our choice
+pipeline = credit_defaults_pipeline(
+ pipeline_job_data_input=Input(type="uri_file", path=credit_data.path),
+ pipeline_job_test_train_ratio=0.25,
+ pipeline_job_learning_rate=0.05,
+ pipeline_job_registered_model_name=registered_model_name,
+)
+```
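+
+Optionally, before submitting, you can adjust pipeline-level settings on the returned pipeline job object; a sketch (assuming the default `settings` attribute that the SDK exposes on pipeline jobs) follows:
+
+```python
+# Optional pipeline-level settings before submission (not required here):
+# run every step on the cluster created earlier, and allow cached step
+# outputs to be reused when inputs and code haven't changed.
+pipeline.settings.default_compute = cpu_compute_target
+pipeline.settings.force_rerun = False
+```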
+
+## Submit the job
+
+It's now time to submit the job to run in Azure Machine Learning. This time you use `create_or_update` on `ml_client.jobs`.
-[!Notebook-python[] (~/azureml-examples-main/tutorials/e2e-ds-experience/e2e-ml-workflow.ipynb?name=model)]
+Here you also pass an experiment name. An experiment is a container for all the iterations one does on a certain project. All the jobs submitted under the same experiment name would be listed next to each other in Azure Machine Learning studio.
-### Test with a sample query
+Once completed, the pipeline registers a model in your workspace as a result of training.
-Now that the model is deployed to the endpoint, you can run inference with it.
-Create a sample request file following the design expected in the run method in the score script.
+```python
+# submit the pipeline job
+pipeline_job = ml_client.jobs.create_or_update(
+ pipeline,
+ # Project's name
+ experiment_name="e2e_registered_components",
+)
+ml_client.jobs.stream(pipeline_job.name)
+```
-[!Notebook-python[] (~/azureml-examples-main/tutorials/e2e-ds-experience/e2e-ml-workflow.ipynb?name=sample-request.json)]
+You can track the progress of your pipeline by using the link generated in the previous cell. When you first select this link, you may see that the pipeline is still running. Once it's complete, you can examine each component's results.
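+
+You can also check the job state programmatically (a small optional sketch using `ml_client.jobs.get`):
+
+```python
+# Query the submitted pipeline job's current status from the workspace.
+latest_run = ml_client.jobs.get(pipeline_job.name)
+print(f"Pipeline job {latest_run.name} status: {latest_run.status}")
+```
+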
-[!Notebook-python[] (~/azureml-examples-main/tutorials/e2e-ds-experience/e2e-ml-workflow.ipynb?name=write-sample-request)]
+Double-click the **Train Credit Defaults Model** component.
+
+There are two important results you'll want to see about training:
+
+* View your logs:
+ 1. Select the **Outputs+logs** tab.
+    1. Open the folders `user_logs` > `std_log.txt`. This section shows the script run stdout.
+
+ :::image type="content" source="media/tutorial-pipeline-python-sdk/user-logs.jpg" alt-text="Screenshot of std_log.txt." lightbox="media/tutorial-pipeline-python-sdk/user-logs.jpg":::
+
+* View your metrics: Select the **Metrics** tab. This section shows different logged metrics. In this example, MLflow `autologging` has automatically logged the training metrics.
+
+ :::image type="content" source="media/tutorial-pipeline-python-sdk/metrics.jpg" alt-text="Screenshot shows logged metrics.txt." lightbox="media/tutorial-pipeline-python-sdk/metrics.jpg":::
+
+## Deploy the model as an online endpoint
+To learn how to deploy your model to an online endpoint, see [Deploy a model as an online endpoint tutorial](tutorial-deploy-model.md).
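+
+Before you move on, you can confirm that training registered a model under the name you chose (an optional check using the models list operation):
+
+```python
+# Optional: confirm the pipeline registered a model under the chosen name.
+for m in ml_client.models.list(name=registered_model_name):
+    print(f"Registered model: {m.name}, version: {m.version}")
+```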
++
+<!-- nbend -->
-[!Notebook-python[] (~/azureml-examples-main/tutorials/e2e-ds-experience/e2e-ml-workflow.ipynb?name=ml_client.online_endpoints.invoke)]
## Clean up resources
-If you're not going to use the endpoint, delete it to stop using the resource. Make sure no other deployments are using an endpoint before you delete it.
+If you plan to continue now to other tutorials, skip to [Next steps](#next-steps).
-> [!NOTE]
-> Expect this step to take approximately 6 to 8 minutes.
+### Stop compute instance
+
+If you're not going to use it now, stop the compute instance:
+
+1. In the studio, in the left navigation area, select **Compute**.
+1. In the top tabs, select **Compute instances**
+1. Select the compute instance in the list.
+1. On the top toolbar, select **Stop**.
+
+### Delete all resources
-[!Notebook-python[] (~/azureml-examples-main/tutorials/e2e-ds-experience/e2e-ml-workflow.ipynb?name=ml_client.online_endpoints.begin_delete)]
## Next steps

> [!div class="nextstepaction"]
-> Learn more about [Azure Machine Learning logging](./how-to-use-mlflow-cli-runs.md).
+> Learn how to [Schedule machine learning pipeline jobs](how-to-schedule-pipeline-job.md)
machine-learning Tutorial Train Deploy Image Classification Model Vscode https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/tutorial-train-deploy-image-classification-model-vscode.md
#Customer intent: As a professional data scientist, I want to learn how to train an image classification model using TensorFlow and the Azure Machine Learning Visual Studio Code Extension.
-# Train an image classification TensorFlow model using the Azure Machine Learning Visual Studio Code Extension (preview)
+# Tutorial: Train an image classification TensorFlow model using the Azure Machine Learning Visual Studio Code Extension (preview)
[!INCLUDE [cli v2](../../includes/machine-learning-cli-v2.md)]
machine-learning Tutorial Train Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/tutorial-train-model.md
+
+ Title: "Tutorial: Train a model"
+
+description: Dive into the process of training a model
+++++++ Last updated : 03/15/2023
+#Customer intent: As a data scientist, I want to know how to prototype and develop machine learning models on a cloud workstation.
++
+# Tutorial: Train a model in Azure Machine Learning
++
+Learn how a data scientist uses Azure Machine Learning to train a model. In this example, we use the associated credit card dataset to show how you can use Azure Machine Learning for a classification problem. The goal is to predict if a customer has a high likelihood of defaulting on a credit card payment.
+
+The training script handles the data preparation, then trains and registers a model. This tutorial takes you through steps to submit a cloud-based training job (command job). If you would like to learn more about how to load your data into Azure, see [Tutorial: Upload, access and explore your data in Azure Machine Learning](tutorial-explore-data.md).
+The steps are:
+
+> [!div class="checklist"]
+> * Get a handle to your Azure Machine Learning workspace
+> * Create your compute resource and job environment
+> * Create your training script
+> * Create and run your command job to run the training script on the compute resource, configured with the appropriate job environment and the data source
+> * View the output of your training script
+> * Deploy the newly-trained model as an endpoint
+> * Call the Azure Machine Learning endpoint for inferencing
+
+## Prerequisites
+
+1. [!INCLUDE [workspace](includes/prereq-workspace.md)]
+
+1. [!INCLUDE [sign in](includes/prereq-sign-in.md)]
+
+1. [!INCLUDE [open or create notebook](includes/prereq-open-or-create.md)]
+ * [!INCLUDE [new notebook](includes/prereq-new-notebook.md)]
+ * Or, open **tutorials/get-started-notebooks/train-model.ipynb** from the **Samples** section of studio. [!INCLUDE [clone notebook](includes/prereq-clone-notebook.md)]
+++
+<!-- nbstart https://raw.githubusercontent.com/Azure/azureml-examples/main/tutorials/get-started-notebooks/train-model.ipynb -->
+
+## Use a command job to train a model in Azure Machine Learning
+
+To train a model, you need to submit a *job*. Azure Machine Learning offers several different types of jobs to train models, and you can select a method of training based on the complexity of the model, the data size, and training speed requirements. In this tutorial, you'll learn how to submit a *command job* to run a *training script*.
+
+A command job in Azure Machine Learning is a type of job that runs a script or command in a specified environment. You can use command jobs to train models, process data, or run any other custom code you want to execute in the cloud. Here, you use a command job to submit a custom training script that trains your model; this is sometimes called a custom training job.
+
+In this tutorial, we'll focus on using a command job to create a custom training job that we'll use to train a model. Any custom training job requires the following items:
+
+* compute resource (usually a compute cluster, which we recommend for scalability)
+* environment
+* data
+* command job
+* training script
++
+In this tutorial we'll provide all these items for our example: creating a classifier to predict customers who have a high likelihood of defaulting on credit card payments.
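+
+To make the shape of a command job concrete before you build each piece, here's a minimal sketch of the SDK v2 `command()` helper. The placeholder values below are assumptions; the real code folder, data path, environment, and compute are created in the sections that follow, so this sketch isn't meant to be run as-is.
+
+```python
+from azure.ai.ml import command, Input
+
+# Illustrative sketch only: placeholder values stand in for resources that
+# are created later in this tutorial.
+sketch_job = command(
+    code="./src",  # folder that will contain the training script
+    command="python main.py --data ${{inputs.data}}",
+    inputs={"data": Input(type="uri_file", path="<PATH_OR_URL_TO_DATA>")},
+    environment="<ENVIRONMENT_NAME>:<VERSION>",
+    compute="cpu-cluster",
+    display_name="credit_default_sketch",
+)
+# Submitting would look like: ml_client.create_or_update(sketch_job)
+```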
++
+## Create handle to workspace
+
+Before we dive into the code, you need a way to reference your workspace. You'll create `ml_client` as a handle to the workspace, and then use `ml_client` to manage resources and jobs.
+
+In the next cell, enter your Subscription ID, Resource Group name and Workspace name. To find these values:
+
+1. In the upper right Azure Machine Learning studio toolbar, select your workspace name.
+1. Copy the values for workspace name, resource group, and subscription ID into the code.
+1. You'll need to copy one value, close the area and paste it, then come back for the next one.
++
+```python
+from azure.ai.ml import MLClient
+from azure.identity import DefaultAzureCredential
+
+# authenticate
+credential = DefaultAzureCredential()
+# Get a handle to the workspace
+ml_client = MLClient(
+ credential=credential,
+ subscription_id="<SUBSCRIPTION_ID>",
+ resource_group_name="<RESOURCE_GROUP>",
+ workspace_name="<AML_WORKSPACE_NAME>",
+)
+```
+
+> [!NOTE]
+> Creating MLClient won't connect to the workspace. The client initialization is lazy; it waits until the first time it needs to make a call (this happens in the next code cell).
+
+## Create a compute cluster to run your job
+
+In Azure, a job can refer to several tasks that Azure allows its users to do: training, pipeline creation, deployment, etc. For this tutorial and our purpose of training a machine learning model, we'll use *job* as a reference to running training computations (*training job*).
+
+You need a compute resource for running any job in Azure Machine Learning. It can be single- or multi-node machines with Linux or Windows OS, or a specific compute fabric like Spark. In Azure Machine Learning, there are two compute resources you can choose from: a compute instance and a compute cluster. A *compute instance* contains a single node of computation resources, while a *compute cluster* contains several nodes, and therefore more total memory and processing power. For training, we recommend using a compute cluster because it lets you distribute calculations across multiple nodes, which results in a faster training experience.
+
+You provision a Linux compute cluster. See the [full list on VM sizes and prices](https://azure.microsoft.com/pricing/details/machine-learning/).
+
+For this example, you only need a basic cluster, so you use a Standard_DS3_v2 model with 2 vCPU cores and 7 GB of RAM.
++
+```python
+from azure.ai.ml.entities import AmlCompute
+
+# Name assigned to the compute cluster
+cpu_compute_target = "cpu-cluster"
+
+try:
+ # let's see if the compute target already exists
+ cpu_cluster = ml_client.compute.get(cpu_compute_target)
+ print(
+ f"You already have a cluster named {cpu_compute_target}, we'll reuse it as is."
+ )
+
+except Exception:
+ print("Creating a new cpu compute target...")
+
+ # Let's create the Azure Machine Learning compute object with the intended parameters
+ cpu_cluster = AmlCompute(
+ name=cpu_compute_target,
+ # Azure Machine Learning Compute is the on-demand VM service
+ type="amlcompute",
+ # VM Family
+ size="STANDARD_DS3_V2",
+ # Minimum running nodes when there is no job running
+ min_instances=0,
+ # Nodes in cluster
+ max_instances=4,
+    # How many seconds the node will run after the job terminates
+ idle_time_before_scale_down=180,
+ # Dedicated or LowPriority. The latter is cheaper but there is a chance of job termination
+ tier="Dedicated",
+ )
+ print(
+ f"AMLCompute with name {cpu_cluster.name} will be created, with compute size {cpu_cluster.size}"
+ )
+ # Now, we pass the object to MLClient's create_or_update method
+ cpu_cluster = ml_client.compute.begin_create_or_update(cpu_cluster)
+```
+
+## Create a job environment
+
+To run your Azure Machine Learning job on your compute resource, you need an [environment](concept-environments.md). An environment lists the software runtime and libraries that you want installed on the compute where you'll be training. It's similar to your Python environment on your local machine.
+
+Azure Machine Learning provides many curated or ready-made environments, which are useful for common training and inference scenarios.
+
+In this example, you'll create a custom conda environment for your jobs, using a conda yaml file.
+
+First, create a directory to store the file in.
++
+```python
+import os
+
+dependencies_dir = "./dependencies"
+os.makedirs(dependencies_dir, exist_ok=True)
+```
+
+The cell below uses IPython magic to write the conda file into the directory you just created.
++
+```python
+%%writefile {dependencies_dir}/conda.yml
+name: model-env
+channels:
+ - conda-forge
+dependencies:
+ - python=3.8
+ - numpy=1.21.2
+ - pip=21.2.4
+ - scikit-learn=0.24.2
+ - scipy=1.7.1
+ - pandas>=1.1,<1.2
+ - pip:
+ - inference-schema[numpy-support]==1.3.0
+ - mlflow== 1.26.1
+ - azureml-mlflow==1.42.0
+ - psutil>=5.8,<5.9
+ - tqdm>=4.59,<4.60
+ - ipykernel~=6.0
+ - matplotlib
+```
++
+The specification contains some usual packages that you'll use in your job (numpy, pip).
+
+Reference this *yaml* file to create and register this custom environment in your workspace:
++
+```python
+from azure.ai.ml.entities import Environment
+
+custom_env_name = "aml-scikit-learn"
+
+custom_job_env = Environment(
+ name=custom_env_name,
+ description="Custom environment for Credit Card Defaults job",
+ tags={"scikit-learn": "0.24.2"},
+ conda_file=os.path.join(dependencies_dir, "conda.yml"),
+ image="mcr.microsoft.com/azureml/openmpi3.1.2-ubuntu18.04:latest",
+)
+custom_job_env = ml_client.environments.create_or_update(custom_job_env)
+
+print(
+ f"Environment with name {custom_job_env.name} is registered to workspace, the environment version is {custom_job_env.version}"
+)
+```
+
+## Configure a training job using the command function
+
+You create an Azure Machine Learning *command job* to train a model for credit default prediction. The command job runs a *training script* in a specified environment on a specified compute resource. You've already created the environment and the compute cluster. Next you'll create the training script. In this case, you train a classifier on the dataset by using the `GradientBoostingClassifier` model.
+
+The *training script* handles the data preparation, training, and registration of the trained model. The `train_test_split` method splits the dataset into test and training data. In this tutorial, you'll create a Python training script.
+
+Command jobs can be run from CLI, Python SDK, or studio interface. In this tutorial, you'll use the Azure Machine Learning Python SDK v2 to create and run the command job.
+
+## Create training script
+
+Let's start by creating the training script - the *main.py* Python file.
+
+First create a source folder for the script:
++
+```python
+import os
+
+train_src_dir = "./src"
+os.makedirs(train_src_dir, exist_ok=True)
+```
+
+This script handles the preprocessing of the data, splitting it into test and train data. It then consumes this data to train a tree-based model and return the output model.
+
+[MLflow](concept-mlflow.md) is used to log the parameters and metrics during the job. The MLflow package lets you keep track of metrics and results for each model you train. We'll use MLflow first to get the best model for our data, and then view the model's metrics in Azure Machine Learning studio.
++
+```python
+%%writefile {train_src_dir}/main.py
+import os
+import argparse
+import pandas as pd
+import mlflow
+import mlflow.sklearn
+from sklearn.ensemble import GradientBoostingClassifier
+from sklearn.metrics import classification_report
+from sklearn.model_selection import train_test_split
+
+def main():
+ """Main function of the script."""
+
+ # input and output arguments
+ parser = argparse.ArgumentParser()
+ parser.add_argument("--data", type=str, help="path to input data")
+ parser.add_argument("--test_train_ratio", type=float, required=False, default=0.25)
+ parser.add_argument("--n_estimators", required=False, default=100, type=int)
+ parser.add_argument("--learning_rate", required=False, default=0.1, type=float)
+ parser.add_argument("--registered_model_name", type=str, help="model name")
+ args = parser.parse_args()
+
+ # Start Logging
+ mlflow.start_run()
+
+ # enable autologging
+ mlflow.sklearn.autolog()
+
+ ###################
+ #<prepare the data>
+ ###################
+ print(" ".join(f"{k}={v}" for k, v in vars(args).items()))
+
+ print("input data:", args.data)
+
+ credit_df = pd.read_csv(args.data, header=1, index_col=0)
+
+ mlflow.log_metric("num_samples", credit_df.shape[0])
+ mlflow.log_metric("num_features", credit_df.shape[1] - 1)
+
+ #Split train and test datasets
+ train_df, test_df = train_test_split(
+ credit_df,
+ test_size=args.test_train_ratio,
+ )
+ ####################
+ #</prepare the data>
+ ####################
+
+ ##################
+ #<train the model>
+ ##################
+ # Extracting the label column
+ y_train = train_df.pop("default payment next month")
+
+ # convert the dataframe values to array
+ X_train = train_df.values
+
+ # Extracting the label column
+ y_test = test_df.pop("default payment next month")
+
+ # convert the dataframe values to array
+ X_test = test_df.values
+
+ print(f"Training with data of shape {X_train.shape}")
+
+ clf = GradientBoostingClassifier(
+ n_estimators=args.n_estimators, learning_rate=args.learning_rate
+ )
+ clf.fit(X_train, y_train)
+
+ y_pred = clf.predict(X_test)
+
+ print(classification_report(y_test, y_pred))
+ ###################
+ #</train the model>
+ ###################
+
+ ##########################
+ #<save and register model>
+ ##########################
+ # Registering the model to the workspace
+ print("Registering the model via MLFlow")
+ mlflow.sklearn.log_model(
+ sk_model=clf,
+ registered_model_name=args.registered_model_name,
+ artifact_path=args.registered_model_name,
+ )
+
+ # Saving the model to a file
+ mlflow.sklearn.save_model(
+ sk_model=clf,
+ path=os.path.join(args.registered_model_name, "trained_model"),
+ )
+ ###########################
+ #</save and register model>
+ ###########################
+
+ # Stop Logging
+ mlflow.end_run()
+
+if __name__ == "__main__":
+ main()
+```
+
+In this script, once the model is trained, the model file is saved and registered to the workspace. Registering your model allows you to store and version your models in the Azure cloud, in your workspace. Once you register a model, you can find it along with all your other registered models in one place in Azure Machine Learning studio, called the model registry. The model registry helps you organize and keep track of your trained models.
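+
+If you want to confirm the registration from code once the job has finished, you can query the model registry with the same `ml_client`. The following is a minimal sketch, not part of the original tutorial; it assumes the model was registered under the name you pass to the script (`credit_defaults_model` is used later in this tutorial).
+
+```python
+# Minimal sketch: list the registered versions of a model after the training job completes.
+# Assumes the name below matches the --registered_model_name argument passed to main.py.
+model_name = "credit_defaults_model"
+
+for model in ml_client.models.list(name=model_name):
+    print(f"{model.name} (version {model.version})")
+```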
+
+## Configure the command
+
+Now that you have a script that can perform the classification task, use the general-purpose **command** that can run command-line actions. This command-line action can directly call system commands or run a script.
+
+Here, create input variables to specify the input data, split ratio, learning rate and registered model name. The command script will:
+* Use the compute created earlier to run this command.
+* Use the environment created earlier - you can use the `@latest` notation to indicate the latest version of the environment when the command is run.
+* Configure the command line action itself - `python main.py` in this case. The inputs/outputs are accessible in the command via the `${{ ... }}` notation.
++
+```python
+from azure.ai.ml import command
+from azure.ai.ml import Input
+
+registered_model_name = "credit_defaults_model"
+
+job = command(
+ inputs=dict(
+ data=Input(
+ type="uri_file",
+ path="https://azuremlexamples.blob.core.windows.net/datasets/credit_card/default_of_credit_card_clients.csv",
+ ),
+ test_train_ratio=0.2,
+ learning_rate=0.25,
+ registered_model_name=registered_model_name,
+ ),
+ code="./src/", # location of source code
+ command="python main.py --data ${{inputs.data}} --test_train_ratio ${{inputs.test_train_ratio}} --learning_rate ${{inputs.learning_rate}} --registered_model_name ${{inputs.registered_model_name}}",
+ environment="aml-scikit-learn@latest",
+ compute="cpu-cluster",
+ display_name="credit_default_prediction",
+)
+```
+
+## Submit the job
+
+It's now time to submit the job to run in Azure Machine Learning. This time, you'll use `create_or_update` on `ml_client`. `ml_client` is a client class that lets you connect to your Azure subscription using Python and interact with Azure Machine Learning services, including submitting jobs.
++
+```python
+ml_client.create_or_update(job)
+```
+
+## View job output and wait for job completion
+
+View the job in Azure Machine Learning studio by selecting the link in the output of the previous cell. Explore the job's tabs for various details like metrics, outputs, and logs. Once completed, the job registers a model in your workspace as a result of training.
++
+> [!IMPORTANT]
+> Wait until the status of the job is complete before returning to this notebook to continue. The job will take 2 to 3 minutes to run. It could take longer (up to 10 minutes) if the compute cluster has been scaled down to zero nodes and the custom environment is still building.
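+
+If you prefer to wait from the notebook instead of watching the studio, you can stream the job logs until the run finishes. This is a minimal sketch, not part of the original tutorial; it assumes you captured the job object returned by `create_or_update` (for example, `returned_job = ml_client.create_or_update(job)`).
+
+```python
+# Minimal sketch: block until the submitted job finishes, streaming its logs to the notebook.
+# Assumes the job was submitted as: returned_job = ml_client.create_or_update(job)
+ml_client.jobs.stream(returned_job.name)
+```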
+
+When you run the cell, the notebook output shows a link to the job's details page in Azure Machine Learning studio. Alternatively, you can select **Jobs** in the left navigation menu. A job is a grouping of many runs from a specified script or piece of code. Information for the run is stored under that job. The details page gives an overview of the job, the time it took to run, when it was created, and so on. The page also has tabs for other information about the job, such as metrics, Outputs + logs, and code. These are the tabs available on the job's details page:
+
+* Overview: The overview section provides basic information about the job, including its status, start and end times, and the type of job that was run.
+* Inputs: The input section lists the data and code that were used as inputs for the job. This section can include datasets, scripts, environment configurations, and other resources that were used during training.
+* Outputs + logs: The Outputs + logs tab contains logs generated while the job was running. This tab assists in troubleshooting if anything goes wrong with your training script or model creation.
+* Metrics: The metrics tab showcases key performance metrics from your model such as training score, f1 score, and precision score.
+
+<!-- nbend -->
++++
+## Clean up resources
+
+If you plan to continue now to other tutorials, skip to [Next steps](#next-steps).
+
+### Stop compute instance
+
+If you're not going to use it now, stop the compute instance:
+
+1. In the studio, in the left navigation area, select **Compute**.
+1. In the top tabs, select **Compute instances**.
+1. Select the compute instance in the list.
+1. On the top toolbar, select **Stop**.
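+
+Alternatively, you can stop the instance from the Python SDK. This is a minimal sketch, not part of the original tutorial; it assumes `ml_client` is still connected to your workspace and uses a placeholder compute instance name.
+
+```python
+# Minimal sketch: stop a compute instance by name.
+# Replace "my-compute-instance" with the name of your compute instance.
+ml_client.compute.begin_stop("my-compute-instance").result()
+```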
+
+### Delete all resources
+++
+## Next steps
+
+Learn about deploying a model
+
+> [!div class="nextstepaction"]
+> [Deploy a model](tutorial-deploy-model.md).
+
+This tutorial used an online data file. To learn more about other ways to access data, see [Tutorial: Upload, access and explore your data in Azure Machine Learning](tutorial-explore-data.md).
+
+If you would like to learn more about different ways to train models in Azure Machine Learning, see [What is automated machine learning (AutoML)?](concept-automated-ml.md). Automated ML is a supplemental tool to reduce the amount of time a data scientist spends finding a model that works best with their data.
+
+If you would like more examples similar to this tutorial, see the [**Samples**](quickstart-create-resources.md#learn-from-sample-notebooks) section of studio. The same samples are available on our [GitHub examples page](https://github.com/Azure/azureml-examples). The examples include complete Python notebooks that you can run to train a model. You can modify and run existing scripts from the samples, which cover scenarios including classification, natural language processing, and anomaly detection.
machine-learning How To Configure Environment V1 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-configure-environment-v1.md
The Azure Machine Learning [compute instance](../concept-compute-instance.md) is
There is nothing to install or configure for a compute instance.
-Create one anytime from within your Azure Machine Learning workspace. Provide just a name and specify an Azure VM type. Try it now with this [Tutorial: Setup environment and workspace](../quickstart-create-resources.md).
+Create one anytime from within your Azure Machine Learning workspace. Provide just a name and specify an Azure VM type. Try it now with [Create resources to get started](../quickstart-create-resources.md).
To learn more about compute instances, including how to install packages, see [Create and manage an Azure Machine Learning compute instance](../how-to-create-manage-compute-instance.md).
machine-learning How To Deploy Local Container Notebook Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-deploy-local-container-notebook-vm.md
Learn how to use Azure Machine Learning to deploy a model as a web service on yo
## Prerequisites -- An Azure Machine Learning workspace with a compute instance running. For more information, see [Quickstart: Get started with Azure Machine Learning](../quickstart-create-resources.md).
+- An Azure Machine Learning workspace with a compute instance running. For more information, see [Create resources to get started](../quickstart-create-resources.md).
## Deploy to the compute instances
machine-learning How To Extend Prebuilt Docker Image Inference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-extend-prebuilt-docker-image-inference.md
Using a Dockerfile allows for full customization of the image before deployment.
The main tradeoff for this approach is that an extra image build will take place during deployment, which slows down the deployment process. If you can use the [Python package extensibility](./how-to-prebuilt-docker-images-inference-python-extensibility.md) method, deployment will be faster. ## Prerequisites
-* An Azure Machine Learning workspace. For a tutorial on creating a workspace, see [Get started with Azure Machine Learning](../quickstart-create-resources.md).
+* An Azure Machine Learning workspace. For a tutorial on creating a workspace, see [Create resources to get started](../quickstart-create-resources.md).
* Familiarity with authoring a [Dockerfile](https://docs.docker.com/engine/reference/builder/). * Either a local working installation of [Docker](https://www.docker.com/), including the `docker` CLI, **OR** an Azure Container Registry (ACR) associated with your Azure Machine Learning workspace.
machine-learning How To Monitor Tensorboard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-monitor-tensorboard.md
How you launch TensorBoard with Azure Machine Learning experiments depends on th
* To launch TensorBoard and view your experiment job histories, your experiments need to have previously enabled logging to track its metrics and performance. * The code in this document can be run in either of the following environments: * Azure Machine Learning compute instance - no downloads or installation necessary
- * Complete the [Quickstart: Get started with Azure Machine Learning](../quickstart-create-resources.md) to create a dedicated notebook server pre-loaded with the SDK and the sample repository.
+ * Complete [Create resources to get started](../quickstart-create-resources.md) to create a dedicated notebook server pre-loaded with the SDK and the sample repository.
* In the samples folder on the notebook server, find two completed and expanded notebooks by navigating to these directories: * **SDK v1 > how-to-use-azureml > track-and-monitor-experiments > tensorboard > export-run-history-to-tensorboard > export-run-history-to-tensorboard.ipynb** * **SDK v1 > how-to-use-azureml > track-and-monitor-experiments > tensorboard > tensorboard > tensorboard.ipynb**
machine-learning How To Train With Custom Image https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-train-with-custom-image.md
Azure Machine Learning provides a default Docker base image. You can also use Az
Run the code on either of these environments: * Azure Machine Learning compute instance (no downloads or installation necessary):
- * Complete the [Quickstart: Get started with Azure Machine Learning](../quickstart-create-resources.md) tutorial to create a dedicated notebook server preloaded with the SDK and the sample repository.
+ * Complete the [Create resources to get started](../quickstart-create-resources.md) tutorial to create a dedicated notebook server preloaded with the SDK and the sample repository.
* Your own Jupyter Notebook server: * Create a [workspace configuration file](../how-to-configure-environment.md#local-and-dsvm-only-create-a-workspace-configuration-file). * Install the [Azure Machine Learning SDK](/python/api/overview/azure/ml/install).
machine-learning Samples Notebooks V1 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/samples-notebooks-v1.md
This article shows you how to access the repositories from the following environ
## Option 1: Access on Azure Machine Learning compute instance (recommended)
-The easiest way to get started with the samples is to complete the [Quickstart: Get started with Azure Machine Learning](../quickstart-create-resources.md). Once completed, you'll have a dedicated notebook server pre-loaded with the SDK and the Azure Machine Learning Notebooks repository. No downloads or installation necessary.
+The easiest way to get started with the samples is to complete [Create resources you need to get started](../quickstart-create-resources.md). Once completed, you'll have a dedicated notebook server pre-loaded with the SDK and the Azure Machine Learning Notebooks repository. No downloads or installation necessary.
To view example notebooks: 1. Sign in to [studio](https://ml.azure.com) and select your workspace if necessary.
machine-learning Tutorial 1St Experiment Hello World https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/tutorial-1st-experiment-hello-world.md
In this tutorial, you will:
## Prerequisites -- Complete [Quickstart: Set up your workspace to get started with Azure Machine Learning](../quickstart-create-resources.md) to create a workspace, compute instance, and compute cluster to use in this tutorial series.
+- Complete [Create resources you need to get started](../quickstart-create-resources.md) to create a workspace and compute instance to use in this tutorial series.
+- [Create a cloud-based compute cluster](how-to-create-attach-compute-cluster.md#create). Name it 'cpu-cluster' to match the code in this tutorial.
## Create and run a Python script
machine-learning Tutorial Pipeline Python Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/tutorial-pipeline-python-sdk.md
If you don't have an Azure subscription, create a free account before you begin.
## Prerequisites
-* Complete the [Quickstart: Get started with Azure Machine Learning](../quickstart-create-resources.md) if you don't already have an Azure Machine Learning workspace.
+* Complete [Create resources to get started](../quickstart-create-resources.md) if you don't already have an Azure Machine Learning workspace.
* A Python environment in which you've installed both the `azureml-core` and `azureml-pipeline` packages. This environment is for defining and controlling your Azure Machine Learning resources and is separate from the environment used at runtime for training. > [!Important]
managed-instance-apache-cassandra Create Cluster Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-instance-apache-cassandra/create-cluster-portal.md
If you don't have an Azure subscription, create a [free account](https://azure.m
* **Resource Group**- Specify whether you want to create a new resource group or use an existing one. A resource group is a container that holds related resources for an Azure solution. For more information, see [Azure Resource Group](../azure-resource-manager/management/overview.md) overview article. * **Cluster name** - Enter a name for your cluster. * **Location** - Location where your cluster will be deployed to.
+    * **Cassandra version** - Version of Apache Cassandra that will be deployed.
+    * **Extension** - Extensions that will be added, including [Cassandra Lucene Index](search-lucene-index.md).
* **Initial Cassandra admin password** - Password that is used to create the cluster. * **Confirm Cassandra admin password** - Reenter your password. * **Virtual Network** - Select an Exiting Virtual Network and Subnet, or create a new one.
The service allows update to Cassandra YAML configuration on a datacenter via th
> - cdc_raw_directory > - saved_caches_directory
+## De-allocate cluster
+
+1. For non-production environments, you can pause (de-allocate) the resources in the cluster to avoid being charged for them (you'll continue to be charged for storage). First change the cluster type to `NonProduction`, then de-allocate it. A CLI sketch follows the screenshot below.
+
+> [!WARNING]
+> Do not execute any schema or write operations during de-allocation - this can lead to data loss and in rare cases schema corruption requiring manual intervention from the support team.
+
+ :::image type="content" source="./media/create-cluster-portal/pause-cluster.png" alt-text="Screenshot of pausing a cluster." lightbox="./media/create-cluster-portal/pause-cluster.png" border="true":::
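+
+If you manage your environments from the command line, the same pause and resume operations are available through the Azure CLI. This is a minimal sketch using placeholder names; it assumes your installed CLI version includes the `az managed-cassandra` command group.
+
+```azurecli-interactive
+# Minimal sketch: de-allocate a managed Cassandra cluster, then start it again later.
+# Replace <cluster-name> and <resource-group> with your own values.
+az managed-cassandra cluster deallocate \
+    --cluster-name <cluster-name> \
+    --resource-group <resource-group>
+
+# When you're ready to use the cluster again:
+az managed-cassandra cluster start \
+    --cluster-name <cluster-name> \
+    --resource-group <resource-group>
+```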
+ ## Troubleshooting If you encounter an error when applying permissions to your Virtual Network using Azure CLI, such as *Cannot find user or service principal in graph database for 'e5007d2c-4b13-4a74-9b6a-605d99f03501'*, you can apply the same permission manually from the Azure portal. Learn how to do this [here](add-service-principal.md).
managed-instance-apache-cassandra Search Lucene Index https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-instance-apache-cassandra/search-lucene-index.md
+
+ Title: Quickstart - Search Azure Managed Instance for Apache Cassandra using Stratio's Cassandra Lucene Index.
+description: This quickstart shows how to search Azure Managed Instance for Apache Cassandra cluster using Stratio's Cassandra Lucene Index.
++++ Last updated : 04/17/2023+
+# Quickstart: Search Azure Managed Instance for Apache Cassandra using Lucene Index (Preview)
+
+Cassandra Lucene Index, derived from Stratio Cassandra, is a plugin for Apache Cassandra that extends its index functionality to provide full-text search capabilities and free multivariable, geospatial, and bitemporal search. This is achieved through an Apache Lucene-based implementation of Cassandra secondary indexes, where each node of the cluster indexes its own data. This quickstart demonstrates how to search Azure Managed Instance for Apache Cassandra using Lucene Index.
+
+> [!IMPORTANT]
+> Lucene Index is in public preview.
+> This feature is provided without a service level agreement, and it's not recommended for production workloads.
+> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+
+> [!WARNING]
+> A limitation with the Lucene index plugin is that cross partition searches cannot be executed solely in the index - Cassandra needs to send the query to each node. This can lead to issues with performance (memory and CPU load) for cross partition searches that may affect steady state workloads.
+>
+> Where search requirements are significant, we recommend deploying a dedicated secondary data center to be used only for searches, with a minimal number of nodes, each having a high number of cores (minimum 16). The keyspaces in your primary (operational) data center should then be configured to replicate data to your secondary (search) data center.
+
+## Prerequisites
+
+- If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
+- Deploy an Azure Managed Instance for Apache Cassandra cluster. You can do this via the [portal](create-cluster-portal.md); Lucene indexes are enabled by default when clusters are deployed from the portal. If you want to add Lucene indexes to an existing cluster, select **Update** in the portal overview blade, select **Cassandra Lucene Index**, and then select **Update** to deploy.
+
+ :::image type="content" source="./media/search-lucene-index/update-cluster.png" alt-text="Screenshot of Update Cassandra Cluster Properties." lightbox="./media/search-lucene-index/update-cluster.png" border="true":::
+
+- Connect to your cluster from [CQLSH](create-cluster-portal.md#connecting-from-cqlsh).
+
+## Create data with Lucene Index
+
+1. In your `CQLSH` command window, create a keyspace and table as below:
+
+ ```SQL
+ CREATE KEYSPACE demo
+ WITH REPLICATION = {'class': 'NetworkTopologyStrategy', 'datacenter-1': 3};
+ USE demo;
+ CREATE TABLE tweets (
+ id INT PRIMARY KEY,
+ user TEXT,
+ body TEXT,
+ time TIMESTAMP,
+ latitude FLOAT,
+ longitude FLOAT
+ );
+ ```
+
+1. Now create a custom secondary index on the table using Lucene Index:
+
+ ```SQL
+ CREATE CUSTOM INDEX tweets_index ON tweets ()
+ USING 'com.stratio.cassandra.lucene.Index'
+ WITH OPTIONS = {
+ 'refresh_seconds': '1',
+ 'schema': '{
+ fields: {
+ id: {type: "integer"},
+ user: {type: "string"},
+ body: {type: "text", analyzer: "english"},
+ time: {type: "date", pattern: "yyyy/MM/dd"},
+ place: {type: "geo_point", latitude: "latitude", longitude: "longitude"}
+ }
+ }'
+ };
+ ```
+
+1. Insert the following sample tweets:
+
+ ```SQL
+ INSERT INTO tweets (id,user,body,time,latitude,longitude) VALUES (1,'theo','Make money fast, 5 easy tips', '2023-04-01T11:21:59.001+0000', 0.0, 0.0);
+ INSERT INTO tweets (id,user,body,time,latitude,longitude) VALUES (2,'theo','Click my link, like my stuff!', '2023-04-01T11:21:59.001+0000', 0.0, 0.0);
+ INSERT INTO tweets (id,user,body,time,latitude,longitude) VALUES (3,'quetzal','Click my link, like my stuff!', '2023-04-02T11:21:59.001+0000', 0.0, 0.0);
+ INSERT INTO tweets (id,user,body,time,latitude,longitude) VALUES (4,'quetzal','Click my link, like my stuff!', '2023-04-01T11:21:59.001+0000', 40.3930, -3.7328);
+ INSERT INTO tweets (id,user,body,time,latitude,longitude) VALUES (5,'quetzal','Click my link, like my stuff!', '2023-04-01T11:21:59.001+0000', 40.3930, -3.7329);
+ ```
+
+## Control read consistency
+
+1. The index you created earlier will index all the columns in the table with the specified types, and the read index used for searching will be refreshed once per second. Alternatively, you can explicitly refresh all the index shards with an empty search with consistency ALL:
+
+ ```SQL
+ CONSISTENCY ALL
+ SELECT * FROM tweets WHERE expr(tweets_index, '{refresh:true}');
+ CONSISTENCY QUORUM
+ ```
+
+1. Now, you can search for tweets within a certain date range:
+
+ ```SQL
+ SELECT * FROM tweets WHERE expr(tweets_index, '{filter: {type: "range", field: "time", lower: "2023/03/01", upper: "2023/05/01"}}');
+ ```
+
+1. This search can also be performed by forcing an explicit refresh of the involved index shards:
+
+ ```SQL
+ SELECT * FROM tweets WHERE expr(tweets_index, '{
+ filter: {type: "range", field: "time", lower: "2023/03/01", upper: "2023/05/01"},
+ refresh: true
+ }') limit 100;
+ ```
+
+## Search data
+
+1. To search the top 100 most relevant tweets where the body field contains the phrase "Click my link" within a particular date range:
+
+ ```SQL
+ SELECT * FROM tweets WHERE expr(tweets_index, '{
+ filter: {type: "range", field: "time", lower: "2023/03/01", upper: "2023/05/01"},
+ query: {type: "phrase", field: "body", value: "Click my link", slop: 1}
+ }') LIMIT 100;
+ ```
+
+1. To refine the search to get only the tweets written by users whose names start with "q":
+
+ ```SQL
+ SELECT * FROM tweets WHERE expr(tweets_index, '{
+ filter: [
+ {type: "range", field: "time", lower: "2023/03/01", upper: "2023/05/01"},
+ {type: "prefix", field: "user", value: "q"}
+ ],
+ query: {type: "phrase", field: "body", value: "Click my link", slop: 1}
+ }') LIMIT 100;
+ ```
+
+1. To get the 100 most recent filtered results, you can use the sort option:
+
+ ```SQL
+ SELECT * FROM tweets WHERE expr(tweets_index, '{
+ filter: [
+ {type: "range", field: "time", lower: "2023/03/01", upper: "2023/05/01"},
+ {type: "prefix", field: "user", value: "q"}
+ ],
+ query: {type: "phrase", field: "body", value: "Click my link", slop: 1},
+ sort: {field: "time", reverse: true}
+ }') limit 100;
+ ```
+
+1. The previous search can be restricted to tweets created close to a geographical position:
+
+ ```SQL
+ SELECT * FROM tweets WHERE expr(tweets_index, '{
+ filter: [
+ {type: "range", field: "time", lower: "2023/03/01", upper: "2023/05/01"},
+ {type: "prefix", field: "user", value: "q"},
+ {type: "geo_distance", field: "place", latitude: 40.3930, longitude: -3.7328, max_distance: "1km"}
+ ],
+ query: {type: "phrase", field: "body", value: "Click my link", slop: 1},
+ sort: {field: "time", reverse: true}
+ }') limit 100;
+ ```
+
+1. It is also possible to sort the results by distance to a geographical position:
+
+ ```SQL
+ SELECT * FROM tweets WHERE expr(tweets_index, '{
+ filter: [
+ {type: "range", field: "time", lower: "2023/03/01", upper: "2023/05/01"},
+ {type: "prefix", field: "user", value: "q"},
+ {type: "geo_distance", field: "place", latitude: 40.3930, longitude: -3.7328, max_distance: "1km"}
+ ],
+ query: {type: "phrase", field: "body", value: "Click my link", slop: 1},
+ sort: [
+ {field: "time", reverse: true},
+ {field: "place", type: "geo_distance", latitude: 40.3930, longitude: -3.7328}
+ ]
+ }') limit 100;
+ ```
++
+## Next steps
+
+In this quickstart, you learned how to search an Azure Managed Instance for Apache Cassandra cluster using the Lucene index. You can now start working with the cluster:
+
+> [!div class="nextstepaction"]
+> [Deploy a Managed Apache Spark Cluster with Azure Databricks](deploy-cluster-databricks.md)
mariadb Howto Data In Replication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/howto-data-in-replication.md
Previously updated : 06/24/2022 Last updated : 04/19/2023 # Configure Data-in Replication in Azure Database for MariaDB
The following steps prepare and configure the MariaDB server hosted on-premises,
1. Sign in to your Azure Database for MariaDB using a tool like MySQL command line. 2. Execute the below query.
- ```bash
- mysql> SELECT @@global.redirect_server_host;
+ ```sql
+ SELECT @@global.redirect_server_host;
``` Below is some sample output:
- ```bash
+ ```output
+--+ | @@global.redirect_server_host | +--+
The following steps prepare and configure the MariaDB server hosted on-premises,
3. Exit from the MySQL command line. 4. Execute the below in the ping utility to get the IP address.
- ```bash
+ ```terminal
ping <output of step 2b> ``` For example:
- ```bash
+ ```terminal
C:\Users\testuser> ping e299ae56f000.tr1830.westus1-a.worker.database.windows.net Pinging tr1830.westus1-a.worker.database.windows.net (**11.11.111.111**) 56(84) bytes of data. ```
mariadb Howto Migrate Dump Restore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/howto-migrate-dump-restore.md
Previously updated : 06/24/2022 Last updated : 04/19/2023 # Migrate your MariaDB database to an Azure database for MariaDB by using dump and restore This article explains two common ways to back up and restore databases in your Azure database for MariaDB:-- Dump and restore by using a command-line tool (using mysqldump) -- Dump and restore using phpMyAdmin+
+- Dump and restore by using a command-line tool (using mysqldump).
+- Dump and restore using phpMyAdmin.
## Prerequisites Before you begin migrating your database, do the following:+ - Create an [Azure Database for MariaDB server - Azure portal](quickstart-create-mariadb-server-database-using-azure-portal.md). - Install the [mysqldump](https://mariadb.com/kb/en/library/mysqldump/) command-line utility. - Download and install [MySQL Workbench](https://dev.mysql.com/downloads/workbench/) or another third-party MySQL tool for running dump and restore commands.
Use common utilities and tools such as MySQL Workbench or mysqldump to remotely
You can use MySQL utilities such as mysqldump and mysqlpump to dump and load databases into an Azure database for MariaDB server in several common scenarios. -- Use database dumps when you're migrating an entire database. This recommendation holds when you're moving a large amount of data, or when you want to minimize service interruption for live sites or applications. -- Make sure that all tables in the database use the InnoDB storage engine when you're loading data into your Azure database for MariaDB. Azure Database for MariaDB supports only the InnoDB storage engine, and no other storage engines. If your tables are configured with other storage engines, convert them into the InnoDB engine format before you migrate them to your Azure database for MariaDB.
+- Use database dumps when you're migrating an entire database. This recommendation holds when you're moving a large amount of data, or when you want to minimize service interruption for live sites or applications.
+- Make sure that all tables in the database use the InnoDB storage engine when you're loading data into your Azure database for MariaDB. Azure Database for MariaDB supports only the InnoDB storage engine, and no other storage engines. If your tables are configured with other storage engines, convert them into the InnoDB engine format before you migrate them to your Azure database for MariaDB.
For example, if you have a WordPress app or a web app that uses MyISAM tables, first convert those tables by migrating them into InnoDB format before you restore them to your Azure database for MariaDB. Use the clause `ENGINE=InnoDB` to set the engine to use for creating a new table, and then transfer the data into the compatible table before you restore it. ```sql INSERT INTO innodb_table SELECT * FROM myisam_table ORDER BY primary_key_columns ```+ - To avoid any compatibility issues when you're dumping databases, ensure that you're using the same version of MariaDB on the source and destination systems. For example, if your existing MariaDB server is version 10.2, you should migrate to your Azure database for MariaDB that's configured to run version 10.2. The `mysql_upgrade` command doesn't function in an Azure Database for MariaDB server, and it isn't supported. If you need to upgrade across MariaDB versions, first dump or export your earlier-version database into a later version of MariaDB in your own environment. You can then run `mysql_upgrade` before you try migrating into your Azure database for MariaDB. ## Performance considerations To optimize performance when you're dumping large databases, keep in mind the following considerations:-- Use the `exclude-triggers` option in mysqldump. Exclude triggers from dump files to avoid having the trigger commands fire during the data restore. -- Use the `single-transaction` option to set the transaction isolation mode to REPEATABLE READ and send a START TRANSACTION SQL statement to the server before dumping data. Dumping many tables within a single transaction causes some extra storage to be consumed during the restore. The `single-transaction` option and the `lock-tables` option are mutually exclusive. This is because LOCK TABLES causes any pending transactions to be committed implicitly. To dump large tables, combine the `single-transaction` option with the `quick` option. -- Use the `extended-insert` multiple-row syntax that includes several VALUE lists. This approach results in a smaller dump file and speeds up inserts when the file is reloaded.-- Use the `order-by-primary` option in mysqldump when you're dumping databases, so that the data is scripted in primary key order.-- Use the `disable-keys` option in mysqldump when you're dumping data, to disable foreign key constraints before the load. Disabling foreign key checks helps improve performance. Enable the constraints and verify the data after the load to ensure referential integrity.-- Use partitioned tables when appropriate.-- Load data in parallel. Avoid too much parallelism, which could cause you to hit a resource limit, and monitor resources by using the metrics available in the Azure portal. -- Use the `defer-table-indexes` option in mysqlpump when you're dumping databases, so that index creation happens after table data is loaded.+
+- Use the `exclude-triggers` option in mysqldump. Exclude triggers from dump files to avoid having the trigger commands fire during the data restore.
+- Use the `single-transaction` option to set the transaction isolation mode to REPEATABLE READ and send a START TRANSACTION SQL statement to the server before dumping data. Dumping many tables within a single transaction causes some extra storage to be consumed during the restore. The `single-transaction` option and the `lock-tables` option are mutually exclusive. This is because LOCK TABLES causes any pending transactions to be committed implicitly. To dump large tables, combine the `single-transaction` option with the `quick` option.
+- Use the `extended-insert` multiple-row syntax that includes several VALUE lists. This approach results in a smaller dump file and speeds up inserts when the file is reloaded.
+- Use the `order-by-primary` option in mysqldump when you're dumping databases, so that the data is scripted in primary key order.
+- Use the `disable-keys` option in mysqldump when you're dumping data, to disable foreign key constraints before the load. Disabling foreign key checks helps improve performance. Enable the constraints and verify the data after the load to ensure referential integrity.
+- Use partitioned tables when appropriate.
+- Load data in parallel. Avoid too much parallelism, which could cause you to hit a resource limit, and monitor resources by using the metrics available in the Azure portal.
+- Use the `defer-table-indexes` option in mysqlpump when you're dumping databases, so that index creation happens after table data is loaded.
- Copy the backup files to an Azure blob store and perform the restore from there. This approach should be a lot faster than performing the restore across the internet. ## Create a backup file
To optimize performance when you're dumping large databases, keep in mind the fo
To back up an existing MariaDB database on the local on-premises server or in a virtual machine, run the following command by using mysqldump: ```bash
-$ mysqldump --opt -u <uname> -p<pass> <dbname> > <backupfile.sql>
+mysqldump --opt -u <uname> -p<pass> <dbname> > <backupfile.sql>
``` The parameters to provide are:+ - *\<uname>*: Your database user name - *\<pass>*: The password for your database (note that there is no space between -p and the password) - *\<dbname>*: The name of your database
The parameters to provide are:
For example, to back up a database named *testdb* on your MariaDB server with the user name *testuser* and with no password to a file testdb_backup.sql, use the following command. The command backs up the `testdb` database into a file called `testdb_backup.sql`, which contains all the SQL statements needed to re-create the database. ```bash
-$ mysqldump -u root -p testdb > testdb_backup.sql
+mysqldump -u root -p testdb > testdb_backup.sql
```+ To select specific tables to back up in your database, list the table names, separated by spaces. For example, to back up only table1 and table2 tables from the *testdb*, follow this example: ```bash
-$ mysqldump -u root -p testdb table1 table2 > testdb_tables_backup.sql
+mysqldump -u root -p testdb table1 table2 > testdb_tables_backup.sql
``` To back up more than one database at once, use the --database switch and list the database names, separated by spaces. ```bash
-$ mysqldump -u root -p --databases testdb1 testdb3 testdb5 > testdb135_backup.sql
+mysqldump -u root -p --databases testdb1 testdb3 testdb5 > testdb135_backup.sql
``` ## Create a database on the target server
mysql -h <hostname> -u <uname> -p<pass> <db_to_restore> < <backupfile.sql>
In this example, you restore the data into the newly created database on the target Azure Database for MariaDB server. ```bash
-$ mysql -h mydemoserver.mariadb.database.azure.com -u myadmin@mydemoserver -p testdb < testdb_backup.sql
+mysql -h mydemoserver.mariadb.database.azure.com -u myadmin@mydemoserver -p testdb < testdb_backup.sql
``` ## Export your MariaDB database by using phpMyAdmin To export, you can use the common tool phpMyAdmin, which might already be installed locally in your environment. To export your MariaDB database, do the following:+ 1. Open phpMyAdmin. 1. On the left pane, select your database, and then select the **Export** link. A new page appears to view the dump of database.
-1. In the **Export** area, select the **Select All** link to choose the tables in your database.
-1. In the **SQL options** area, select the appropriate options.
+1. In the **Export** area, select the **Select All** link to choose the tables in your database.
+1. In the **SQL options** area, select the appropriate options.
1. Select the **Save as file** option and the corresponding compression option, and then select **Go**. At the prompt, save the file locally. ## Import your database by using phpMyAdmin The importing process is similar to the exporting process. Do the following:
-1. Open phpMyAdmin.
-1. On the phpMyAdmin setup page, select **Add** to add your Azure Database for MariaDB server.
+
+1. Open phpMyAdmin.
+1. On the phpMyAdmin setup page, select **Add** to add your Azure Database for MariaDB server.
1. Enter the connection details and login information.
-1. Create an appropriately named database, and then select it on the left pane. To rewrite the existing database, select the database name, select all the check boxes beside the table names, and select **Drop** to delete the existing tables.
-1. Select the **SQL** link to show the page where you can enter SQL commands or upload your SQL file.
-1. Select the **browse** button to find the database file.
+1. Create an appropriately named database, and then select it on the left pane. To rewrite the existing database, select the database name, select all the check boxes beside the table names, and select **Drop** to delete the existing tables.
+1. Select the **SQL** link to show the page where you can enter SQL commands or upload your SQL file.
+1. Select the **browse** button to find the database file.
1. Select the **Go** button to export the backup, execute the SQL commands, and re-create your database. ## Next steps
mariadb Howto Redirection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/howto-redirection.md
Title: Connect with redirection - Azure Database for MariaDB
-description: This article describes how you can configure you application to connect to Azure Database for MariaDB with redirection.
+description: This article describes how you can configure your application to connect to Azure Database for MariaDB with redirection.
Previously updated : 06/24/2022 Last updated : 04/19/2023 # Connect to Azure Database for MariaDB with redirection
On your Azure Database for MariaDB server, configure the `redirect_enabled` para
Support for redirection in PHP applications is available through the [mysqlnd_azure](https://github.com/microsoft/mysqlnd_azure) extension, developed by Microsoft.
-The mysqlnd_azure extension is available to add to PHP applications through PECL and it is highly recommended to install and configure the extension through the officially published [PECL package](https://pecl.php.net/package/mysqlnd_azure).
+The mysqlnd_azure extension is available to add to PHP applications through PECL and it's highly recommended to install and configure the extension through the officially published [PECL package](https://pecl.php.net/package/mysqlnd_azure).
> [!IMPORTANT] > Support for redirection in the PHP [mysqlnd_azure](https://github.com/microsoft/mysqlnd_azure) extension is currently in preview.
The mysqlnd_azure extension is available to add to PHP applications through PECL
### Redirection logic >[!IMPORTANT]
-> Redirection logic/behavior beginning version 1.1.0 was updated and **it is recommended to use version 1.1.0+**.
+> Redirection logic/behavior beginning version 1.1.0 was updated and **it's recommended to use version 1.1.0+**.
The redirection behavior is determined by the value of `mysqlnd_azure.enableRedirect`. The table below outlines the behavior of redirection based on the value of this parameter beginning in **version 1.1.0+**.
-If you are using an older version of the mysqlnd_azure extension (version 1.0.0-1.0.3), the redirection behavior is determined by the value of `mysqlnd_azure.enabled`. The valid values are `off` (acts similarly as the behavior outlined in the table below) and `on` (acts like `preferred` in the table below).
+If you're using an older version of the mysqlnd_azure extension (version 1.0.0-1.0.3), the redirection behavior is determined by the value of `mysqlnd_azure.enabled`. The valid values are `off` (acts similarly as the behavior outlined in the table below) and `on` (acts like `preferred` in the table below).
|**mysqlnd_azure.enableRedirect value**| **Behavior**| |-|-|
-|`off` or `0`|Redirection will not be used. |
-|`on` or `1`|- If the connection does not use SSL on the driver side, no connection will be made. The following error will be returned: *"mysqlnd_azure.enableRedirect is on, but SSL option is not set in connection string. Redirection is only possible with SSL."*<br>- If SSL is used on the driver side, but redirection is not supported on the server, the first connection is aborted and the following error is returned: *"Connection aborted because redirection is not enabled on the MariaDB server or the network package doesn't meet redirection protocol."*<br>- If the MariaDB server supports redirection, but the redirected connection failed for any reason, also abort the first proxy connection. Return the error of the redirected connection.|
-|`preferred` or `2`<br> (default value)|- mysqlnd_azure will use redirection if possible.<br>- If the connection does not use SSL on the driver side, the server does not support redirection, or the redirected connection fails to connect for any non-fatal reason while the proxy connection is still a valid one, it will fall back to the first proxy connection.|
+|`off` or `0`|Redirection isn't used. |
+|`on` or `1`|- If the connection doesn't use SSL on the driver side, no connection is made. The following error is returned: *"mysqlnd_azure.enableRedirect is on, but SSL option isn't set in connection string. Redirection is only possible with SSL."*<br>- If SSL is used on the driver side, but redirection isn't supported on the server, the first connection gets aborted. The following error is returned: *"Connection aborted because redirection isn't enabled on the MariaDB server or the network package doesn't meet redirection protocol."*<br>- If the MariaDB server supports redirection, but the redirected connection failed for any reason, also abort the first proxy connection. Return the error of the redirected connection.|
+|`preferred` or `2`<br> (default value)|- mysqlnd_azure uses redirection if possible.<br>- If the connection doesn't use SSL on the driver side, the server doesn't support redirection, or the redirected connection fails to connect for any nonfatal reason while the proxy connection is still a valid one, it falls back to the first proxy connection.|
-The subsequent sections of the document will outline how to install the `mysqlnd_azure` extension using PECL and set the value of this parameter.
+The subsequent sections of the document outline how to install the `mysqlnd_azure` extension using PECL and set the value of this parameter.
-### Ubuntu Linux
+### [Ubuntu Linux](#tab/linux)
-#### Prerequisites
+**Prerequisites**
- PHP versions 7.2.15+ and 7.3.2+ - PHP PEAR - php-mysql - Azure Database for MariaDB server
-1. Install [mysqlnd_azure](https://github.com/microsoft/mysqlnd_azure) with [PECL](https://pecl.php.net/package/mysqlnd_azure). It is recommended to use version 1.1.0+.
+1. Install [mysqlnd_azure](https://github.com/microsoft/mysqlnd_azure) with [PECL](https://pecl.php.net/package/mysqlnd_azure). It's recommended to use version 1.1.0+.
```bash sudo pecl install mysqlnd_azure
The subsequent sections of the document will outline how to install the `mysqlnd
2. Locate the extension directory (`extension_dir`) by running the below: ```bash
- php -i | grep "extension_dir"
+ sudo php -i | grep "extension_dir"
``` 3. Change directories to the returned folder and ensure `mysqlnd_azure.so` is located in this folder.
The subsequent sections of the document will outline how to install the `mysqlnd
4. Locate the folder for .ini files by running the below: ```bash
- php -i | grep "dir for additional .ini files"
+    sudo php -i | grep "dir for additional .ini files"
``` 5. Change directories to this returned folder.
-6. Create a new .ini file for `mysqlnd_azure`. Make sure the alphabet order of the name is after that of mysqnld, since the modules are loaded according to the name order of the ini files. For example, if `mysqlnd` .ini is named `10-mysqlnd.ini`, name the mysqlnd ini as `20-mysqlnd-azure.ini`.
+6. Create a new `.ini` file for `mysqlnd_azure`. Make sure that its name sorts alphabetically after the `mysqlnd` one, since the modules are loaded according to the name order of the ini files. For example, if the `mysqlnd` .ini is named `10-mysqlnd.ini`, name the `mysqlnd_azure` ini `20-mysqlnd-azure.ini`.
-7. Within the new .ini file, add the following lines to enable redirection.
+7. Within the new `.ini` file, add the following lines to enable redirection.
- ```bash
+ ```config
extension=mysqlnd_azure mysqlnd_azure.enableRedirect = on/off/preferred ```
-### Windows
+### [Windows](#tab/windows)
-#### Prerequisites
+**Prerequisites**
- PHP versions 7.2.15+ and 7.3.2+ - php-mysql - Azure Database for MariaDB server
-1. Determine if you are running a x64 or x86 version of PHP by running the following command:
+1. Determine if you're running a x64 or x86 version of PHP by running the following command:
```cmd php -i | findstr "Thread" ```
-2. Download the corresponding x64 or x86 version of the [mysqlnd_azure](https://github.com/microsoft/mysqlnd_azure) DLL from [PECL](https://pecl.php.net/package/mysqlnd_azure) that matches your version of PHP. It is recommended to use version 1.1.0+.
+2. Download the corresponding x64 or x86 version of the [mysqlnd_azure](https://github.com/microsoft/mysqlnd_azure) DLL from [PECL](https://pecl.php.net/package/mysqlnd_azure) that matches your version of PHP. It's recommended to use version 1.1.0+.
3. Extract the zip file and find the DLL named `php_mysqlnd_azure.dll`.
The subsequent sections of the document will outline how to install the `mysqlnd
7. Modify the `php.ini` file and add the following extra lines to enable redirection.
- Under the Dynamic Extensions section:
- ```cmd
+ Under the Dynamic Extensions section:
+
+ ```config
extension=mysqlnd_azure ```
- Under the Module Settings section:
- ```cmd
+ Under the Module Settings section:
+
+    ```config
[mysqlnd_azure] mysqlnd_azure.enableRedirect = on/off/preferred ``` ++ ### Confirm redirection You can also confirm redirection is configured with the below sample PHP code. Create a PHP file called `mysqlConnect.php` and paste the below code. Update the server name, username, and password with your own.
$db_name = 'testdb';
die ('Connect error (' . mysqli_connect_errno() . '): ' . mysqli_connect_error() . "\n"); } else {
- echo $db->host_info, "\n"; //if redirection succeeds, the host_info will differ from the hostname you used used to connect
+    echo $db->host_info, "\n"; //if redirection succeeds, the host_info differs from the hostname you used to connect
$res = $db->query('SHOW TABLES;'); //test query with the connection print_r ($res); $db->close();
migrate Troubleshoot Discovery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/troubleshoot-discovery.md
ms. Previously updated : 10/17/2022 Last updated : 04/20/2023
Typical SQL discovery errors are summarized in the following table.
## Common web apps discovery errors
-Azure Migrate supports discovery of ASP.NET web apps running on on-premises machines by using Azure Migrate: Discovery and assessment. Web apps discovery is currently supported for VMware only. See the [Discovery](tutorial-discover-vmware.md) tutorial to get started.
+Azure Migrate supports discovery of web apps running on on-premises machines by using Azure Migrate: Discovery and assessment. See the [Discovery](tutorial-discover-vmware.md) tutorial to get started.
Typical web apps discovery errors are summarized in the following table.
network-watcher Network Watcher Nsg Flow Logging Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-watcher-nsg-flow-logging-overview.md
Previously updated : 04/03/2023 Last updated : 04/19/2023
Currently, these Azure services don't support NSG flow logs:
- [Azure Logic Apps](../logic-apps/logic-apps-overview.md) - [Azure Functions](../azure-functions/functions-overview.md) - [Azure DNS Private Resolver](../dns/dns-private-resolver-overview.md)
+- [App Service](../app-service/overview.md)
+- [Azure Database for MariaDB](../mariadb/overview.md)
+- [Azure Database for MySQL](../mysql/single-server/overview.md)
+- [Azure Database for PostgreSQL](../postgresql/single-server/overview.md)
> [!NOTE] > App services deployed under an Azure App Service plan don't support NSG flow logs. To learn more, see [How virtual network integration works](../app-service/overview-vnet-integration.md#how-regional-virtual-network-integration-works).
network-watcher Traffic Analytics Schema https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/traffic-analytics-schema.md
The following table lists the fields in the schema and what they signify.
| SrcIP_s | Source IP address | Will be blank in case of AzurePublic and ExternalPublic flows. | | DestIP_s | Destination IP address | Will be blank in case of AzurePublic and ExternalPublic flows. | | VMIP_s | IP of the VM | Used for AzurePublic and ExternalPublic flows. |
-| PublicIP_s | Public IP addresses | Used for AzurePublic and ExternalPublic flows. |
| DestPort_d | Destination Port | Port at which traffic is incoming. | | L4Protocol_s | * T <br> * U | Transport Protocol. T = TCP <br> U = UDP. | | L7Protocol_s | Protocol Name | Derived from destination port. |
networking Networking Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/networking/fundamentals/networking-overview.md
# Azure networking services overview The networking services in Azure provide a variety of networking capabilities that can be used together or separately. Select any of the following key capabilities to learn more about them:-- [**Connectivity services**](#connect): Connect Azure resources and on-premises resources using any or a combination of these networking services in Azure - Virtual Network (VNet), Virtual WAN, ExpressRoute, VPN Gateway, Virtual network NAT Gateway, Azure DNS, Peering service, Route Server, and Azure Bastion.
+- [**Connectivity services**](#connect): Connect Azure resources and on-premises resources using any or a combination of these networking services in Azure - Virtual Network (VNet), Virtual WAN, ExpressRoute, VPN Gateway, Virtual network NAT Gateway, Azure DNS, Peering service, Azure Virtual Network Manager, Route Server, and Azure Bastion.
- [**Application protection services**](#protect): Protect your applications using any or a combination of these networking services in Azure - Load Balancer, Private Link, DDoS protection, Firewall, Network Security Groups, Web Application Firewall, and Virtual Network Endpoints. - [**Application delivery services**](#deliver): Deliver applications in the Azure network using any or a combination of these networking services in Azure - Content Delivery Network (CDN), Azure Front Door Service, Traffic Manager, Application Gateway, Internet Analyzer, and Load Balancer. - [**Network monitoring**](#monitor): Monitor your network resources using any or a combination of these networking services in Azure - Network Watcher, ExpressRoute Monitor, Azure Monitor, or VNet Terminal Access Point (TAP).
This section describes services that provide connectivity between Azure resource
### <a name="vnet"></a>Virtual network Azure Virtual Network (VNet) is the fundamental building block for your private network in Azure. You can use VNets to: - **Communicate between Azure resources**: You can deploy virtual machines, and several other types of Azure resources to a virtual network, such as Azure App Service Environments, the Azure Kubernetes Service (AKS), and Azure Virtual Machine Scale Sets. To view a complete list of Azure resources that you can deploy into a virtual network, see [Virtual network service integration](../../virtual-network/virtual-network-for-azure-services.md).-- **Communicate between each other**: You can connect virtual networks to each other, enabling resources in either virtual network to communicate with each other, using virtual network peering. The virtual networks you connect can be in the same, or different, Azure regions. For more information, see [Virtual network peering](../../virtual-network/virtual-network-peering-overview.md).
+- **Communicate between each other**: You can connect virtual networks to each other, enabling resources in either virtual network to communicate with each other, using virtual network peering or Azure Virtual Network Manager. The virtual networks you connect can be in the same, or different, Azure regions. For more information, see [Virtual network peering](../../virtual-network/virtual-network-peering-overview.md) and [Azure Virtual Network Manager](../../virtual-network-manager/overview.md).
- **Communicate to the internet**: All resources in a VNet can communicate outbound to the internet, by default. You can communicate inbound to a resource by assigning a public IP address or a public Load Balancer. You can also use [Public IP addresses](../../virtual-network/ip-services/virtual-network-public-ip-address.md) or public [Load Balancer](../../load-balancer/load-balancer-overview.md) to manage your outbound connections.
- **Communicate with on-premises networks**: You can connect your on-premises computers and networks to a virtual network using [VPN Gateway](../../vpn-gateway/vpn-gateway-about-vpngateways.md) or [ExpressRoute](../../expressroute/expressroute-introduction.md).
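The peering scenario in the bullets above maps to a handful of Azure CLI commands. The following is a minimal sketch; the resource group, VNet names, and address ranges are illustrative placeholders:

```azurecli
# Create two virtual networks with non-overlapping address spaces (example names and prefixes)
az network vnet create --resource-group myResourceGroup --name vnet-a --address-prefixes 10.1.0.0/16
az network vnet create --resource-group myResourceGroup --name vnet-b --address-prefixes 10.2.0.0/16

# Peer the networks in both directions so resources in either VNet can reach the other
az network vnet peering create --resource-group myResourceGroup --vnet-name vnet-a \
  --name vnet-a-to-vnet-b --remote-vnet vnet-b --allow-vnet-access
az network vnet peering create --resource-group myResourceGroup --vnet-name vnet-b \
  --name vnet-b-to-vnet-a --remote-vnet vnet-a --allow-vnet-access
```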
For more information, see [What is virtual network NAT gateway?](../../virtual-n
:::image type="content" source="./media/networking-overview/flow-map.png" alt-text="Virtual network NAT gateway":::
+### <a name="avnm"></a>Azure Virtual Network Manager
+
+Azure Virtual Network Manager is a management service that enables you to group, configure, deploy, and manage virtual networks globally across subscriptions. With Virtual Network Manager, you can define network groups to identify and logically segment your virtual networks. Then you can determine the connectivity and security configurations you want and apply them across all the selected virtual networks in network groups at once. For more information, see [What is Azure Virtual Network Manager?](../../virtual-network-manager/overview.md).
+
### <a name="routeserver"></a>Route Server

Azure Route Server simplifies dynamic routing between your network virtual appliance (NVA) and your virtual network. It allows you to exchange routing information directly, over the Border Gateway Protocol (BGP), between any NVA that supports BGP and the Azure Software Defined Network (SDN) in the Azure Virtual Network (VNet), without having to manually configure or maintain route tables. For more information, see [What is Azure Route Server?](../../route-server/overview.md)
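As an illustrative sketch of the BGP peering described above, an NVA peering can be added to an existing Route Server with the Azure CLI. The route server name, peer IP, and ASN below are placeholder values, and exact parameter names may vary between CLI versions:

```azurecli
# Add a BGP peering between an existing Route Server and a hypothetical NVA
az network routeserver peering create \
  --resource-group myResourceGroup \
  --routeserver myRouteServer \
  --name my-nva-peering \
  --peer-ip 10.0.1.4 \
  --peer-asn 65001
```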
openshift Concepts Ovn Kubernetes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/openshift/concepts-ovn-kubernetes.md
+
+ Title: Overview of OVN-Kubernetes network provider for Azure Red Hat OpenShift clusters
+description: Overview of OVN-Kubernetes network provider for Azure Red Hat OpenShift clusters.
++++ Last updated : 04/17/2023
+topic: how-to
+keywords: azure, openshift, aro, red hat, azure CLI, azure portal, ovn, ovn-kubernetes, CNI, Container Network Interface
+#Customer intent: I need to configure OVN-Kubernetes network provider for Azure Red Hat OpenShift clusters.
++
+# OVN-Kubernetes network provider for Azure Red Hat OpenShift clusters
+
+The OpenShift Container Platform cluster uses a virtualized network for pod and service networks. The OVN-Kubernetes Container Network Interface (CNI) plug-in is a network provider for the default cluster network. OVN-Kubernetes, which is based on the Open Virtual Network (OVN), provides an overlay-based networking implementation.
+
+A cluster that uses the OVN-Kubernetes network provider also runs Open vSwitch (OVS) on each node. OVN configures OVS on each node to implement the declared network configuration.
+
+## OVN-Kubernetes features
+
+The OVN-Kubernetes CNI cluster network provider offers the following features:
+
+* Uses OVN to manage network traffic flows. OVN is a community-developed, vendor-agnostic network virtualization solution.
+* Implements Kubernetes network policy support, including ingress and egress rules.
+* Uses the Generic Network Virtualization Encapsulation (Geneve) protocol rather than the Virtual Extensible LAN (VXLAN) protocol to create an overlay network between nodes.
+
+> [!NOTE]
+> As of ARO 4.11, OVN-Kubernetes is the CNI for all ARO clusters. For existing clusters, migrating from the previous OpenShift SDN network provider to OVN-Kubernetes isn't supported.
+
+For more information about OVN-Kubernetes CNI network provider, see [About the OVN-Kubernetes default Container Network Interface (CNI) network provider](https://docs.openshift.com/container-platform/latest/networking/ovn_kubernetes_network_provider/about-ovn-kubernetes.html).
+
+## Recommended content
+
+[Tutorial: Create an Azure Red Hat OpenShift 4 cluster](tutorial-create-cluster.md)
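One way to confirm which network provider a running cluster uses is the check that also appears in the retired how-to article that follows: sign in to the cluster and query the cluster network configuration with `oc`.

```
oc get network.config/cluster -o jsonpath='{.status.networkType}{"\n"}'
```

On a cluster that uses OVN-Kubernetes, the command prints `OVNKubernetes`.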
openshift Howto Configure Ovn Kubernetes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/openshift/howto-configure-ovn-kubernetes.md
- Title: Configure OVN-Kubernetes network provider for Azure Red Hat OpenShift clusters (preview)
-description: In this how-to article, learn how to configure OVN-Kubernetes network provider for Azure Red Hat OpenShift clusters (preview).
---- Previously updated : 06/13/2022
-topic: how-to
-keywords: azure, openshift, aro, red hat, azure CLI, azure portal, ovn, ovn-kubernetes, CNI, Container Network Interface
-Customer intent: I need to configure OVN-Kubernetes network provider for Azure Red Hat OpenShift clusters.
--
-# Configure OVN-Kubernetes network provider for Azure Red Hat OpenShift clusters (preview)
-
-This article explains how to Configure OVN-Kubernetes network provider for Azure Red Hat OpenShift clusters.
-
-## About the OVN-Kubernetes default Container Network Interface (CNI) network provider
-
-OVN-Kubernetes Container Network Interface (CNI) for Azure Red Hat OpenShift cluster is now available for preview.
-
-The OpenShift Container Platform cluster uses a virtualized network for pod and service networks. The OVN-Kubernetes Container Network Interface (CNI) plug-in is a network provider for the default cluster network. OVN-Kubernetes, which is based on the Open Virtual Network (OVN), provides an overlay-based networking implementation.
-
-A cluster that uses the OVN-Kubernetes network provider also runs Open vSwitch (OVS) on each node. OVN configures OVS on each node to implement the declared network configuration.
-
-> [!IMPORTANT]
-> Currently, this Azure Red Hat OpenShift feature is being offered in preview only. Preview features are available on a self-service, opt-in basis. Previews are provided "as is" and "as available," and they are excluded from the service-level agreements and limited warranty. Azure Red Hat OpenShift previews are partially covered by customer support on a best-effort basis. As such, these features are not meant for production use.
-
-## OVN-Kubernetes features
-
-The OVN-Kubernetes CNI cluster network provider offers the following features:
-
-* Uses OVN to manage network traffic flows. OVN is a community developed, vendor-agnostic network virtualization solution.
-* Implements Kubernetes network policy support, including ingress and egress rules.
-* Uses the Generic Network Virtualization Encapsulation (Geneve) protocol rather than the Virtual Extensible LAN (VXLAN) protocol to create an overlay network between nodes.
-
-For more information about OVN-Kubernetes CNI network provider, see [About the OVN-Kubernetes default Container Network Interface (CNI) network provider](https://docs.openshift.com/container-platform/4.10/networking/ovn_kubernetes_network_provider/about-ovn-kubernetes.html).
-
-## Prerequisites
-
-Complete the following prerequisites.
-### Install and use the preview Azure Command-Line Interface (CLI)
-
-> [!NOTE]
-> The Azure CLI extension is required for the preview feature only.
-
-If you choose to install and use the CLI locally, ensure you're running Azure CLI version 2.37.0 or later. Run `az --version` to find the version. For details on installing or upgrading Azure CLI, see [Install Azure CLI](/cli/azure/install-azure-cli).
-
-1. Use the following URL to download both the Python wheel and the CLI extension:
-
- [https://aka.ms/az-aroext-latest.whl](https://aka.ms/az-aroext-latest.whl)
-
-2. Run the following command:
-
-```azurecli-interactive
-az extension add --upgrade -s <path to downloaded .whl file>
-```
-
-3. Verify the CLI extension is being used:
-
-```azurecli-interactive
-az extension list
-[
- {
- "experimental": false,
- "extensionType": "whl",
- "name": "aro",
- "path": "<path may differ depending on system>",
- "preview": true,
- "version": "1.0.6"
- }
-]
-```
-
-4. Run the following command:
-
-```azurecli-interactive
-az aro create --help
-```
-
-The result should show the `--sdn-type` option, as follows:
-
-```json
sdn-type --software-defined-network-type : SDN type either "OpenShiftSDN" (default) or "OVNKubernetes". Allowed values: OVNKubernetes, OpenShiftSDN
-```
-
-## Create an Azure Red Hat OpenShift cluster with OVN as the network provider
-
-The process to create an Azure Red Hat OpenShift cluster with OVN is exactly the same as the existing process explained in [Tutorial: Create an Azure Red Hat OpenShift 4 cluster](tutorial-create-cluster.md), with the following exception. You must also pass in the SDN type of `OVNKubernetes` in step 4 below.
-
-The following high-level procedure outlines the steps to create an Azure Red Hat OpenShift cluster with OVN as the network provider:
-
-1. Verify your permissions.
-2. Register the resource providers.
-3. Create a virtual network containing two empty subnets.
-4. Create an Azure Red Hat OpenShift cluster by using OVN CNI network provider.
-5. Verify the Azure Red Hat OpenShift cluster is using OVN CNI network provider.
-
-## Verify your permissions
-
-Using OVN CNI network provider for Azure Red Hat OpenShift clusters requires you to create a resource group, which will contain the virtual network for the cluster. You must have either Contributor and User Access Administrator permissions or have Owner permissions either directly on the virtual network or on the resource group or subscription containing it.
-
-You'll also need sufficient Azure Active Directory permissions (either a member user of the tenant, or a guest user assigned with role Application administrator) for the tooling to create an application and service principal on your behalf for the cluster. For more information about user roles, see [Member and guest users](../active-directory/fundamentals/users-default-permissions.md#member-and-guest-users) and [Assign administrator and non-administrator roles to users with Azure Active Directory](../active-directory/fundamentals/active-directory-users-assign-role-azure-portal.md).
-
-## Register the resource providers
-
-If you have multiple Azure subscriptions, you must register the resource providers. For information about the registration procedure, see [Register the resource providers](tutorial-create-cluster.md#register-the-resource-providers).
-
-## Create a virtual network containing two empty subnets
-
-If you have an existing virtual network that meets your needs, you can skip this step. To know the procedure of creating a virtual network, see [Create a virtual network containing two empty subnets](tutorial-create-cluster.md#create-a-virtual-network-containing-two-empty-subnets).
-
-## Create an Azure Red Hat OpenShift cluster by using OVN-Kubernetes CNI network provider
-
-Run the following command to create an Azure Red Hat OpenShift cluster that uses the OVN CNI network provider:
-
-```
-az aro create --resource-group $RESOURCEGROUP \
- --name $CLUSTER \
- --vnet aro-vnet \
- --master-subnet master-subnet \
- --worker-subnet worker-subnet \
- --sdn-type OVNKubernetes \
- --pull-secret @pull-secret.txt
-```
-
-## Verify an Azure Red Hat OpenShift cluster is using the OVN CNI network provider
-
-After the cluster is successfully configured to use the OVN CNI network provider, sign in to your account and run the following command:
-
-```
-oc get network.config/cluster -o jsonpath='{.status.networkType}{"\n"}'
-```
-
-The value of `status.networkType` must be `OVNKubernetes`.
-
-## Recommended content
-
-[Tutorial: Create an Azure Red Hat OpenShift 4 cluster](tutorial-create-cluster.md)
openshift Quickstart Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/openshift/quickstart-portal.md
Previously updated : 02/14/2023 Last updated : 04/19/2023
Register the `Microsoft.RedHatOpenShift` resource provider. For instructions on
## Create an Azure Red Hat OpenShift cluster

1. On the Azure portal menu or from the **Home** page, select **All Services** under the three horizontal bars at the top left of the page.
-2. Select **Containers** > **Azure Red Hat OpenShift**.
-3. On the **Basics** tab, configure the following options:
+2. Search for and select **Azure Red Hat OpenShift clusters**.
+3. Select **Create**.
+4. On the **Basics** tab, configure the following options:
* **Project details**:
- * Select an **Azure Subscription**.
- * Select or create an **Azure Resource group**, such as *myResourceGroup*.
- * **Cluster details**:
+ * Select an Azure **Subscription**.
+ * Select or create an Azure **Resource group**, such as *myResourceGroup*.
+ * **Instance details**:
* Select a **Region** for the Azure Red Hat OpenShift cluster.
- * Enter a OpenShift **cluster name**, such as *myAROCluster*.
- * Enter **Domain name**.
+ * Enter an **OpenShift cluster name**, such as *myAROCluster*.
+ * Enter a **Domain name**.
* Select **Master VM Size** and **Worker VM Size**.
+ * Select **Worker node count** (i.e., the number of worker nodes to create).
- ![**Basics** tab on Azure portal](./media/Basics.png)
+ > [!div class="mx-imgBorder"]
+ > [ ![**Basics** tab on Azure portal](./media/Basics.png) ](./media/Basics.png#lightbox)
> [!NOTE]
- > In the **Domain name** field, you can either specify a domain name (e.g., *example.com*) or a prefix (e.g., *abc*) that will be used as part of the auto-generated DNS name for OpenShift console and API servers. This prefix is also used as part of the name of the resource group (e.g., *aro-abc*) that is created to host the cluster VMs.
+ > The **Domain name** field is pre-populated with a random string. You can either specify a domain name (e.g., *example.com*) or a string/prefix (e.g., *abc*) that will be used as part of the auto-generated DNS name for OpenShift console and API servers. This prefix is also used as part of the name of the resource group that is created to host the cluster VMs if a resource group name is not specified.
-4. On the **Authentication** tab of the **Azure Red Hat OpenShift** dialog, complete the following sections.
+5. On the **Authentication** tab, complete the following sections.
- In the **Service principal information** section:
+ Under **Service principal information**, select either **Create new** or **Existing**. If you choose to use an existing service principal, enter the following information:
- **Service principal client ID** is your appId.
- **Service principal client secret** is the service principal's decrypted Secret value.
- If you need to create a service principal, see [Creating and using a service principal with an Azure Red Hat OpenShift cluster](howto-create-service-principal.md).
+ > [!NOTE]
+ > If you need to create a service principal, see [Creating and using a service principal with an Azure Red Hat OpenShift cluster](howto-create-service-principal.md).
- In the **Cluster pull secret** section:
-
- - **Pull secret** is your cluster's pull secret's decrypted value. If you don't have a pull secret, leave this field blank.
+ Under **Pull secret**, enter the **Red Hat pull secret** (i.e., the decrypted value of your cluster's pull secret). If you don't have a pull secret, leave this field blank.
:::image type="content" source="./media/openshift-service-principal-portal.png" alt-text="Screenshot that shows how to use the Authentication tab with Azure portal to create a service principal." lightbox="./media/openshift-service-principal-portal.png":::
-5. On the **Networking** tab, which follows, make sure to configure the required options.
+6. On the **Networking** tab, configure the required options.
- **Note**: Azure Red Hat OpenShift clusters running OpenShift 4 require a virtual network with two empty subnets: one for the control plane and one for worker nodes.
+ **Note**: Azure Red Hat OpenShift clusters running OpenShift 4 require a virtual network with two empty subnets: one for the control plane and one for worker nodes. (A CLI sketch for creating such a network appears after these steps.)
-![**Networking** tab on Azure portal](./media/Networking.png)
+> [!div class="mx-imgBorder"]
+> [ ![**Networking** tab on Azure portal](./media/Networking.png) ](./media/Networking.png#lightbox)
-6. On the **Tags** tab, add tags to organize your resources.
+7. On the **Tags** tab, add tags to organize your resources.
-![**Tags** tab on Azure portal](./media/Tags.png)
+> [!div class="mx-imgBorder"]
+> [ ![**Tags** tab on Azure portal](./media/Tags.png) ](./media/Tags.png#lightbox)
-7. Check **Review + create** and then **Create** when validation completes.
+8. Check **Review + create** and then **Create** when validation completes.
![**Review + create** tab on Azure portal](./media/Review+Create.png)
-8. It takes approximately 35 to 45 minutes to create the Azure Red Hat OpenShift cluster. When your deployment is complete, navigate to your resource by either:
+9. It takes approximately 35 to 45 minutes to create the Azure Red Hat OpenShift cluster. When your deployment is complete, navigate to your resource by either:
* Clicking **Go to resource**, or
* Browsing to the Azure Red Hat OpenShift cluster resource group and selecting the Azure Red Hat OpenShift resource.
- * Per example cluster dashboard below: browsing for *myResourceGroup* and selecting *myAROCluster* resource.
+ * Per the example cluster dashboard below: browsing for *myResourceGroup* and selecting the *myAROCluster* resource.
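The **Networking** step above requires a virtual network with two empty subnets. As a minimal sketch of one way to prepare such a network with the Azure CLI before walking through the portal steps (the names and address ranges are illustrative, not prescriptive):

```azurecli
# Virtual network with room for two /23 subnets (example names and prefixes)
az network vnet create --resource-group myResourceGroup --name aro-vnet --address-prefixes 10.0.0.0/22

# Two empty subnets: one for control plane (master) nodes, one for worker nodes
az network vnet subnet create --resource-group myResourceGroup --vnet-name aro-vnet \
  --name master-subnet --address-prefixes 10.0.0.0/23
az network vnet subnet create --resource-group myResourceGroup --vnet-name aro-vnet \
  --name worker-subnet --address-prefixes 10.0.2.0/23
```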
operator-nexus Howto Configure Network Fabric https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/howto-configure-network-fabric.md
Expected output:
## show fabric ```azurecli
-az nf fabric show --resourcegroup "NFResourceGroupName" --resource-name "NFName"
+az nf fabric show --resource-group "NFResourceGroupName" --resource-name "NFName"
``` Expected output:
operator-nexus List Of Metrics Collected https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/list-of-metrics-collected.md
Title: List of Metrics Collected in Azure Operator Nexus. description: List of metrics collected in Azure Operator Nexus.---- Previously updated : 02/03/2023-++++ Last updated : 02/03/2023 #Required; mm/dd/yyyy format.+

# List of metrics collected in Azure Operator Nexus

This section provides the list of metrics collected from the different components.

-- [API server](#api-server)
+**Undercloud Kubernetes**
+- [kubernetes API server](#kubernetes-api-server)
+- [kubernetes Services](#kubernetes-services)
- [coreDNS](#coredns)
-- [Containers](#containers)
- [etcd](#etcd)
-- [Felix](#felix)
-- [Kubernetes Services](#kubernetes-services)
+- [calico-felix](#calico-felix)
+- [calico-typha](#calico-typha)
+- [containers](#kubernetes-containers)
+
+**Baremetal servers**
+- [node metrics](#node-metrics)
+
+**Virtual Machine orchestrator**
- [kubevirt](#kubevirt)
-- [Node (servers)](#node-servers)
-- [Pure Storage](#pure-storage)
-- [Typha](#typha)
-## API server
+**Storage Appliance**
+- [pure storage](#pure-storage)
+
+## Undercloud Kubernetes
+### ***Kubernetes API server***
| Metric | Category | Unit | Aggregation Type | Description | Dimensions | Exportable via Diagnostic Settings? |
-|-|-|--|-|-||-|
+|-|:-:|:--:|:-:|-|::|:-:|
| apiserver_audit_requests_rejected_total | Apiserver | Count | Average | Counter of apiserver requests rejected due to an error in audit logging backend. | Cluster, Node | Yes |
| apiserver_client_certificate_expiration_seconds_sum | Apiserver | Second | Sum | Distribution of the remaining lifetime on the certificate used to authenticate a request. | Cluster, Node | Yes |
| apiserver_storage_data_key_generation_failures_total | Apiserver | Count | Average | Total number of failed data encryption key (DEK) generation operations. | Cluster, Node | Yes |
| apiserver_tls_handshake_errors_total | Apiserver | Count | Average | Number of requests dropped with 'TLS handshake error from' error | Cluster, Node | Yes |
-## coreDNS
+### ***Kubernetes services***
| Metric | Category | Unit | Aggregation Type | Description | Dimensions | Exportable via <br/>Diagnostic Settings? |
-|-|-|--|-|-||-|
-| coredns_dns_requests_total | DNS Requests | Count | Average | total query count | Cluster, Node, Protocol | Yes |
-| coredns_dns_responses_total | DNS response/errors | Count | Average | response per zone, rcode and plugin. | Cluster, Node, Rcode | Yes |
-| coredns_health_request_failures_total | DNS Health Request Failures | Count | Average | The number of times the internal health check loop failed to query | Cluster, Node | Yes |
-| coredns_panics_total | DNS panic | Count | Average | total number of panics | Cluster, Node | Yes |
-
-## Containers
-
-| Metric | Category | Unit | Aggregation Type | Description | Dimensions | Exportable via <br/>Diagnostic Settings? |
-|-|-|--|-|-||-|
-| container_fs_io_time_seconds_total | Containers - Filesystem | Second | Average | Cumulative count of seconds spent doing I/Os | Cluster, Node, Pod+Container+Interface | Yes |
-| container_memory_failcnt | Containers - Memory | Count | Average | Number of memory usage hits limits | Cluster, Node, Pod+Container+Interface | Yes |
-| container_memory_usage_bytes | Containers - Memory | Byte | Average | Current memory usage, including all memory regardless of when it was accessed | Cluster, Node, Pod+Container+Interface | Yes |
-| container_oom_events_total | Container OOM Events | Count | Average | Count of out of memory events observed for the container | Cluster, Node, Pod+Container | Yes |
-| container_start_time_seconds | Containers - Start Time | Second | Average | Start time of the container since unix epoch | Cluster, Node, Pod+Container+Interface | Yes |
-| container_tasks_state | Containers - Task state | Labels | Average | Number of tasks in given state | Cluster, Node, Pod+Container+Interface, State | Yes |
-
-## etcd
-
-| Metric | Category | Unit | Aggregation Type | Description | Dimensions | Exportable via <br/>Diagnostic Settings? |
-|-|-|--|-|-||-|
-| etcd_disk_backend_commit_duration_seconds_sum | Etcd Disk | Second | Average | The latency distributions of commit called by backend. | Cluster, Pod | Yes |
-| etcd_disk_wal_fsync_duration_seconds_sum | Etcd Disk | Second | Average | The latency distributions of fsync called by wal | Cluster, Pod | Yes |
-| etcd_server_is_leader | Etcd Server | Labels | Average | Whether node is leader | Cluster, Pod | Yes |
-| etcd_server_is_learner | Etcd Server | Labels | Average | Whether node is learner | Cluster, Pod | Yes |
-| etcd_server_leader_changes_seen_total | Etcd Server | Count | Average | The number of leader changes seen. | Cluster, Pod, Tier | Yes |
-| etcd_server_proposals_committed_total | Etcd Server | Count | Average | The total number of consensus proposals committed. | Cluster, Pod, Tier | Yes |
-| etcd_server_proposals_applied_total | Etcd Server | Count | Average | The total number of consensus proposals applied. | Cluster, Pod, Tier | Yes |
-| etcd_server_proposals_failed_total | Etcd Server | Count | Average | The total number of failed proposals seen. | Cluster, Pod, Tier | Yes |
-
-## Felix
-
-| Metric | Category | Unit | Aggregation Type | Description | Dimensions | Exportable via <br/>Diagnostic Settings? |
-|-|-|--|-|-||-|
-| felix_ipsets_calico | Felix | Count | Average | Number of active Calico IP sets. | Cluster, Node | Yes |
-| felix_cluster_num_host_endpoints | Felix | Count | Average | Total number of host endpoints cluster-wide. | Cluster, Node | Yes |
-| felix_active_local_endpoints | Felix | Count | Average | Number of active endpoints on this host. | Cluster, Node | Yes |
-| felix_cluster_num_hosts | Felix | Count | Average | Total number of Calico hosts in the cluster. | Cluster, Node | Yes |
-| felix_cluster_num_workload_endpoints | Felix | Count | Average | Total number of workload endpoints cluster-wide. | Cluster, Node | Yes |
-| felix_int_dataplane_failures | Felix | Count | Average | Number of times dataplane updates failed and will be retried. | Cluster, Node | Yes |
-| felix_ipset_errors | Felix | Count | Average | Number of ipset command failures. | Cluster, Node | Yes |
-| felix_iptables_restore_errors | Felix | Count | Average | Number of iptables-restore errors. | Cluster, Node | Yes |
-| felix_iptables_save_errors | Felix | Count | Average | Number of iptables-save errors. | Cluster, Node | Yes |
-| felix_resyncs_started | Felix | Count | Average | Number of times Felix has started resyncing with the datastore. | Cluster, Node | Yes |
-| felix_resync_state | Felix | Count | Average | Current datastore state. | Cluster, Node | Yes |
-
-## Kubernetes services
-
-| Metric | Category | Unit | Aggregation Type | Description | Dimensions | Exportable via <br/>Diagnostic Settings? |
-|-|-|--|-|-||-|
+|-|:-:|:--:|:-:|-|::|:-:|
| kube_daemonset_status_current_number_scheduled | Kube Daemonset | Count | Average | Number of Daemonsets scheduled | Cluster | Yes |
| kube_daemonset_status_desired_number_scheduled | Kube Daemonset | Count | Average | Number of daemonset replicas desired | Cluster | Yes |
| kube_deployment_status_replicas_ready | Kube Deployment | Count | Average | Number of deployment replicas present | Cluster | Yes |
This section provides the list of metrics collected from the different component
| kubelet_volume_stats_capacity_bytes | Pods - Storage - Capacity | Byte | Average | Capacity in bytes of the volume | Cluster, Node, Persistent Volume Claim | Yes |
| kubelet_volume_stats_used_bytes | Pods - Storage - Used | Byte | Average | Number of used bytes in the volume | Cluster, Node, Persistent Volume Claim | Yes |
-## kubevirt
+### ***coreDNS***
| Metric | Category | Unit | Aggregation Type | Description | Dimensions | Exportable via <br/>Diagnostic Settings? |
-|-|-|--|-|-||-|
-| kubevirt_info | Host | Labels | NA | Version information. | Cluster, Node | Yes |
-| kubevirt_virt_controller_leading | Kubevirt Controller | Labels | Average | Indication for an operating virt-controller. | Cluster, Pod | Yes |
-| kubevirt_virt_operator_ready | Kubevirt Operator | Labels | Average | Indication for a virt operator being ready | Cluster, Pod | Yes |
-| kubevirt_vmi_cpu_affinity | VM-CPU | Labels | Average | Details the cpu pinning map via boolean labels in the form of vcpu_X_cpu_Y. | Cluster, Node, VM | Yes |
-| kubevirt_vmi_memory_actual_balloon_bytes | VM-Memory | Byte | Average | Current balloon size in bytes. | Cluster, Node, VM | Yes |
-| kubevirt_vmi_memory_domain_total_bytes | VM-Memory | Byte | Average | The amount of memory in bytes allocated to the domain. The memory value in domain xml file | Cluster, Node, VM | Yes |
-| kubevirt_vmi_memory_swap_in_traffic_bytes_total | VM-Memory | Byte | Average | The total amount of data read from swap space of the guest in bytes. | Cluster, Node, VM | Yes |
-| kubevirt_vmi_memory_swap_out_traffic_bytes_total | VM-Memory | Byte | Average | The total amount of memory written out to swap space of the guest in bytes. | Cluster, Node, VM | Yes |
-| kubevirt_vmi_memory_available_bytes | VM-Memory | Byte | Average | Amount of usable memory as seen by the domain. This value may not be accurate if a balloon driver is in use or if the guest OS does not initialize all assigned pages | Cluster, Node, VM | Yes |
-| kubevirt_vmi_memory_unused_bytes | VM-Memory | Byte | Average | The amount of memory left completely unused by the system. Memory that is available but used for reclaimable caches should NOT be reported as free | Cluster, Node, VM | Yes |
-| kubevirt_vmi_network_receive_packets_total | VM-Network | Count | Average | Total network traffic received packets. | Cluster, Node, VM, Interface | Yes |
-| kubevirt_vmi_network_transmit_packets_total | VM-Network | Count | Average | Total network traffic transmitted packets. | Cluster, Node, VM, Interface | Yes |
-| kubevirt_vmi_network_transmit_packets_dropped_total | VM-Network | Count | Average | The total number of tx packets dropped on vNIC interfaces. | Cluster, Node, VM, Interface | Yes |
-| kubevirt_vmi_outdated_count | VMI | Count | Average | Indication for the total number of VirtualMachineInstance workloads that are not running within the most up-to-date version of the virt-launcher environment. | Cluster, Node, VM, Phase | Yes |
-| kubevirt_vmi_phase_count | VMI | Count | Average | Sum of VMIs per phase and node. | Cluster, Node, VM, Phase | Yes |
-| kubevirt_vmi_storage_iops_read_total | VM-Storage | Count | Average | Total number of I/O read operations. | Cluster, Node, VM, Drive | Yes |
-| kubevirt_vmi_storage_iops_write_total | VM-Storage | Count | Average | Total number of I/O write operations. | Cluster, Node, VM, Drive | Yes |
-| kubevirt_vmi_storage_read_times_ms_total | VM-Storage | Mili Second | Average | Total time (ms) spent on read operations. | Cluster, Node, VM, Drive | Yes |
-| kubevirt_vmi_storage_write_times_ms_total | VM-Storage | Mili Second | Average | Total time (ms) spent on write operations | Cluster, Node, VM, Drive | Yes |
-| kubevirt_virt_controller_ready | Kubevirt Controller | Labels | Average | Indication for a virt-controller that is ready to take the lead. | Cluster, Pod | Yes |
+|-|:-:|:--:|:-:|-|::|:-:|
+| coredns_dns_requests_total | DNS Requests | Count | Average | total query count | Cluster, Node, Protocol | Yes |
+| coredns_dns_responses_total | DNS response/errors | Count | Average | response per zone, rcode and plugin. | Cluster, Node, Rcode | Yes |
+| coredns_health_request_failures_total | DNS Health Request Failures | Count | Average | The number of times the internal health check loop failed to query | Cluster, Node | Yes |
+| coredns_panics_total | DNS panic | Count | Average | total number of panics | Cluster, Node | Yes |
+
+### ***etcd***
+
+| Metric | Category | Unit | Aggregation Type | Description | Dimensions | Exportable via <br/>Diagnostic Settings? |
+|-|:-:|:--:|:-:|-|::|:-:|
+| etcd_disk_backend_commit_duration_seconds_sum | Etcd Disk | Second | Average | The latency distributions of commit called by backend. | Cluster, Pod | Yes |
+| etcd_disk_wal_fsync_duration_seconds_sum | Etcd Disk | Second | Average | The latency distributions of fsync called by wal | Cluster, Pod | Yes |
+| etcd_server_is_leader | Etcd Server | Labels | Average | Whether node is leader | Cluster, Pod | Yes |
+| etcd_server_is_learner | Etcd Server | Labels | Average | Whether node is learner | Cluster, Pod | Yes |
+| etcd_server_leader_changes_seen_total | Etcd Server | Count | Average | The number of leader changes seen. | Cluster, Pod, Tier | Yes |
+| etcd_server_proposals_committed_total | Etcd Server | Count | Average | The total number of consensus proposals committed. | Cluster, Pod, Tier | Yes |
+| etcd_server_proposals_applied_total | Etcd Server | Count | Average | The total number of consensus proposals applied. | Cluster, Pod, Tier | Yes |
+| etcd_server_proposals_failed_total | Etcd Server | Count | Average | The total number of failed proposals seen. | Cluster, Pod, Tier | Yes |
-## Node (servers)
+### ***calico-felix***
| Metric | Category | Unit | Aggregation Type | Description | Dimensions | Exportable via <br/>Diagnostic Settings? |
-|-|-|--|-|-||-|
+|-|:-:|:--:|:-:|-|::|:-:|
+| felix_ipsets_calico | Felix | Count | Average | Number of active Calico IP sets. | Cluster, Node | Yes |
+| felix_cluster_num_host_endpoints | Felix | Count | Average | Total number of host endpoints cluster-wide. | Cluster, Node | Yes |
+| felix_active_local_endpoints | Felix | Count | Average | Number of active endpoints on this host. | Cluster, Node | Yes |
+| felix_cluster_num_hosts | Felix | Count | Average | Total number of Calico hosts in the cluster. | Cluster, Node | Yes |
+| felix_cluster_num_workload_endpoints | Felix | Count | Average | Total number of workload endpoints cluster-wide. | Cluster, Node | Yes |
+| felix_int_dataplane_failures | Felix | Count | Average | Number of times dataplane updates failed and will be retried. | Cluster, Node | Yes |
+| felix_ipset_errors | Felix | Count | Average | Number of ipset command failures. | Cluster, Node | Yes |
+| felix_iptables_restore_errors | Felix | Count | Average | Number of iptables-restore errors. | Cluster, Node | Yes |
+| felix_iptables_save_errors | Felix | Count | Average | Number of iptables-save errors. | Cluster, Node | Yes |
+| felix_resyncs_started | Felix | Count | Average | Number of times Felix has started resyncing with the datastore. | Cluster, Node | Yes |
+| felix_resync_state | Felix | Count | Average | Current datastore state. | Cluster, Node | Yes |
+
+### ***calico-typha***
+
+| Metric | Category | Unit | Aggregation Type | Description | Dimensions | Exportable via Diagnostic Settings? |
+|-|:-:|:--:|:-:|-|::|:-:|
+| typha_connections_accepted | Typha | Count | Average | Total number of connections accepted over time. | Cluster, Node | Yes |
+| typha_connections_dropped | Typha | Count | Average | Total number of connections dropped due to rebalancing. | Cluster, Node | Yes |
+| typha_ping_latency_count | Typha | Count | Average | Round-trip ping latency to client. | Cluster, Node | Yes |
++
+### ***Kubernetes containers***
+
+| Metric | Category | Unit | Aggregation Type | Description | Dimensions | Exportable via <br/>Diagnostic Settings? |
+|-|:-:|:--:|:-:|-|::|:-:|
+| container_fs_io_time_seconds_total | Containers - Filesystem | Second | Average | Cumulative count of seconds spent doing I/Os | Cluster, Node, Pod+Container+Interface | Yes |
+| container_memory_failcnt | Containers - Memory | Count | Average | Number of memory usage hits limits | Cluster, Node, Pod+Container+Interface | Yes |
+| container_memory_usage_bytes | Containers - Memory | Byte | Average | Current memory usage, including all memory regardless of when it was accessed | Cluster, Node, Pod+Container+Interface | Yes |
+| container_tasks_state | Containers - Task state | Labels | Average | Number of tasks in given state | Cluster, Node, Pod+Container+Interface, State | Yes |
+
+## Baremetal servers
+### ***node metrics***
+
+| Metric | Category | Unit | Aggregation Type | Description | Dimensions | Exportable via <br/>Diagnostic Settings? |
+|-|:-:|:--:|:-:|-|::|:-:|
| node_boot_time_seconds | Node - Boot time | Second | Average | Unix time of last boot | Cluster, Node | Yes |
| node_cpu_seconds_total | Node - CPU | Second | Average | CPU usage | Cluster, Node, CPU, Mode | Yes |
| node_disk_read_time_seconds_total | Node - Disk - Read Time | Second | Average | Disk read time | Cluster, Node, Device | Yes |
This section provides the list of metrics collected from the different component
| node_disk_write_time_seconds_total | Node - Disk - Write Time | Second | Average | Disk write time | Cluster, Node, Device | Yes |
| node_disk_writes_completed_total | Node - Disk - Write Completed | Count | Average | Disk writes completed | Cluster, Node, Device | Yes |
| node_entropy_available_bits | Node - Entropy Available | Bits | Average | Available node entropy | Cluster, Node | Yes |
-| node_filesystem_avail_bytes | Node - Disk - Available (TBD) | Byte | Average | Available filesystem size | Cluster, Node, Mountpoint | Yes |
+| node_filesystem_avail_bytes | Node - Disk - Available (TBD) | Byte | Average | Available filesystem size | Cluster, Node, Mountpoint | Yes |
| node_filesystem_free_bytes | Node - Disk - Free (TBD) | Byte | Average | Free filesystem size | Cluster, Node, Mountpoint | Yes |
| node_filesystem_size_bytes | Node - Disk - Size | Byte | Average | Filesystem size | Cluster, Node, Mountpoint | Yes |
| node_filesystem_files | Node - Disk - Files | Count | Average | Total number of permitted inodes | Cluster, Node, Mountpoint | Yes |
This section provides the list of metrics collected from the different component
| node_filesystem_device_error | Node - Disk - FS Device error | Count | Average | indicates if there was a problem getting information for the filesystem | Cluster, Node, Mountpoint | Yes |
| node_filesystem_readonly | Node - Disk - Files Readonly | Count | Average | indicates if the filesystem is readonly | Cluster, Node, Mountpoint | Yes |
| node_hwmon_temp_celsius | Node - temperature (TBD) | Celcius | Average | Hardware monitor for temperature | Cluster, Node, Chip, Sensor | Yes |
-| node_hwmon_temp_max_celsius | Node - temperature (TBD) | Celcius | Average | Hardware monitor for maximum temperature | Cluster, Node, Chip, Sensor | Yes |
+| node_hwmon_temp_max_celsius | Node - temperature (TBD) | Celcius | Average | Hardware monitor for maximum temperature | Cluster, Node, Chip, Sensor | Yes |
| node_load1 | Node - Memory | Second | Average | 1m load average. | Cluster, Node | Yes |
| node_load15 | Node - Memory | Second | Average | 15m load average. | Cluster, Node | Yes |
| node_load5 | Node - Memory | Second | Average | 5m load average. | Cluster, Node | Yes |
This section provides the list of metrics collected from the different component
| node_os_info | Node - OS Info | Labels | Average | OS details | Cluster, Node | Yes |
| node_network_carrier_changes_total | Node Network - Carrier changes | Count | Average | carrier_changes_total value of `/sys/class/net/<iface>`. | Cluster, node, Device | Yes |
| node_network_receive_packets_total | NodeNetwork - receive packets | Count | Average | Network device statistic receive_packets. | Cluster, node, Device | Yes |
-| node_network_transmit_packets_total | NodeNetwork - transmit packets | Count | Average | Network device statistic transmit_packets. | Cluster, node, Device | Yes |
+| node_network_transmit_packets_total | NodeNetwork - transmit packets | Count | Average | Network device statistic transmit_packets. | Cluster, node, Device | Yes |
| node_network_up | Node Network - Interface state | Labels | Average | Value is 1 if operstate is 'up', 0 otherwise. | Cluster, node, Device | Yes |
| node_network_mtu_bytes | Network Interface - MTU | Byte | Average | mtu_bytes value of `/sys/class/net/<iface>`. | Cluster, node, Device | Yes |
| node_network_receive_errs_total | Network Interface - Error totals | Count | Average | Network device statistic receive_errs | Cluster, node, Device | Yes |
| node_network_receive_multicast_total | Network Interface - Multicast | Count | Average | Network device statistic receive_multicast. | Cluster, node, Device | Yes |
| node_network_speed_bytes | Network Interface - Speed | Byte | Average | speed_bytes value of `/sys/class/net/<iface>`. | Cluster, node, Device | Yes |
-| node_network_transmit_errs_total | Network Interface - Error totals | Count | Average | Network device statistic transmit_errs. | Cluster, node, Device | Yes |
+| node_network_transmit_errs_total | Network Interface - Error totals | Count | Average | Network device statistic transmit_errs. | Cluster, node, Device | Yes |
| node_timex_sync_status | Node Timex | Labels | Average | Is clock synchronized to a reliable server (1 = yes, 0 = no). | Cluster, Node | Yes |
| node_timex_maxerror_seconds | Node Timex | Second | Average | Maximum error in seconds. | Cluster, Node | Yes |
| node_timex_offset_seconds | Node Timex | Second | Average | Time offset in between local system and reference clock. | Cluster, Node | Yes |
-| node_vmstat_oom_kill | Node VM Stat | Count | Average | /proc/vmstat information field oom_kill. | Cluster, Node | Yes |
+| node_vmstat_oom_kill | Node VM Stat | Count | Average | /proc/vmstat information field oom_kill. | Cluster, Node | Yes |
| node_vmstat_pswpin | Node VM Stat | Count | Average | /proc/vmstat information field pswpin. | Cluster, Node | Yes |
| node_vmstat_pswpout | Node VM Stat | Count | Average | /proc/vmstat information field pswpout | Cluster, Node | Yes |
| node_dmi_info | Node Bios Information | Labels | Average | Node environment information | Cluster, Node | Yes |
This section provides the list of metrics collected from the different component
| idrac_sensors_temperature | Node - Temperature | Celcius | Average | Idrac sensor Temperature | Cluster, Node, Name | Yes |
| idrac_power_on | Node - Power | Labels | Average | Idrac Power On Status | Cluster, Node | Yes |
-## Pure storage
+## Virtual Machine orchestrator
+### ***kubevirt***
+
+| Metric | Category | Unit | Aggregation Type | Description | Dimensions | Exportable via <br/>Diagnostic Settings? |
+|-|:-:|:--:|:-:|-|::|:-:|
+| kubevirt_info | Host | Labels | NA | Version information. | Cluster, Node | Yes |
+| kubevirt_virt_controller_leading | Kubevirt Controller | Labels | Average | Indication for an operating virt-controller. | Cluster, Pod | Yes |
+| kubevirt_virt_operator_ready | Kubevirt Operator | Labels | Average | Indication for a virt operator being ready | Cluster, Pod | Yes |
+| kubevirt_vmi_cpu_affinity | VM-CPU | Labels | Average | Details the cpu pinning map via boolean labels in the form of vcpu_X_cpu_Y. | Cluster, Node, VM | Yes |
+| kubevirt_vmi_memory_actual_balloon_bytes | VM-Memory | Byte | Average | Current balloon size in bytes. | Cluster, Node, VM | Yes |
+| kubevirt_vmi_memory_domain_total_bytes | VM-Memory | Byte | Average | The amount of memory in bytes allocated to the domain. The memory value in domain xml file | Cluster, Node, VM | Yes |
+| kubevirt_vmi_memory_swap_in_traffic_bytes_total | VM-Memory | Byte | Average | The total amount of data read from swap space of the guest in bytes. | Cluster, Node, VM | Yes |
+| kubevirt_vmi_memory_swap_out_traffic_bytes_total | VM-Memory | Byte | Average | The total amount of memory written out to swap space of the guest in bytes. | Cluster, Node, VM | Yes |
+| kubevirt_vmi_memory_available_bytes | VM-Memory | Byte | Average | Amount of usable memory as seen by the domain. This value may not be accurate if a balloon driver is in use or if the guest OS does not initialize all assigned pages | Cluster, Node, VM | Yes |
+| kubevirt_vmi_memory_unused_bytes | VM-Memory | Byte | Average | The amount of memory left completely unused by the system. Memory that is available but used for reclaimable caches should NOT be reported as free | Cluster, Node, VM | Yes |
+| kubevirt_vmi_network_receive_packets_total | VM-Network | Count | Average | Total network traffic received packets. | Cluster, Node, VM, Interface | Yes |
+| kubevirt_vmi_network_transmit_packets_total | VM-Network | Count | Average | Total network traffic transmitted packets. | Cluster, Node, VM, Interface | Yes |
+| kubevirt_vmi_network_transmit_packets_dropped_total | VM-Network | Count | Average | The total number of tx packets dropped on vNIC interfaces. | Cluster, Node, VM, Interface | Yes |
+| kubevirt_vmi_outdated_count | VMI | Count | Average | Indication for the total number of VirtualMachineInstance workloads that are not running within the most up-to-date version of the virt-launcher environment. | Cluster, Node, VM, Phase | Yes |
+| kubevirt_vmi_phase_count | VMI | Count | Average | Sum of VMIs per phase and node. | Cluster, Node, VM, Phase | Yes |
+| kubevirt_vmi_storage_iops_read_total | VM-Storage | Count | Average | Total number of I/O read operations. | Cluster, Node, VM, Drive | Yes |
+| kubevirt_vmi_storage_iops_write_total | VM-Storage | Count | Average | Total number of I/O write operations. | Cluster, Node, VM, Drive | Yes |
+| kubevirt_vmi_storage_read_times_ms_total | VM-Storage | Mili Second | Average | Total time (ms) spent on read operations. | Cluster, Node, VM, Drive | Yes |
+| kubevirt_vmi_storage_write_times_ms_total | VM-Storage | Mili Second | Average | Total time (ms) spent on write operations | Cluster, Node, VM, Drive | Yes |
+| kubevirt_virt_controller_ready | Kubevirt Controller | Labels | Average | Indication for a virt-controller that is ready to take the lead. | Cluster, Pod | Yes |
+
+## Storage Appliances
+### ***pure storage***
| Metric | Category | Unit | Aggregation Type | Description | Dimensions | Exportable via <br/>Diagnostic Settings? |
-|-|-|--|-|-||-|
+|-|:-:|:--:|:-:|-|::|:-:|
| purefa_hardware_component_health | FlashArray | Labels | NA | FlashArray hardware component health status | Cluster, Appliance, Controller+Component+Index | Yes |
| purefa_hardware_power_volts | FlashArray | Volt | Average | FlashArray hardware power supply voltage | Cluster, Power Supply, Appliance | Yes |
| purefa_volume_performance_throughput_bytes | Volume | Byte | Average | FlashArray volume throughput | Cluster, Volume, Dimension, Appliance | Yes |
This section provides the list of metrics collected from the different component
| purefa_host_performance_latency_usec | Host | MicroSecond | Average | FlashArray host IO latency | Cluster, Node, Dimension, Appliance | Yes |
| purefa_host_performance_bandwidth_bytes | Host | Byte | Average | FlashArray host bandwidth | Cluster, Node, Dimension, Appliance | Yes |
| purefa_host_space_bytes | Host | Byte | Average | FlashArray host volumes allocated space | Cluster, Node, Dimension, Appliance | Yes |
-| purefa_host_performance_iops | Host | Count | Average | FlashArray host IOPS | Cluster, Node, Dimension, Appliance | Yes |
-
-## Typha
-
-| Metric | Category | Unit | Aggregation Type | Description | Dimensions | Exportable via Diagnostic Settings? |
-|-|-|--|-|-||-|
-| typha_connections_accepted | Typha | Count | Average | Total number of connections accepted over time. | Cluster, Node | Yes |
-| typha_connections_dropped | Typha | Count | Average | Total number of connections dropped due to rebalancing. | Cluster, Node | Yes |
-| typha_ping_latency_count | Typha | Count | Average | Round-trip ping latency to client. | Cluster, Node | Yes |
-
-## Collected set of metrics
-
-| Metric | Category | Unit | Aggregation Type | Description | Dimensions | Exportable via Diagnostic Settings? |
-| - | -- | -- | - | | - | -- |
-| purefa_hardware_component_health | FlashArray | Labels | NA | FlashArray hardware component health status | Cluster, Appliance, Controller+Component+Index | Yes |
-| purefa_hardware_power_volts | FlashArray | Volt | Average | FlashArray hardware power supply voltage | Cluster, Power Supply, Appliance | Yes |
-| purefa_volume_performance_throughput_bytes | Volume | Byte | Average | FlashArray volume throughput | Cluster, Volume, Dimension, Appliance | Yes |
-| purefa_volume_space_datareduction_ratio | Volume | Count | Average | FlashArray volumes data reduction ratio | Cluster, Volume, Appliance | Yes |
-| purefa_hardware_temperature_celsius | FlashArray | Celcius | Average | FlashArray hardware temperature sensors | Cluster, Controller, Sensor, Appliance | Yes |
-| purefa_alerts_total | FlashArray | Count | Average | Number of alert events | Cluster, Severity | Yes |
-| purefa_array_performance_iops | FlashArray | Count | Average | FlashArray IOPS | Cluster, Dimension, Appliance | Yes |
-| purefa_array_performance_qdepth | FlashArray | Count | Average | FlashArray queue depth | Cluster, Appliance | Yes |
-| purefa_info | FlashArray | Labels | NA | FlashArray host volumes connections | Cluster, Array | Yes |
-| purefa_volume_performance_latency_usec | Volume | MicroSecond | Average | FlashArray volume IO latency | Cluster, Volume, Dimension, Appliance | Yes |
-| purefa_volume_space_bytes | Volume | Byte | Average | FlashArray allocated space | Cluster, Volume, Dimension, Appliance | Yes |
-| purefa_volume_performance_iops | Volume | Count | Average | FlashArray volume IOPS | Cluster, Volume, Dimension, Appliance | Yes |
-| purefa_volume_space_size_bytes | Volume | Byte | Average | FlashArray volumes size | Cluster, Volume, Appliance | Yes |
-| purefa_array_performance_latency_usec | FlashArray | MicroSecond | Average | FlashArray latency | Cluster, Dimension, Appliance | Yes |
-| purefa_array_space_used_bytes | FlashArray | Byte | Average | FlashArray overall used space | Cluster, Dimension, Appliance | Yes |
-| purefa_array_performance_bandwidth_bytes | FlashArray | Byte | Average | FlashArray bandwidth | Cluster, Dimension, Appliance | Yes |
-| purefa_array_performance_avg_block_bytes | FlashArray | Byte | Average | FlashArray avg block size | Cluster, Dimension, Appliance | Yes |
-| purefa_array_space_datareduction_ratio | FlashArray | Count | Average | FlashArray overall data reduction | Cluster, Appliance | Yes |
-| purefa_array_space_capacity_bytes | FlashArray | Byte | Average | FlashArray overall space capacity | Cluster, Appliance | Yes |
-| purefa_array_space_provisioned_bytes | FlashArray | Byte | Average | FlashArray overall provisioned space | Cluster, Appliance | Yes |
-| purefa_host_space_datareduction_ratio | Host | Count | Average | FlashArray host volumes data reduction ratio | Cluster, Node, Appliance | Yes |
-| purefa_host_space_size_bytes | Host | Byte | Average | FlashArray host volumes size | Cluster, Node, Appliance | Yes |
-| purefa_host_performance_latency_usec | Host | MicroSecond | Average | FlashArray host IO latency | Cluster, Node, Dimension, Appliance | Yes |
-| purefa_host_performance_bandwidth_bytes | Host | Byte | Average | FlashArray host bandwidth | Cluster, Node, Dimension, Appliance | Yes |
-| purefa_host_space_bytes | Host | Byte | Average | FlashArray host volumes allocated space | Cluster, Node, Dimension, Appliance | Yes |
-| purefa_host_performance_iops | Host | Count | Average | FlashArray host IOPS | Cluster, Node, Dimension, Appliance | Yes |
-| kubevirt_info | Host | Labels | NA | Version information. | Cluster, Node | Yes |
-| kubevirt_virt_controller_leading | Kubevirt Controller | Labels | Average | Indication for an operating virt-controller. | Cluster, Pod | Yes |
-| kubevirt_virt_operator_ready | Kubevirt Operator | Labels | Average | Indication for a virt operator being ready | Cluster, Pod | Yes |
-| kubevirt_vmi_cpu_affinity | VM-CPU | Labels | Average | Details the cpu pinning map via boolean labels in the form of vcpu_X_cpu_Y. | Cluster, Node, VM | Yes |
-| kubevirt_vmi_memory_actual_balloon_bytes | VM-Memory | Byte | Average | Current balloon size in bytes. | Cluster, Node, VM | Yes |
-| kubevirt_vmi_memory_domain_total_bytes | VM-Memory | Byte | Average | The amount of memory in bytes allocated to the domain. The memory value in domain xml file | Cluster, Node, VM | Yes |
-| kubevirt_vmi_memory_swap_in_traffic_bytes_total | VM-Memory | Byte | Average | The total amount of data read from swap space of the guest in bytes. | Cluster, Node, VM | Yes |
-| kubevirt_vmi_memory_swap_out_traffic_bytes_total | VM-Memory | Byte | Average | The total amount of memory written out to swap space of the guest in bytes. | Cluster, Node, VM | Yes |
-| kubevirt_vmi_memory_available_bytes | VM-Memory | Byte | Average | Amount of usable memory as seen by the domain. This value may not be accurate if a balloon driver is in use or if the guest OS does not initialize all assigned pages | Cluster, Node, VM | Yes |
-| kubevirt_vmi_memory_unused_bytes | VM-Memory | Byte | Average | The amount of memory left completely unused by the system. Memory that is available but used for reclaimable caches should NOT be reported as free | Cluster, Node, VM | Yes |
-| kubevirt_vmi_network_receive_packets_total | VM-Network | Count | Average | Total network traffic received packets. | Cluster, Node, VM, Interface | Yes |
-| kubevirt_vmi_network_transmit_packets_total | VM-Network | Count | Average | Total network traffic transmitted packets. | Cluster, Node, VM, Interface | Yes |
-| kubevirt_vmi_network_transmit_packets_dropped_total | VM-Network | Count | Average | The total number of tx packets dropped on vNIC interfaces. | Cluster, Node, VM, Interface | Yes |
-| kubevirt_vmi_outdated_count | VMI | Count | Average | Indication for the total number of VirtualMachineInstance workloads that are not running within the most up-to-date version of the virt-launcher environment. | Cluster, Node, VM, Phase | Yes |
-| kubevirt_vmi_phase_count | VMI | Count | Average | Sum of VMIs per phase and node. | Cluster, Node, VM, Phase | Yes |
-| kubevirt_vmi_storage_iops_read_total | VM-Storage | Count | Average | Total number of I/O read operations. | Cluster, Node, VM, Drive | Yes |
-| kubevirt_vmi_storage_iops_write_total | VM-Storage | Count | Average | Total number of I/O write operations. | Cluster, Node, VM, Drive | Yes |
-| kubevirt_vmi_storage_read_times_ms_total | VM-Storage | Mili Second | Average | Total time (ms) spent on read operations. | Cluster, Node, VM, Drive | Yes |
-| kubevirt_vmi_storage_write_times_ms_total | VM-Storage | Mili Second | Average | Total time (ms) spent on write operations | Cluster, Node, VM, Drive | Yes |
-| kubevirt_virt_controller_ready | Kubevirt Controller | Labels | Average | Indication for a virt-controller that is ready to take the lead. | Cluster, Pod | Yes |
-| coredns_dns_requests_total | DNS Requests | Count | Average | total query count | Cluster, Node, Protocol | Yes |
-| coredns_dns_responses_total | DNS response/errors | Count | Average | response per zone, rcode and plugin. | Cluster, Node, Rcode | Yes |
-| coredns_health_request_failures_total | DNS Health Request Failures | Count | Average | The number of times the internal health check loop failed to query | Cluster, Node | Yes |
-| coredns_panics_total | DNS panic | Count | Average | total number of panics | Cluster, Node | Yes |
-| kube_daemonset_status_current_number_scheduled | Kube Daemonset | Count | Average | Number of Daemonsets scheduled | Cluster | Yes |
-| kube_daemonset_status_desired_number_scheduled | Kube Daemonset | Count | Average | Number of daemoset replicas desired | Cluster | Yes |
-| kube_deployment_status_replicas_ready | Kube Deployment | Count | Average | Number of deployment replicas present | Cluster | Yes |
-| kube_deployment_status_replicas_available | Kube Deployment | Count | Average | Number of deployment replicas available | Cluster | Yes |
-| kube_job_status_active | Kube job - Active | Labels | Average | Number of actively running jobs | Cluster, Job | Yes |
-| kube_job_status_failed | Kube job - Failed | Labels | Average | Number of failed jobs | Cluster, Job | Yes |
-| kube_job_status_succeeded | Kube job - Succeeded | Labels | Average | Number of successful jobs | Cluster, Job | Yes |
-| kube_node_status_allocatable | Node - Allocatable | Labels | Average | The amount of resources allocatable for pods | Cluster, Node, Resource | Yes |
-| kube_node_status_capacity | Node - Capacity | Labels | Average | The total amount of resources available for a node | Cluster, Node, Resource | Yes |
-| kube_node_status_condition | Kubenode status | Labels | Average | The condition of a cluster node | Cluster, Node, Condition, Status | Yes |
-| kube_pod_container_resource_limits | Pod container - Limits | Count | Average | The number of requested limit resource by a container. | Cluster, Node, Resource, Pod | Yes |
-| kube_pod_container_resource_requests | Pod container - Requests | Count | Average | The number of requested request resource by a container. | Cluster, Node, Resource, Pod | Yes |
-| kube_pod_container_state_started | Pod container - state | Second | Average | Start time in unix timestamp for a pod container | Cluster, Node, Container | Yes |
-| kube_pod_container_status_last_terminated_reason | Pod container - state | Labels | Average | Describes the last reason the container was in terminated state | Cluster, Node, Container, Reason | Yes |
-| kube_pod_container_status_ready | Container State | Labels | Average | Describes whether the containers readiness check succeeded | Cluster, Node, Container | Yes |
-| kube_pod_container_status_restarts_total | Container State | Count | Average | The number of container restarts per container | Cluster, Node, Container | Yes |
-| kube_pod_container_status_running | Container State | Labels | Average | Describes whether the container is currently in running state | Cluster, Node, Container | Yes |
-| kube_pod_container_status_terminated | Container State | Labels | Average | Describes whether the container is currently in terminated state | Cluster, Node, Container | Yes |
-| kube_pod_container_status_terminated_reason | Container State | Labels | Average | Describes the reason the container is currently in terminated state | Cluster, Node, Container, Reason | Yes |
-| kube_pod_container_status_waiting | Container State | Labels | Average | Describes whether the container is currently in waiting state | Cluster, Node, Container | Yes |
-| kube_pod_container_status_waiting_reason | Container State | Labels | Average | Describes the reason the container is currently in waiting state | Cluster, Node, Container, Reason | Yes |
-| kube_pod_deletion_timestamp | Pod Deletion Timestamp | Timestamp | NA | Unix deletion timestamp | Cluster, Pod | Yes |
-| kube_pod_init_container_status_ready | Init Container State | Labels | Average | Describes whether the init containers readiness check succeeded | Cluster, Node, Container | Yes |
-| kube_pod_init_container_status_restarts_total | Init Container State | Count | Average | The number of restarts for the init container | Cluster, Container | Yes |
-| kube_pod_init_container_status_running | Init Container State | Labels | Average | Describes whether the init container is currently in running state | Cluster, Node, Container | Yes |
-| kube_pod_init_container_status_terminated | Init Container State | Labels | Average | Describes whether the init container is currently in terminated state | Cluster, Node, Container | Yes |
-| kube_pod_init_container_status_terminated_reason | Init Container State | Labels | Average | Describes the reason the init container is currently in terminated state | Cluster, Node, Container, Reason | Yes |
-| kube_pod_init_container_status_waiting | Init Container State | Labels | Average | Describes whether the init container is currently in waiting state | Cluster, Node, Container | Yes |
-| kube_pod_init_container_status_waiting_reason | Init Container State | Labels | Average | Describes the reason the init container is currently in waiting state | Cluster, Node, Container, Reason | Yes |
-| kube_pod_status_phase | Pod Status | Labels | Average | The pod's current phase | Cluster, Node, Container, Phase | Yes |
-| kube_pod_status_ready | Pod Status Ready | Count | Average | Describes whether the pod is ready to serve requests. | Cluster, Pod | Yes |
-| kube_pod_status_reason | Pod Status Reason | Labels | Average | The pod status reasons | Cluster, Node, Container, Reason | Yes |
-| kube_statefulset_replicas | Statefulset # of replicas | Count | Average | The number of desired pods for a statefulset | Cluster, Stateful Set | Yes |
-| kube_statefulset_status_replicas | Statefulset replicas status | Count | Average | The number of replicas per statefulsets | Cluster, Stateful Set | Yes |
-| controller_runtime_reconcile_errors_total | Kube Controller | Count | Average | Total number of reconciliation errors per controller | Cluster, Node, Controller | Yes |
-| controller_runtime_reconcile_total | Kube Controller | Count | Average | Total number of reconciliation per controller | Cluster, Node, Controller | Yes |
-| node_boot_time_seconds | Node - Boot time | Second | Average | Unix time of last boot | Cluster, Node | Yes |
-| node_cpu_seconds_total | Node - CPU | Second | Average | CPU usage | Cluster, Node, CPU, Mode | Yes |
-| node_disk_read_time_seconds_total | Node - Disk - Read Time | Second | Average | Disk read time | Cluster, Node, Device | Yes |
-| node_disk_reads_completed_total | Node - Disk - Read Completed | Count | Average | Disk reads completed | Cluster, Node, Device | Yes |
-| node_disk_write_time_seconds_total | Node - Disk - Write Time | Second | Average | Disk write time | Cluster, Node, Device | Yes |
-| node_disk_writes_completed_total | Node - Disk - Write Completed | Count | Average | Disk writes completed | Cluster, Node, Device | Yes |
-| node_entropy_available_bits | Node - Entropy Available | Bits | Average | Available node entropy | Cluster, Node | Yes |
-| node_filesystem_avail_bytes | Node - Disk - Available (TBD) | Byte | Average | Available filesystem size | Cluster, Node, Mountpoint | Yes |
-| node_filesystem_free_bytes | Node - Disk - Free (TBD) | Byte | Average | Free filesystem size | Cluster, Node, Mountpoint | Yes |
-| node_filesystem_size_bytes | Node - Disk - Size | Byte | Average | Filesystem size | Cluster, Node, Mountpoint | Yes |
-| node_filesystem_files | Node - Disk - Files | Count | Average | Total number of permitted inodes | Cluster, Node, Mountpoint | Yes |
-| node_filesystem_files_free | Node - Disk - Files Free | Count | Average | Total number of free inodes | Cluster, Node, Mountpoint | Yes |
-| node_filesystem_device_error | Node - Disk - FS Device error | Count | Average | Indicates whether there was a problem getting information for the filesystem | Cluster, Node, Mountpoint | Yes |
-| node_filesystem_readonly | Node - Disk - Files Readonly | Count | Average | Indicates whether the filesystem is read-only | Cluster, Node, Mountpoint | Yes |
-| node_hwmon_temp_celsius | Node - temperature (TBD) | Celsius | Average | Hardware monitor for temperature | Cluster, Node, Chip, Sensor | Yes |
-| node_hwmon_temp_max_celsius | Node - temperature (TBD) | Celsius | Average | Hardware monitor for maximum temperature | Cluster, Node, Chip, Sensor | Yes |
-| node_load1 | Node - Memory | Second | Average | 1m load average. | Cluster, Node | Yes |
-| node_load15 | Node - Memory | Second | Average | 15m load average. | Cluster, Node | Yes |
-| node_load5 | Node - Memory | Second | Average | 5m load average. | Cluster, Node | Yes |
-| node_memory_HardwareCorrupted_bytes | Node - Memory | Byte | Average | Memory information field HardwareCorrupted_bytes. | Cluster, Node | Yes |
-| node_memory_MemAvailable_bytes | Node - Memory | Byte | Average | Memory information field MemAvailable_bytes. | Cluster, Node | Yes |
-| node_memory_MemFree_bytes | Node - Memory | Byte | Average | Memory information field MemFree_bytes. | Cluster, Node | Yes |
-| node_memory_MemTotal_bytes | Node - Memory | Byte | Average | Memory information field MemTotal_bytes. | Cluster, Node | Yes |
-| node_memory_numa_HugePages_Free | Node - Memory | Byte | Average | Free hugepages | Cluster, Node, NUMA | Yes |
-| node_memory_numa_HugePages_Total | Node - Memory | Byte | Average | Total hugepages | Cluster, Node, NUMA | Yes |
-| node_memory_numa_MemFree | Node - Memory | Byte | Average | NUMA memory free | Cluster, Node, NUMA | Yes |
-| node_memory_numa_MemTotal | Node - Memory | Byte | Average | Total NUMA memory | Cluster, Node, NUMA | Yes |
-| node_memory_numa_MemUsed | Node - Memory | Byte | Average | NUMA memory used | Cluster, Node, NUMA | Yes |
-| node_memory_numa_Shmem | Node - Memory | Byte | Average | Shared memory | Cluster, Node | Yes |
-| node_os_info | Node - OS Info | Labels | Average | OS details | Cluster, Node | Yes |
-| node_network_carrier_changes_total | Node Network - Carrier changes | Count | Average | carrier_changes_total value of `/sys/class/net/<iface>`. | Cluster, node, Device | Yes |
-| node_network_receive_packets_total | NodeNetwork - receive packets | Count | Average | Network device statistic receive_packets. | Cluster, node, Device | Yes |
-| node_network_transmit_packets_total | NodeNetwork - transmit packets | Count | Average | Network device statistic transmit_packets. | Cluster, node, Device | Yes |
-| node_network_up | Node Network - Interface state | Labels | Average | Value is 1 if operstate is 'up', 0 otherwise. | Cluster, node, Device | Yes |
-| node_network_mtu_bytes | Network Interface - MTU | Byte | Average | mtu_bytes value of `/sys/class/net/<iface>`. | Cluster, node, Device | Yes |
-| node_network_receive_errs_total | Network Interface - Error totals | Count | Average | Network device statistic receive_errs | Cluster, node, Device | Yes |
-| node_network_receive_multicast_total | Network Interface - Multicast | Count | Average | Network device statistic receive_multicast. | Cluster, node, Device | Yes |
-| node_network_speed_bytes | Network Interface - Speed | Byte | Average | speed_bytes value of `/sys/class/net/<iface>`. | Cluster, node, Device | Yes |
-| node_network_transmit_errs_total | Network Interface - Error totals | Count | Average | Network device statistic transmit_errs. | Cluster, node, Device | Yes |
-| node_timex_sync_status | Node Timex | Labels | Average | Is clock synchronized to a reliable server (1 = yes, 0 = no). | Cluster, Node | Yes |
-| node_timex_maxerror_seconds | Node Timex | Second | Average | Maximum error in seconds. | Cluster, Node | Yes |
-| node_timex_offset_seconds | Node Timex | Second | Average | Time offset between the local system and the reference clock. | Cluster, Node | Yes |
-| node_vmstat_oom_kill | Node VM Stat | Count | Average | /proc/vmstat information field oom_kill. | Cluster, Node | Yes |
-| node_vmstat_pswpin | Node VM Stat | Count | Average | /proc/vmstat information field pswpin. | Cluster, Node | Yes |
-| node_vmstat_pswpout | Node VM Stat | Count | Average | /proc/vmstat information field pswpout | Cluster, Node | Yes |
-| node_dmi_info | Node Bios Information | Labels | Average | Node environment information | Cluster, Node | Yes |
-| node_time_seconds | Node - Time | Second | NA | System time in seconds since epoch (1970) | Cluster, Node | Yes |
-| container_fs_io_time_seconds_total | Containers - Filesystem | Second | Average | Cumulative count of seconds spent doing I/Os | Cluster, Node, Pod+Container+Interface | Yes |
-| container_memory_failcnt | Containers - Memory | Count | Average | Number of times memory usage hit limits | Cluster, Node, Pod+Container+Interface | Yes |
-| container_memory_usage_bytes | Containers - Memory | Byte | Average | Current memory usage, including all memory regardless of when it was accessed | Cluster, Node, Pod+Container+Interface | Yes |
-| container_oom_events_total | Container OOM Events | Count | Average | Count of out of memory events observed for the container | Cluster, Node, Pod+Container | Yes |
-| container_start_time_seconds | Containers - Start Time | Second | Average | Start time of the container since unix epoch | Cluster, Node, Pod+Container+Interface | Yes |
-| container_tasks_state | Containers - Task state | Labels | Average | Number of tasks in given state | Cluster, Node, Pod+Container+Interface, State | Yes |
-| kubelet_running_containers | Containers - # of running | Labels | Average | Number of containers currently running | Cluster, node, Container State | Yes |
-| kubelet_running_pods | Pods - # of running | Count | Average | Number of pods that have a running pod sandbox | Cluster, Node | Yes |
-| kubelet_runtime_operations_errors_total | Kubelet Runtime Op Errors | Count | Average | Cumulative number of runtime operation errors by operation type. | Cluster, Node | Yes |
-| kubelet_volume_stats_available_bytes | Pods - Storage - Available | Byte | Average | Number of available bytes in the volume | Cluster, Node, Persistent Volume Claim | Yes |
-| kubelet_volume_stats_capacity_bytes | Pods - Storage - Capacity | Byte | Average | Capacity in bytes of the volume | Cluster, Node, Persistent Volume Claim | Yes |
-| kubelet_volume_stats_used_bytes | Pods - Storage - Used | Byte | Average | Number of used bytes in the volume | Cluster, Node, Persistent Volume Claim | Yes |
-| idrac_power_input_watts | Node - Power | Watt | Average | Power Input | Cluster, Node, PSU | Yes |
-| idrac_power_output_watts | Node - Power | Watt | Average | Power Output | Cluster, Node, PSU | Yes |
-| idrac_power_capacity_watts | Node - Power | Watt | Average | Power Capacity | Cluster, Node, PSU | Yes |
-| idrac_sensors_temperature | Node - Temperature | Celsius | Average | iDRAC sensor temperature | Cluster, Node, Name | Yes |
-| idrac_power_on | Node - Power | Labels | Average | iDRAC power on status | Cluster, Node | Yes |
-| felix_ipsets_calico | Felix | Count | Average | Number of active Calico IP sets. | Cluster, Node | Yes |
-| felix_cluster_num_host_endpoints | Felix | Count | Average | Total number of host endpoints cluster-wide. | Cluster, Node | Yes |
-| felix_active_local_endpoints | Felix | Count | Average | Number of active endpoints on this host. | Cluster, Node | Yes |
-| felix_cluster_num_hosts | Felix | Count | Average | Total number of Calico hosts in the cluster. | Cluster, Node | Yes |
-| felix_cluster_num_workload_endpoints | Felix | Count | Average | Total number of workload endpoints cluster-wide. | Cluster, Node | Yes |
-| felix_int_dataplane_failures | Felix | Count | Average | Number of times dataplane updates failed and will be retried. | Cluster, Node | Yes |
-| felix_ipset_errors | Felix | Count | Average | Number of ipset command failures. | Cluster, Node | Yes |
-| felix_iptables_restore_errors | Felix | Count | Average | Number of iptables-restore errors. | Cluster, Node | Yes |
-| felix_iptables_save_errors | Felix | Count | Average | Number of iptables-save errors. | Cluster, Node | Yes |
-| felix_resyncs_started | Felix | Count | Average | Number of times Felix has started resyncing with the datastore. | Cluster, Node | Yes |
-| felix_resync_state | Felix | Count | Average | Current datastore state. | Cluster, Node | Yes |
-| typha_connections_accepted | Typha | Count | Average | Total number of connections accepted over time. | Cluster, Node | Yes |
-| typha_connections_dropped | Typha | Count | Average | Total number of connections dropped due to rebalancing. | Cluster, Node | Yes |
-| typha_ping_latency_count | Typha | Count | Average | Round-trip ping latency to client. | Cluster, Node | Yes |
-| etcd_disk_backend_commit_duration_seconds_sum | Etcd Disk | Second | Average | The latency distributions of commit called by backend. | Cluster, Pod | Yes |
-| etcd_disk_wal_fsync_duration_seconds_sum | Etcd Disk | Second | Average | The latency distributions of fsync called by wal | Cluster, Pod | Yes |
-| etcd_server_is_leader | Etcd Server | Labels | Average | Whether node is leader | Cluster, Pod | Yes |
-| etcd_server_is_learner | Etcd Server | Labels | Average | Whether node is learner | Cluster, Pod | Yes |
-| etcd_server_leader_changes_seen_total | Etcd Server | Count | Average | The number of leader changes seen. | Cluster, Pod, Tier | Yes |
-| etcd_server_proposals_committed_total | Etcd Server | Count | Average | The total number of consensus proposals committed. | Cluster, Pod, Tier | Yes |
-| etcd_server_proposals_applied_total | Etcd Server | Count | Average | The total number of consensus proposals applied. | Cluster, Pod, Tier | Yes |
-| etcd_server_proposals_failed_total | Etcd Server | Count | Average | The total number of failed proposals seen. | Cluster, Pod, Tier | Yes |
-| apiserver_audit_requests_rejected_total | Apiserver | Count | Average | Counter of apiserver requests rejected due to an error in audit logging backend. | Cluster, Node | Yes |
-| apiserver_client_certificate_expiration_seconds_sum | Apiserver | Second | Sum | Distribution of the remaining lifetime on the certificate used to authenticate a request. | Cluster, Node | Yes |
-| apiserver_storage_data_key_generation_failures_total | Apiserver | Count | Average | Total number of failed data encryption key(DEK) generation operations. | Cluster, Node | Yes |
-| apiserver_tls_handshake_errors_total | Apiserver | Count | Average | Number of requests dropped with 'TLS handshake error from' error | Cluster, Node | Yes |
+| purefa_host_performance_iops | Host | Count | Average | FlashArray host IOPS | Cluster, Node, Dimension, Appliance | Yes |
operator-nexus Quickstarts Tenant Workload Deployment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/quickstarts-tenant-workload-deployment.md
Gather the `resourceId` values of the L2 and L3 isolation domains that you creat
#### Create a cloud services network
-Your VM requires at least one cloud services network. You need the egress endpoints that you want to add to the proxy for your VM to access.
+Your VM requires at least one cloud services network. You need the egress endpoints that you want to add to the proxy for your VM to access. This list should include any domains needed to pull images or access data, such as ".azurecr.io" or ".docker.io".
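For instance, the allow-list is passed to `--additional-egress-endpoints` as JSON in the category/endpoints shape used elsewhere in these articles. The abbreviated sketch below is illustrative only: the category label, domain names, and port are placeholders, and required parameters such as the resource group, location, and extended location are omitted; the full command follows.

```azurecli
# Abbreviated, illustrative sketch only -- not the complete command.
# Replace the category label and endpoint list with the registries and data
# endpoints your workloads actually need.
az networkcloud cloudservicesnetwork create --name "<YourCloudServicesNetworkName>" \
  --additional-egress-endpoints '[{
    "category": "common",
    "endpoints": [
      { "domainName": ".azurecr.io", "port": 443 },
      { "domainName": ".docker.io", "port": 443 }
    ]
  }]'
```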
```azurecli az networkcloud cloudservicesnetwork create --name "<YourCloudServicesNetworkName>" \
You also need to configure the following information for your network. Valid val
##### Create a default CNI network for an AKS hybrid cluster
+Each cluster needs its own default CNI Network (Calico Network).
+ You need the following information: - The `resourceId` value of the L3 isolation domain that you created earlier to configure the VLAN for this network
You don't need to specify the network MTU here, because the network will be conf
##### Create a cloud services network for an AKS hybrid cluster
-You need the egress endpoints that you want to add to the proxy for your VM to access.
+You need the egress endpoints that you want to add to the proxy for your VM to access. This list should include any domains needed to pull images or access data, such as ".azurecr.io" or ".docker.io".
```azurecli az networkcloud cloudservicesnetwork create --name "<YourCloudServicesNetworkName>" \
operator-nexus Template Cloud Native Network Function Deployment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/template-cloud-native-network-function-deployment.md
vNET, and finally the AKS-Hybrid cluster that will host the CNF.
## Common parameters ```bash
-export myloc="eastus"
-export myrg="****"
-export MSYS_NO_PATHCONV=1
-export mysub="******"
-export mynfid='******'
-export myplatcustloc='******'
-export myhakscustloc='******'
+export DC_LOCATION="eastus"
+export RESOURCE_GROUP="****"
+export SUBSCRIPTION="******"
+export CUSTOM_LOCATION='******'
+export HAKS_CUSTOM_LOCATION='******'
+export L3_ISD='******'
``` ## Initialization
-Set `$mysub` as the active subscription for your Operator Nexus instance.
+Set `$SUBSCRIPTION` as the active subscription for your Operator Nexus instance.
```azurecli
- az account set --subscription "$mysub"
+ az account set --subscription "$SUBSCRIPTION"
```
-Get list of `internalnetworks` in the L3 isolation-domain `$myl3isd`
+Get list of `internalnetworks` in the L3 isolation-domain `$L3_ISD`
```azurecli
- az nf internalnetwork list --l3domain "$myl3isd" \
- -g "$myrg" --subscription "$mysub"
+ az nf internalnetwork list --l3domain "$L3_ISD" \
+ -g "$RESOURCE_GROUP" --subscription "$SUBSCRIPTION"
``` ## Create Cloud Services Network ```bash
-export mycsn="******"
+export CLOUD_SERVICES_NETWORK="******"
``` ```azurecli
-az networkcloud cloudservicesnetwork create --name "$mycsn" \
resource-group "$myrg" \subscription "$mysub" \extended-location name="$myplatcustloc" type="CustomLocation" \location "$myloc" \
+az networkcloud cloudservicesnetwork create --name "$CLOUD_SERVICES_NETWORK" \
+--resource-group "$RESOURCE_GROUP" \
+--subscription "$SUBSCRIPTION" \
+--extended-location name="$CUSTOM_LOCATION" type="CustomLocation" \
+--location "$DC_LOCATION" \
--additional-egress-endpoints '[{ "category": "azure-resource-management", "endpoints": [{
az networkcloud cloudservicesnetwork create --name "$mycsn" \
### Validate Cloud Services Network has been created ```azurecli
-az networkcloud cloudservicesnetwork show --name "$mycsn" --resource-group "$myrg" --subscription "$mysub" -o table
+az networkcloud cloudservicesnetwork show --name "$CLOUD_SERVICES_NETWORK" --resource-group "$RESOURCE_GROUP" --subscription "$SUBSCRIPTION" -o table
``` ## Create Default CNI Network ```bash
-export myl3n=="******"
-export myalloctype="IPV4"
-export myvlan=****
-export myipv4sub=="******"
-export mymtu="9000"
-export myl3isdarm=="******"
+export DCN_NAME="******"
+export IP_ALLOCATION_TYPE="IPV4"
+export VLAN=****
+export IPV4_SUBNET="******"
+export L3_ISD_ARM="******"
``` ```azurecli
-az networkcloud defaultcninetwork create --name "$myl3n" \
- --resource-group "$myrg" \
- --subscription "$mysub" \
- --extended-location name="$myplatcustloc" type="CustomLocation" \
- --location "$myloc" \
+az networkcloud defaultcninetwork create --name "$DCN_NAME" \
+ --resource-group "$RESOURCE_GROUP" \
+ --subscription "$SUBSCRIPTION" \
+ --extended-location name="$CUSTOM_LOCATION" type="CustomLocation" \
+ --location "$DC_LOCATION" \
--bgp-peers '[]' \ --community-advertisements '[{"communities": ["65535:65281", "65535:65282"], "subnetPrefix": "10.244.0.0/16"}]' \ --service-external-prefixes '["10.101.65.0/24"]' \ --service-load-balancer-prefixes '["10.101.66.0/24"]' \
- --ip-allocation-type "$myalloctype" \
- --ipv4-connected-prefix "$myipv4sub" \
- --l3-isolation-domain-id "$myl3isdarm" \
- --vlan $myvlan
+ --ip-allocation-type "$IP_ALLOCATION_TYPE" \
+ --ipv4-connected-prefix "$IPV4_SUBNET" \
+ --l3-isolation-domain-id "$L3_ISD_ARM" \
+ --vlan $VLAN
``` ### Validate Default CNI Network has been created ```azurecli
-az networkcloud defaultcninetwork show --name "$myl3n" \
- --resource-group "$myrg" --subscription "$mysub" -o table
+az networkcloud defaultcninetwork show --name "$DCN_NAME" \
+ --resource-group "$RESOURCE_GROUP" --subscription "$SUBSCRIPTION" -o table
``` ## Set AKS-Hybrid Extended Location ```bash
-export myhakscustloc=="******"
+export HAKS_CUSTOM_LOCATION="******"
``` ## Create AKS-Hybrid Network Cloud Services Network vNET
export myhakscustloc=="******"
The AKS-Hybrid (HAKS) Virtual Networks are different from the Azure to on-premises Virtual Networks. ```bash
-export myhaksvnetname=="******"
-export myncnw=="******"
+export HAKS_VNET_NAME="******"
+export NC_NETWORK="******"
``` ```azurecli az hybridaks vnet create \
- --name "$myhaksvnetname" \
- --resource-group "$myrg" \
- --subscription "$mysub" \
- --custom-location "$myhakscustloc" \
- --aods-vnet-id "$myncnw"
+ --name "$HAKS_VNET_NAME" \
+ --resource-group "$RESOURCE_GROUP" \
+ --subscription "$SUBSCRIPTION" \
+ --custom-location "$HAKS_CUSTOM_LOCATION" \
+ --aods-vnet-id "$NC_NETWORK"
``` ## Create AKS-Hybrid Cluster
az hybridaks vnet create \
The AKS-Hybrid (HAKS) cluster will be used to host the CNF. ```bash
-export myhaksvnet1=="******"
-export myhaksvnet2=="******"
-export myencodedkey=="******"
-export ="******"
-export myclustername=="******"
+export HAKS_VNET_1="******"
+export HAKS_VNET_2="******"
+export ENCODED_KEY="******"
+export AAD_ID="******"
+export HAKS_CLUSTER_NAME="******"
``` ```azurecli az hybridaks create \
- --name "$myclustername" \
- --resource-group "$myrg" \
- --subscription "$mysub" \
- --aad-admin-group-object-ids "$AADID" \
- --custom-location "$myhakscustloc" \
+ --name "$HAKS_CLUSTER_NAME" \
+ --resource-group "$RESOURCE_GROUP" \
+ --subscription "$SUBSCRIPTION" \
+ --aad-admin-group-object-ids "$AAD_ID" \
+ --custom-location "$HAKS_CUSTOM_LOCATION" \
--location eastus \ --control-plane-vm-size NC_G4_v1 \ --node-vm-size NC_H16_v1 \
az hybridaks create \
--load-balancer-sku stacked-kube-vip \ --load-balancer-count 0 \ --load-balancer-vm-size '' \
- --vnet-ids "$myhaksvnet1","$myhaksvnet2" \
- --ssh-key-value "$myencodedkey" \
+ --vnet-ids "$HAKS_VNET_1","$HAKS_VNET_2" \
+ --ssh-key-value "$ENCODED_KEY" \
--control-plane-count 3 \ --node-count 4 ```
operator-nexus Template Virtualized Network Function Deployment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/template-virtualized-network-function-deployment.md
The first step is to create the workload L2 and L3 networks, followed by the cre
## Common parameters ```bash
-export myloc="eastus"
-export myrg="****"
-export MSYS_NO_PATHCONV=1
-export mysub="******"
-export mynfid='******'
-export myplatcustloc='******'
-export myhakscustloc='******'
-export myHybridAksPluginType='****'
+export DC_LOCATION="eastus"
+export RESOURCE_GROUP="****"
+export SUBSCRIPTION="******"
+export CUSTOM_LOCATION="******"
+export VM_NAME="******"
+export IP_ALLOCATION_TYPE="IPV4"
+export IPV4_SUBNET="******"
+export VLAN=****
+export L3_ISD_ARM="******"
+export L3_MGMT_NET_NAME="******"
+export L3_TRUSTED_NET_NAME="******"
+export L3_UNTRUSTED_NET_NAME="******"
+export L2_NETWORK_NAME="******"
+export L2_ISD_ARM="******"
+export VM_PARAMS="******"
```
-Note: hybrid-aks-plugin-type: valid values are `OSDevice`, `SR-IOV`, `DPDK`. Default: `SR-IOV`
- ## Initialization
-Set `$mysub` as the active subscription for your Operator Nexus instance.
+Set `$SUBSCRIPTION` as the active subscription for your Operator Nexus instance.
```azurecli
- az account set --subscription "$mysub"
+ az account set --subscription "$SUBSCRIPTION"
``` ## Create `cloudservicesnetwork` ```azurecli
-az networkcloud cloudservicesnetwork create --name "$mycsn" \
resource-group "$myrg" \subscription "$mysub" \extended-location name="$myplatcustloc" type="CustomLocation" \location "$myloc" \
+az networkcloud cloudservicesnetwork create --name "$CLOUD_SERVICES_NETWORK" \
+--resource-group "$RESOURCE_GROUP" \
+--subscription "$SUBSCRIPTION" \
+--extended-location name="$CUSTOM_LOCATION" type="CustomLocation" \
+--location "$DC_LOCATION" \
--additional-egress-endpoints "[{\"category\":\"azure-resource-management\",\"endpoints\":[{\"domainName\":\"az \",\"port\":443}]}]" \ --debug ```
az networkcloud cloudservicesnetwork create --name "$mycsn" \
### Validate `cloudservicesnetwork` has been created ```azurecli
-az networkcloud cloudservicesnetwork show --name "$mycsn" --resource-group "$myrg" --subscription "$mysub" -o table
+az networkcloud cloudservicesnetwork show --name "$CLOUD_SERVICES_NETWORK" --resource-group "$RESOURCE_GROUP" --subscription "$SUBSCRIPTION" -o table
``` ## Create management L3network ```azurecli
-az networkcloud l3network create --name "$myl3n-mgmt" \
resource-group "$myrg" \subscription "$mysub" \extended-location name="$myplatcustloc" type="CustomLocation" \location "$myloc" \hybrid-aks-ipam-enabled "False" \hybrid-aks-plugin-type "$myHybridAksPluginType" \ip-allocation-type "$myalloctype" \ipv4-connected-prefix "$myipv4sub" \l3-isolation-domain-id "$myl3isdarm" \vlan $myvlan \
+az networkcloud l3network create --name "$L3_MGMT_NET_NAME" \
+--resource-group "$RESOURCE_GROUP" \
+--subscription "$SUBSCRIPTION" \
+--extended-location name="$CUSTOM_LOCATION" type="CustomLocation" \
+--location "$DC_LOCATION" \
+--ip-allocation-type "$IP_ALLOCATION_TYPE" \
+--ipv4-connected-prefix "$IPV4_SUBNET" \
+--l3-isolation-domain-id "$L3_ISD_ARM" \
+--vlan $VLAN \
--debug ``` ### Validate `l3network` has been created ```azurecli
-az networkcloud l3network show --name "$myl3n-mgmt" \
- --resource-group "$myrg" --subscription "$mysub"
+az networkcloud l3network show --name "$L3_MGMT_NET_NAME" \
+ --resource-group "$RESOURCE_GROUP" --subscription "$SUBSCRIPTION"
``` ## Create trusted L3network ```azurecli
-az networkcloud l3network create --name "$myl3n-trust" \
resource-group "$myrg" \subscription "$mysub" \extended-location name="$myplatcustloc" type="CustomLocation" \location "$myloc" \hybrid-aks-ipam-enabled "False" \hybrid-aks-plugin-type "$myHybridAksPluginType" \ip-allocation-type "$myalloctype" \ipv4-connected-prefix "$myipv4sub" \l3-isolation-domain-id "$myl3isdarm" \vlan $myvlan \
+az networkcloud l3network create --name "$L3_TRUSTED_NET_NAME" \
+--resource-group "$RESOURCE_GROUP" \
+--subscription "$SUBSCRIPTION" \
+--extended-location name="$CUSTOM_LOCATION" type="CustomLocation" \
+--location "$DC_LOCATION" \
+--ip-allocation-type "$IP_ALLOCATION_TYPE" \
+--ipv4-connected-prefix "$IPV4_SUBNET" \
+--l3-isolation-domain-id "$L3_ISD_ARM" \
+--vlan $VLAN \
--debug ``` ### Validate trusted `l3network` has been created ```azurecli
-az networkcloud l3network show --name "$myl3n-trust" \
- --resource-group "$myrg" --subscription "$mysub"
+az networkcloud l3network show --name "$L3_TRUSTED_NET_NAME" \
+ --resource-group "$RESOURCE_GROUP" --subscription "$SUBSCRIPTION"
``` ## Create untrusted L3network ```azurecli
-az networkcloud l3network create --name "$myl3n-untrust" \
resource-group "$myrg" \subscription "$mysub" \extended-location name="$myplatcustloc" type="CustomLocation" \location "$myloc" \hybrid-aks-ipam-enabled "False" \hybrid-aks-plugin-type "$myHybridAksPluginType" \ip-allocation-type "$myalloctype" \ipv4-connected-prefix "$myipv4sub" \l3-isolation-domain-id "$myl3isdarm" \vlan $myvlan \
+az networkcloud l3network create --name "$L3_UNTRUSTED_NET_NAME" \
+--resource-group "$RESOURCE_GROUP" \
+--subscription "$SUBSCRIPTION" \
+--extended-location name="$CUSTOM_LOCATION" type="CustomLocation" \
+--location "$DC_LOCATION" \
+--ip-allocation-type "$IP_ALLOCATION_TYPE" \
+--ipv4-connected-prefix "$IPV4_SUBNET" \
+--l3-isolation-domain-id "$L3_ISD_ARM" \
+--vlan $VLAN \
--debug ``` ### Validate untrusted `l3network` has been created ```azurecli
-az networkcloud l3network show --name "$myl3n-untrust" \
- --resource-group "$myrg" --subscription "$mysub"
+az networkcloud l3network show --name "$L3_UNTRUSTED_NET_NAME" \
+ --resource-group "$RESOURCE_GROUP" --subscription "$SUBSCRIPTION"
``` ## Create L2network ```azurecli
-az networkcloud l2network create --name "$myl2n" \
resource-group "$myrg" \subscription "$mysub" \extended-location name="$myplatcustloc" type="CustomLocation" \location "$myloc" \hybrid-aks-plugin-type "$myHybridAksPluginType" \l2-isolation-domain-id "$myl2isdarm" \
+az networkcloud l2network create --name "$L2_NETWORK_NAME" \
+--resource-group "$RESOURCE_GROUP" \
+--subscription "$SUBSCRIPTION" \
+--extended-location name="$CUSTOM_LOCATION" type="CustomLocation" \
+--location "$DC_LOCATION" \
+--l2-isolation-domain-id "$L2_ISD_ARM" \
--debug ``` ### Validate `l2network` has been created ```azurecli
-az networkcloud l2network show --name "$myl2n" --resource-group "$myrg" --subscription "$mysub"
+az networkcloud l2network show --name "$L2_NETWORK_NAME" --resource-group "$RESOURCE_GROUP" --subscription "$SUBSCRIPTION"
``` ## Create Virtual Machine and deploy VNF
az networkcloud l2network show --name "$myl2n" --resource-group "$myrg" --subscr
The virtual machine parameters include the VNF image. ```azurecli
-az networkcloud virtualmachine create --name "$myvm" \
resource-group "$myrg" --subscription "$mysub" \virtual-machine-parameters "$vmparm" \
+az networkcloud virtualmachine create --name "$VM_NAME" \
+--resource-group "$RESOURCE_GROUP" --subscription "$SUBSCRIPTION" \
+--virtual-machine-parameters "$VM_PARAMS" \
--debug ```
orbital Modem Chain https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/orbital/modem-chain.md
Select 'Raw XML' and then **paste the modem config raw (without JSON escapement)
### Named modem configuration We currently support the following named modem configurations.
-| Public Satellite Service | Named modem string | Note |
+| **Public Satellite Service** | **Named modem string** | **Note** |
|--|--|--| | Aqua Direct Broadcast | aqua_direct_broadcast | This is NASA AQUA's 15-Mbps direct broadcast service | | Aqua Direct Playback | aqua_direct_playback | This is NASA AQUA's 150-Mbps direct broadcast service |
-| Terra Direct Broadcast | terra_direct_broadcast | This is NASA Terra's 8.3-Mbps direct broadcast service |
+| Terra Direct Broadcast | terra_direct_broadcast | This is NASA Terra's 13.125-Mbps direct broadcast service |
| SNPP Direct Broadcast | snpp_direct_broadcast | This is NASA SNPP 15-Mbps direct broadcast service | | JPSS-1 Direct Broadcast | jpss-1_direct_broadcast | This is NASA JPSS-1 15-Mbps direct broadcast service |
We currently support the following named modem configurations.
> > Orbital does not have control over the downlink schedules for these public satellites. NASA conducts their own operations which may interrupt the downlink availabilities.
+ | **Spacecraft Title** | **Aqua** |**Suomi NPP**|**JPSS-1/NOAA-20**| **Terra** |
 + | :-- | :-: | :-: | :-: | :-: |
+ | `noradId:` | 27424 | 37849 | 43013 | 25994 |
 + | `centerFrequencyMhz:` | 8160 | 7812 | 7812 | 8212.5 |
 + | `bandwidthMhz:` | 15 | 30 | 30 | 45 |
 + | `direction:` | Downlink | Downlink | Downlink | Downlink |
+ | `polarization:` | RHCP | RHCP | RHCP | RHCP |
+ #### Specifying a named modem configuration using the API Enter the named modem string into the demodulationConfiguration parameter when using the API.
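As a rough sketch only, a downlink channel in a contact profile might reference a named modem string through its `demodulationConfiguration` value. The surrounding property names below mirror the spacecraft parameters tabled above and are assumptions for illustration, not the authoritative schema; the frequency and bandwidth correspond to the Aqua direct broadcast values.

```json
{
  "name": "aqua-downlink-channel",
  "centerFrequencyMHz": 8160,
  "bandwidthMHz": 15,
  "demodulationConfiguration": "aqua_direct_broadcast"
}
```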
orbital Prepare Network https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/orbital/prepare-network.md
Ensure the objects comply with the recommendations in this article. Note that th
## Prepare subnet for VNET injection Prerequisites:-- An entire subnet with no existing IPs allocated or in use that can be dedicated to the Azure Orbital Ground Station service in your virtual network in your resource group.
+- An entire subnet with no existing IPs allocated or in use that can be dedicated to the Azure Orbital Ground Station service in your virtual network within your resource group.
-Steps:
-1. Delegate a subnet to service named: Microsoft.Orbital/orbitalGateways. Follow instructions here: [Add or remove a subnet delegation in an Azure virtual network](../virtual-network/manage-subnet-delegation.md).
+Delegate a subnet to the service named Microsoft.Orbital/orbitalGateways. Follow the instructions in [Add or remove a subnet delegation in an Azure virtual network](../virtual-network/manage-subnet-delegation.md).
> [!NOTE] > Address range needs to be at least /24 (e.g., 10.0.0.0/23)
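For example, the delegation described above can be applied with the Azure CLI. This is a minimal sketch assuming placeholder resource group, virtual network, and subnet names:

```azurecli
# Delegate an existing, empty subnet to the Azure Orbital Ground Station service.
# All resource names are placeholders; the subnet must meet the size requirement above.
az network vnet subnet update \
  --resource-group <your-resource-group> \
  --vnet-name <your-vnet> \
  --name <your-delegated-subnet> \
  --delegations Microsoft.Orbital/orbitalGateways
```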
The following is an example of a typical VNET setup with a subnet delegated to A
Set the MTU of all desired endpoints to at least 3650.
-## Setting up the contact profile
+## Set up the contact profile
Prerequisites: - The subnet/vnet is in the same region as the contact profile. Ensure the contact profile properties are set as follows:
-| Property | Setting |
+| **Property** | **Setting** |
|-||
-| subnetId | Enter the full ID to the delegated subnet, which can be found inside the VNET's JSON view. subnetID is found under networkConfiguration. |
-| ipAddress | For each link, enter an IP for TCP/UDP server mode. Leave blank for TCP/UDP client mode. See section below for a detailed explanation on configuring this property. |
-| port | For each link, post must be within 49152 and 65535 range and must be unique across all links in the contact profile.|
+| subnetId | Enter the **full ID to the delegated subnet**, which can be found inside the VNET's JSON view. subnetID is found under networkConfiguration. |
+| ipAddress | For each link, enter an **IP for TCP/UDP server mode**. Leave blank for TCP/UDP client mode. See section below for a detailed explanation on configuring this property. |
+| port | For each link, the port must be in the range 49152-65535 and must be unique across all links in the contact profile. |
> [!NOTE] > You can have multiple links/channels in a contact profile, and you can have multiple IPs. But the combination of port/protocol must be unique. You cannot have two identical ports, even if you have two different destination IPs.
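To show how these properties fit together, the following is a shape illustration only: exact property names, nesting, and required fields may differ from the current API version, and the subnet ID, IP address, port, and protocol values are placeholders.

```json
{
  "properties": {
    "networkConfiguration": {
      "subnetId": "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.Network/virtualNetworks/<vnet>/subnets/<delegated-subnet>"
    },
    "links": [
      {
        "name": "downlink",
        "channels": [
          {
            "name": "channel-1",
            "endPoint": {
              "ipAddress": "10.0.1.10",
              "port": "50000",
              "protocol": "TCP"
            }
          }
        ]
      }
    ]
  }
}
```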
-## Scheduling the contact
+For more information, learn about [contact profiles](https://learn.microsoft.com/azure/orbital/concepts-contact-profile) and [how to configure a contact profile](https://learn.microsoft.com/azure/orbital/contact-profile).
-The platform pre-reserves IPs in the subnet when the contact is scheduled. These IPs represent the platform side endpoints for each link. IPs will be unique between contacts, and if multiple concurrent contacts are using the same subnet, Microsoft guarantees those IPs to be distinct. The service will fail to schedule the contact and an error will be returned if the service runs out of IPs or cannot allocate an IP.
+## Schedule the contact
+
+The Azure Orbital Ground Station platform pre-reserves IPs in the subnet when a contact is scheduled. These IPs represent the platform side endpoints for each link. IPs will be unique between contacts, and if multiple concurrent contacts are using the same subnet, Microsoft guarantees those IPs to be distinct. The service will fail to schedule the contact and an error will be returned if the service runs out of IPs or cannot allocate an IP.
When you create a contact, you can find these IPs by viewing the contact properties. Select JSON view in the portal or use the GET contact API call to view the contact properties. Make sure to use the current API version of 2022-03-01. The parameters of interest are below:
-| Parameter | Usage |
+| **Parameter** | **Usage** |
||-| | antennaConfiguration.destinationIP | Connect to this IP when you configure the link as tcp/udp client. | | antennaConfiguration.sourceIps | Data will come from this IP when you configure the link as tcp/udp server. |
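One way to retrieve these properties outside the portal is a plain REST call against the contact resource, using the API version called out above. The sketch below assumes the contact is a child resource of your spacecraft; substitute your own subscription, resource group, spacecraft, and contact names.

```azurecli
# Read back the scheduled contact, including antennaConfiguration.destinationIP
# and antennaConfiguration.sourceIps. All names in the URL are placeholders.
az rest --method get \
  --url "https://management.azure.com/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.Orbital/spacecrafts/<spacecraft-name>/contacts/<contact-name>?api-version=2022-03-01"
```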
You can use this information to set up network policies or to distinguish betwee
> - Only one destination IP is present. Any link in client mode should connect to this IP and the links are differentiated based on port. > - Many source IPs can be present. Links in server mode will connect to your specified IP address in the contact profile. The flows will originate from the source IPs present in this field and target the port as per the link details in the contact profile. There is no fixed assignment of link to source IP so please make sure to allow all IPs in any networking setup or firewalls.
+For more information, learn about [contacts](https://learn.microsoft.com/azure/orbital/concepts-contact) and [how to schedule a contact](https://learn.microsoft.com/azure/orbital/schedule-contact).
## Client/Server, TCP/UDP, and link direction
-The following sections describe how to set up the link flows based on direction on tcp or udp preference.
+The following sections describe how to set up the link flows based on link direction and TCP or UDP preference.
+
+> [!NOTE]
+> These settings are for managed modems only.
### Uplink
-| Setting | TCP Client | TCP Server | UDP Client | UDP Server |
-|--|-|--|-|--|
-| Contact Profile Link ipAddress | Blank | Routable IP from delegated subnet | Blank | Not applicable |
-| Contact Profile Link port | Unique port in 49152-65535 | Unique port in 49152-65535 | Unique port in 49152-65535 | Not applicable |
-| **Output** | | | | |
-| Contact Object destinationIP | Connect to this IP | Not applicable | Connect to this IP | Not applicable |
-| Contact Object sourceIP | Not applicable | Link will come from one of these IPs | Not applicable | Not applicable |
+| Setting | TCP Client | TCP Server | UDP Client | UDP Server |
+|:--|:--|:--|:--|:--|
+| _Contact Profile Link ipAddress_ | Blank | Routable IP from delegated subnet | Blank | Not applicable |
+| _Contact Profile Link port_ | Unique port in 49152-65535 | Unique port in 49152-65535 | Unique port in 49152-65535 | Not applicable |
+| **Output** | | | | |
+| _Contact Object destinationIP_ | Connect to this IP | Not applicable | Connect to this IP | Not applicable |
+| _Contact Object sourceIP_ | Not applicable | Link will come from one of these IPs | Not applicable | Not applicable |
### Downlink
-| Setting | TCP Client | TCP Server | UDP Client | UDP Server |
-|--|-|--|-|--|
-| Contact Profile Link ipAddress | Blank | Routable IP from delegated subnet | Not applicable | Routable IP from delegated subnet |
-| Contact Profile Link port | Unique port in 49152-65535 | Unique port in 49152-65535 | Not applicable | Unique port in 49152-65535 |
-| **Output** | | | | |
-| Contact Object destinationIP | Connect to this IP | Not applicable | Not applicable | Not applicable |
-| Contact Object sourceIP | Not applicable | Link will come from one of these IPs | Not applicable | Link will come from one of these IPs |
+| Setting | TCP Client | TCP Server | UDP Client | UDP Server |
+|:--|:--|:--|:--|:--|
+| _Contact Profile Link ipAddress_ | Blank | Routable IP from delegated subnet | Not applicable | Routable IP from delegated subnet |
+| _Contact Profile Link port_ | Unique port in 49152-65535 | Unique port in 49152-65535 | Not applicable | Unique port in 49152-65535 |
+| **Output** | | | | |
+| _Contact Object destinationIP_ | Connect to this IP | Not applicable | Not applicable | Not applicable |
+| _Contact Object sourceIP_ | Not applicable | Link will come from one of these IPs | Not applicable | Link will come from one of these IPs |
## Next steps
postgresql Concepts Compare Single Server Flexible Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-compare-single-server-flexible-server.md
The following table provides a list of high-level features and capabilities comp
| **Logical Replication** | | | | Support for logical decoding | Yes | Yes | | Support for native logical replication | No | Yes |
-| Support for PgLogical extension | No | Yes |
+| Support for pglogical extension | No | Yes |
| Support logical replication with HA | N/A | [Limited](concepts-high-availability.md#high-availabilitylimitations) | | **Disaster Recovery** | | | | Cross region DR | Using read replicas, geo-redundant backup | Using read replicas, Geo-redundant backup (in [selected regions](overview.md#azure-regions)) |
The following table provides a list of high-level features and capabilities comp
| Ability to restore to a different region | Yes (Geo-redundant) | Yes (in [selected regions](overview.md#azure-regions)) | | Ability to restore a deleted server | Limited via API | Limited via support ticket | | **Read Replica** | | |
-| Support for read replicas | Yes | Yes (Preview) |
+| Support for read replicas | Yes | Yes |
| Number of read replicas | 5 | 5 | | Mode of replication | Async | Async | | Cross-region support | Yes | Yes |
The following table provides a list of high-level features and capabilities comp
| Traffic | Active connections, Network In, Network out | Active connections, Max. used transaction ID, Network In, Network Out, succeeded connections | | **Extensions** | | (offers latest versions)| | TimescaleDB, orafce | Yes | Yes |
-| PgCron, lo, pglogical | No | Yes |
+| pg_cron, lo, pglogical | No | Yes |
| pgAudit | Preview | Yes | | **Security** | | |
-| Azure Active Directory Support(AAD) | Yes | Yes |
-| Customer managed encryption key(BYOK) | Yes | Yes |
+| Azure Active Directory Support (AAD) | Yes | Yes |
+| Customer managed encryption key (BYOK) | Yes | Yes |
| SCRAM Authentication (SHA-256) | No | Yes | | Secure Sockets Layer support (SSL) | Yes | Yes | | **Other features** | | |
The following table provides a list of high-level features and capabilities comp
| Resource health | Yes | Yes | | Service health | Yes | Yes | | Performance insights (iPerf) | Yes | Yes. Not available in portal |
-| Major version upgrades support | No | Preview |
+| Major version upgrades support | No | Yes (Preview) |
| Minor version upgrades | Yes. Automatic during maintenance window | Yes. Automatic during maintenance window |
postgresql How To Cost Optimization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/how-to-cost-optimization.md
This article provides a list of recommendations for optimizing Azure Postgres Fl
## 1. Use reserved capacity pricing
-Azure Postgres reserved capacity pricing allows committing to a specific capacity for 1-3 years, saving costs for customers using Azure Database for PostgreSQL service. The cost savings compared to pay-as-you-go pricing can be significant, depending on the amount of capacity reserved and the length of the term. Customers can purchase reserved capacity in increments of vCores and storage. Reserved capacity can cover costs for Azure Database for PostgreSQL servers in the same region, applied to the customer's Azure subscription. Reserved Pricing for Azure Postgres Flexible Server offers cost savings up to 40% for 1 year and up to 60% for 3-year commitments, for customers who reserve capacity. For more details, please refer Pricing Calculator | Microsoft Azure
-
+Azure Postgres reserved capacity pricing allows committing to a specific capacity for **1-3 years**, saving costs for customers using Azure Database for PostgreSQL service. The cost savings compared to pay-as-you-go pricing can be significant, depending on the amount of capacity reserved and the length of the term. Customers can purchase reserved capacity in increments of vCores and storage. Reserved capacity can cover costs for Azure Database for PostgreSQL servers in the same region, applied to the customer's Azure subscription. Reserved pricing for Azure Postgres Flexible Server offers cost savings of up to 40% for 1-year and up to 60% for 3-year commitments for customers who reserve capacity. For more details, refer to the Pricing Calculator | Microsoft Azure.
To learn more, refer [What are Azure Reservations?](../../cost-management-billing/reservations/save-compute-costs-reservations.md) ## 2. Scale compute up/down
To learn more about cost optimization, see:
* [Overview of the cost optimization pillar](/azure/architecture/framework/cost/overview) * [Tradeoffs for cost](/azure/architecture/framework/cost/tradeoffs) * [Checklist - Optimize cost](/azure/architecture/framework/cost/optimize-checklist)
-* [Checklist - Monitor cost](/azure/architecture/framework/cost/monitor-checklist)
+* [Checklist - Monitor cost](/azure/architecture/framework/cost/monitor-checklist)
+
postgresql How To Troubleshooting Guides https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/how-to-troubleshooting-guides.md
The table below provides information on the required log categories for each tro
| Autovacuum Blockers | PostgreSQL Sessions, PostgreSQL Database Remaining Transactions | N/A | N/A | N/A | | Autovacuum Monitoring | PostgreSQL Server Logs, PostgreSQL Tables Statistics, PostgreSQL Database Remaining Transactions | N/A | N/A | log_autovacuum_min_duration | | High CPU Usage | PostgreSQL Server Logs, PostgreSQL Sessions, AllMetrics | pg_qs.query_capture_mode to TOP or ALL | metrics.collector_database_activity | N/A |
-| High IOPS Usage | PostgreSQL Query Store Runtime, PostgreSQL Server Logs, PostgreSQL Sessions, PostgreSQL Query Store Wait Statistics | pgms_wait_sampling.query_capture_mode to ALL | metrics.collector_database_activity | N/A |
+| High IOPS Usage | PostgreSQL Query Store Runtime, PostgreSQL Server Logs, PostgreSQL Sessions, PostgreSQL Query Store Wait Statistics | pgms_wait_sampling.query_capture_mode to ALL | metrics.collector_database_activity | track_io_timing to ON |
| High Memory Usage | PostgreSQL Server Logs, PostgreSQL Sessions, PostgreSQL Query Store Runtime | pg_qs.query_capture_mode to TOP or ALL | metrics.collector_database_activity | N/A | | High Temporary Files | PostgreSQL Sessions, PostgreSQL Query Store Runtime, PostgreSQL Query Store Wait Statistics | pg_qs.query_capture_mode to TOP or ALL | metrics.collector_database_activity | N/A |
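As a minimal sketch (resource group and server names are placeholders), the server parameters called out in this table can be set with the Azure CLI before running the corresponding troubleshooting guides:

```azurecli
# Capture top-level queries for the CPU, memory, and temporary-files guides.
az postgres flexible-server parameter set \
  --resource-group <resource-group> --server-name <server-name> \
  --name pg_qs.query_capture_mode --value TOP

# Collect I/O timing statistics for the high IOPS usage guide.
az postgres flexible-server parameter set \
  --resource-group <resource-group> --server-name <server-name> \
  --name track_io_timing --value ON
```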
postgresql Concept Reserved Pricing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/concept-reserved-pricing.md
vCore size flexibility helps you scale up or down within a performance tier and
## How to view reserved instance purchase details
-You can view your reserved instance purchase details via the [Reservations menu on the left side of the Azure portal](https://aka.ms/reservations). For more information, see [How a reservation discount is applied to Azure Database for PostgreSQL](../../cost-management-billing/reservations/understand-reservation-charges-postgresql.md).
+You can view your reserved instance purchase details via the [Reservations menu on the left side of the Azure portal](https://aka.ms/reservations).
## Reserved instance expiration
-You'll receive email notifications, first one 30 days prior to reservation expiry and other one at expiration. Once the reservation expires, deployed VMs will continue to run and be billed at a pay-as-you-go rate. For more information, see [Reserved Instances for Azure Database for PostgreSQL](../../cost-management-billing/reservations/understand-reservation-charges-postgresql.md).
+You'll receive email notifications: the first one 30 days prior to reservation expiry, and another at expiration. Once the reservation expires, deployed VMs will continue to run and be billed at a pay-as-you-go rate.
## Need help? Contact us
To learn more about Azure Reservations, see the following articles:
* [What are Azure Reservations](../../cost-management-billing/reservations/save-compute-costs-reservations.md)? * [Manage Azure Reservations](../../cost-management-billing/reservations/manage-reserved-vm-instance.md) * [Understand Azure Reservations discount](../../cost-management-billing/reservations/understand-reservation-charges.md)
-* [Understand reservation usage for your Pay-As-You-Go subscription](../../cost-management-billing/reservations/understand-reservation-charges-postgresql.md)
* [Understand reservation usage for your Enterprise enrollment](../../cost-management-billing/reservations/understand-reserved-instance-usage-ea.md) * [Azure Reservations in Partner Center Cloud Solution Provider (CSP) program](/partner-center/azure-reservations)
private-5g-core Commission Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/commission-cluster.md
You can input all the settings on this page before selecting **Apply** at the bo
2. Create virtual networks representing the following interfaces (which you allocated subnets and IP addresses for in [Allocate subnets and IP addresses](complete-private-mobile-network-prerequisites.md#allocate-subnets-and-ip-addresses)): - Control plane access interface - User plane access interface
- - User plane data interface(s)
+ - User plane data interface(s)
+ You can name these networks yourself, but the name **must** match what you configure in the Azure portal when deploying Azure Private 5G Core. For example, you can use the names **N2**, **N3** and **N6-DN1**, **N6-DN2**, **N6-DN3** (for a 5G deployment with multiple data networks (DNs); just **N6** for a single DN deployment). You can optionally configure each virtual network with a virtual local area network identifier (VLAN ID) to enable layer 2 traffic separation. The following example is for a 5G multi-DN deployment without VLANs. :::zone pivot="ase-pro-2" 3. Carry out the following procedure three times, plus once for each of the supplementary data networks (so five times in total if you have three data networks):
You can input all the settings on this page before selecting **Apply** at the bo
- **Virtual switch**: select **vswitch-port3** for N2 and N3, and select **vswitch-port4** for N6-DN1, N6-DN2, and N6-DN3. - **Name**: *N2*, *N3*, *N6-DN1*, *N6-DN2*, or *N6-DN3*. - **VLAN**: 0
- - **Subnet mask** and **Gateway** must match the external values for the port.
+ - **Subnet mask** and **Gateway**: Use the correct subnet mask and gateway for the IP address configured on the ASE port (even if the gateway is not set on the ASE port itself).
- For example, *255.255.255.0* and *10.232.44.1* - If there's no gateway between the access interface and gNB/RAN, use the gNB/RAN IP address as the gateway address. If there's more than one gNB connected via a switch, choose one of the IP addresses for the gateway.
+ - **DNS server** and **DNS suffix** should be left blank.
1. Select **Modify** to save the configuration for this virtual network. 1. Select **Apply** at the bottom of the page and wait for the notification (a bell icon) to confirm that the settings have been applied. Applying the settings will take approximately 15 minutes. The page should now look like the following image:
You can input all the settings on this page before selecting **Apply** at the bo
:::image type="content" source="media/commission-cluster/commission-cluster-advanced-networking-ase-2.png" alt-text="Screenshot showing Advanced networking, with a table of virtual switch information and a table of virtual network information."::: :::zone-end :::zone pivot="ase-pro-gpu"+ 3. Carry out the following procedure three times, plus once for each of the supplementary data networks (so five times in total if you have three data networks): 1. Select **Add virtual network** and fill in the side panel:
- - **Virtual switch**: select **vswitch-port5** for N2 and N3, and select **vswitch-port6** for N6-DN1, N6-DN2, and N6-DN3.
- - **Name**: *N2*, *N3*, *N6-DN1*, *N6-DN2*, or *N6-DN3*.
- - **VLAN**: VLAN ID, or 0 if not using VLANs
- - **Subnet mask** and **Gateway** must match the external values for the port.
+ - **Virtual switch**: select **vswitch-port5** for N2 and N3, and select **vswitch-port6** for N6-DN1, N6-DN2, and N6-DN3.
+ - **Name**: *N2*, *N3*, *N6-DN1*, *N6-DN2*, or *N6-DN3*.
+ - **VLAN**: VLAN ID, or 0 if not using VLANs
+ - **Subnet mask** and **Gateway** must match the external values for the port.
- For example, *255.255.255.0* and *10.232.44.1* - If there's no gateway between the access interface and gNB/RAN, use the gNB/RAN IP address as the gateway address. If there's more than one gNB connected via a switch, choose one of the IP addresses for the gateway.
+ - **DNS server** and **DNS suffix** should be left blank.
1. Select **Modify** to save the configuration for this virtual network. 1. Select **Apply** at the bottom of the page and wait for the notification (a bell icon) to confirm that the settings have been applied. Applying the settings will take approximately 15 minutes. The page should now look like the following image:
You can input all the settings on this page before selecting **Apply** at the bo
In the local Azure Stack Edge UI, go to the **Kubernetes (Preview)** page. You'll set up all of the configuration and then apply it once, as you did in [Set up Advanced Networking](#set-up-advanced-networking). 1. Under **Compute virtual switch**, select **Modify**.
- 1. Select the management vswitch (for example, *vswitch-port3*)
+ 1. Select the management vswitch (for example, *vswitch-port2*)
1. Enter six IP addresses in a range for the node IP addresses on the management network. 1. Enter one IP address in a range for the service IP address, also on the management network. 1. Select **Modify** at the bottom of the panel to save the configuration.
The page should now look like the following image:
:::zone pivot="ase-pro-gpu" :::image type="content" source="media/commission-cluster/commission-cluster-kubernetes-preview-enabled.png" alt-text="Screenshot showing Kubernetes (Preview) with two tables. The first table is called Compute virtual switch and the second is called Virtual network. A green tick shows that the virtual networks are enabled for Kubernetes."::: :::zone-end+ ## Start the cluster and set up Arc Access the Azure portal and go to the **Azure Stack Edge** resource created in the Azure portal.
private-5g-core Complete Private Mobile Network Prerequisites https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/complete-private-mobile-network-prerequisites.md
You must set these up in addition to the [ports required for Azure Stack Edge (A
|TCP 31000 (HTTPS)|In|LAN|Yes|Required for Kubernetes dashboard to monitor your device.| |TCP 6443 (HTTPS)|In|LAN|Yes|Required for kubectl access| - ### Outbound firewall ports required Review and apply the firewall recommendations for the following
Do the following for each site you want to add to your private mobile network. D
| 2. | Order and prepare your Azure Stack Edge Pro 2 device. | [Tutorial: Prepare to deploy Azure Stack Edge Pro 2](../databox-online/azure-stack-edge-pro-2-deploy-prep.md) | | 3. | Rack and cable your Azure Stack Edge Pro device. </br></br>When carrying out this procedure, you must ensure that the device has its ports connected as follows:</br></br>- Port 2 - management</br>- Port 3 - access network</br>- Port 4 - data networks| [Tutorial: Install Azure Stack Edge Pro 2](/azure/databox-online/azure-stack-edge-pro-2-deploy-install?pivots=single-node.md) | | 4. | Connect to your Azure Stack Edge Pro 2 device using the local web UI. | [Tutorial: Connect to Azure Stack Edge Pro 2](/azure/databox-online/azure-stack-edge-pro-2-deploy-connect?pivots=single-node.md) |
-| 5. | Configure the network for your Azure Stack Edge Pro 2 device. When carrying out the *Enable compute network* step of this procedure, ensure you use the port you've connected to your management network.</br> </br> **Note:** When an ASE is used in an Azure Private 5G Core service, Port 2 is used for management rather than data. The tutorial linked assumes a generic ASE that uses Port 2 for data. </br></br> Verify the outbound connections from Azure Stack Edge Pro device to the Azure Arc endpoints are opened. </br></br>**Do not** configure virtual switches, virtual networks or compute IPs. | [Tutorial: Configure network for Azure Stack Edge Pro with GPU](/azure/databox-online/azure-stack-edge-pro-2-deploy-configure-network-compute-web-proxy?pivots=single-node.md)</br></br>[Azure Arc Network Requirements](/azure/azure-arc/kubernetes/quickstart-connect-cluster?tabs=azure-cli%2Cazure-cloud)</br></br>[Azure Arc Agent Network Requirements](/azure/architecture/hybrid/arc-hybrid-kubernetes)|
+| 5. | Configure the network for your Azure Stack Edge Pro 2 device. </br> </br> **Note:** When an ASE is used in an Azure Private 5G Core service, Port 2 is used for management rather than data. The tutorial linked assumes a generic ASE that uses Port 2 for data. </br></br> Verify the outbound connections from Azure Stack Edge Pro device to the Azure Arc endpoints are opened. </br></br>**Do not** configure virtual switches, virtual networks or compute IPs. | [Tutorial: Configure network for Azure Stack Edge Pro with GPU](/azure/databox-online/azure-stack-edge-pro-2-deploy-configure-network-compute-web-proxy?pivots=single-node.md)</br></br>[Azure Arc Network Requirements](/azure/azure-arc/kubernetes/quickstart-connect-cluster?tabs=azure-cli%2Cazure-cloud)</br></br>[Azure Arc Agent Network Requirements](/azure/architecture/hybrid/arc-hybrid-kubernetes)|
| 6. | Configure a name, DNS name, and (optionally) time settings. </br></br>**Do not** configure an update. | [Tutorial: Configure the device settings for Azure Stack Edge Pro 2](../databox-online/azure-stack-edge-pro-2-deploy-set-up-device-update-time.md) | | 7. | Configure certificates and configure encryption-at-rest for your Azure Stack Edge Pro 2 device. After changing the certificates, you may have to reopen the local UI in a new browser window to prevent the old cached certificates from causing problems.| [Tutorial: Configure certificates for your Azure Stack Edge Pro 2](/azure/databox-online/azure-stack-edge-pro-2-deploy-configure-certificates?pivots=single-node) | | 8. | Activate your Azure Stack Edge Pro 2 device. </br></br>**Do not** follow the section to *Deploy Workloads*. | [Tutorial: Activate Azure Stack Edge Pro 2](../databox-online/azure-stack-edge-pro-2-deploy-activate.md) |
Do the following for each site you want to add to your private mobile network. D
| 2. | Order and prepare your Azure Stack Edge Pro GPU device. | [Tutorial: Prepare to deploy Azure Stack Edge Pro with GPU](../databox-online/azure-stack-edge-gpu-deploy-prep.md) | | 3. | Rack and cable your Azure Stack Edge Pro device. </br></br>When carrying out this procedure, you must ensure that the device has its ports connected as follows:</br></br>- Port 5 - access network</br>- Port 6 - data networks</br></br>Additionally, you must have a port connected to your management network. You can choose any port from 2 to 4. | [Tutorial: Install Azure Stack Edge Pro with GPU](/azure/databox-online/azure-stack-edge-gpu-deploy-install?pivots=single-node.md) | | 4. | Connect to your Azure Stack Edge Pro device using the local web UI. | [Tutorial: Connect to Azure Stack Edge Pro with GPU](/azure/databox-online/azure-stack-edge-gpu-deploy-connect?pivots=single-node.md) |
-| 5. | Configure the network for your Azure Stack Edge Pro device. When carrying out the *Enable compute network* step of this procedure, ensure you use the port you've connected to your management network.</br> </br> **Note:** When an ASE is used in an Azure Private 5G Core service, Port 2 is used for management rather than data. The tutorial linked assumes a generic ASE that uses Port 2 for data. </br></br> Verify the outbound connections from Azure Stack Edge Pro device to the Azure Arc endpoints are opened. </br></br>**Do not** configure virtual switches, virtual networks or compute IPs. | [Tutorial: Configure network for Azure Stack Edge Pro with GPU](/azure/databox-online/azure-stack-edge-gpu-deploy-configure-network-compute-web-proxy?pivots=single-node.md)</br></br>[Azure Arc Network Requirements](/azure/azure-arc/kubernetes/quickstart-connect-cluster?tabs=azure-cli%2Cazure-cloud)</br></br>[Azure Arc Agent Network Requirements](/azure/architecture/hybrid/arc-hybrid-kubernetes)|
+| 5. | Configure the network for your Azure Stack Edge Pro device.</br> </br> **Note:** When an ASE is used in an Azure Private 5G Core service, Port 2 is used for management rather than data. The tutorial linked assumes a generic ASE that uses Port 2 for data. </br></br> Verify the outbound connections from Azure Stack Edge Pro device to the Azure Arc endpoints are opened. </br></br>**Do not** configure virtual switches, virtual networks or compute IPs. | [Tutorial: Configure network for Azure Stack Edge Pro with GPU](/azure/databox-online/azure-stack-edge-gpu-deploy-configure-network-compute-web-proxy?pivots=single-node.md)</br></br>[Azure Arc Network Requirements](/azure/azure-arc/kubernetes/quickstart-connect-cluster?tabs=azure-cli%2Cazure-cloud)</br></br>[Azure Arc Agent Network Requirements](/azure/architecture/hybrid/arc-hybrid-kubernetes)|
| 6. | Configure a name, DNS name, and (optionally) time settings. </br></br>**Do not** configure an update. | [Tutorial: Configure the device settings for Azure Stack Edge Pro with GPU](../databox-online/azure-stack-edge-gpu-deploy-set-up-device-update-time.md) | | 7. | Configure certificates for your Azure Stack Edge Pro GPU device. After changing the certificates, you may have to reopen the local UI in a new browser window to prevent the old cached certificates from causing problems.| [Tutorial: Configure certificates for your Azure Stack Edge Pro with GPU](/azure/databox-online/azure-stack-edge-gpu-deploy-configure-certificates?pivots=single-node.md) | | 8. | Activate your Azure Stack Edge Pro GPU device. </br></br>**Do not** follow the section to *Deploy Workloads*. | [Tutorial: Activate Azure Stack Edge Pro with GPU](../databox-online/azure-stack-edge-gpu-deploy-activate.md) |
purview Concept Scans And Ingestion https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/concept-scans-and-ingestion.md
Previously updated : 03/13/2023 Last updated : 04/20/2023
If the toggle button is turned on, the new assets under a certain parent will be
> * For any scans created or scheduled before the toggle button is introduced, the toggle state is set to on and can't be changed. For any scans created or scheduled after the toggle button is introduced, the toggle state can't be changed after the scan is saved. You need to create a new scan to change the toggle state. > * When the toggle button is turned off, for storage-type sources like Azure Data Lake Storage Gen 2, it may take up to 4 hours before the [browse by source type](how-to-browse-catalog.md#browse-by-source-type) experience becomes fully available after your scan job is completed.
-### Known limitations
+#### Known limitations
When the toggle button is turned off: * The file entities under a partially selected parent will not be scanned. * If all existing entities under a parent are explicitly selected, the parent will be considered as fully selected and any new assets under the parent will be included when you run the scan again.
To keep deleted files out of your catalog, it's important to run regular scans.
When you enumerate large data stores like Data Lake Storage Gen2, there are multiple ways (including enumeration errors and dropped events) to miss information. A particular scan might miss that a file was created or deleted. So, unless the catalog is certain a file was deleted, it won't delete it from the catalog. This strategy means there can be errors when a file that doesn't exist in the scanned data store still exists in the catalog. In some cases, a data store might need to be scanned two or three times before it catches certain deleted assets. > [!NOTE]
-> Assets that are marked for deletion are deleted after a successful scan. Deleted assets might continue to be visible in your catalog for some time before they are processed and removed.
+> - Assets that are marked for deletion are deleted after a successful scan. Deleted assets might continue to be visible in your catalog for some time before they are processed and removed.
+> - Currently, source deletion detection is not supported for the following sources: Azure Databricks, Cassandra, DB2, Erwin, Google BigQuery, Hive Metastore, Looker, MongoDB, MySQL, Oracle, PostgreSQL, Salesforce, SAP BW, SAP ECC, SAP HANA, SAP S/4HANA, Snowflake, and Teradata. When an object is deleted from the data source, the subsequent scan won't automatically remove the corresponding asset in Microsoft Purview.
## Ingestion
reliability Reliability App Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/reliability-app-service.md
Availability zone support is a property of the App Service plan. The following a
- UK South - West Europe - West US 2
- - West US 3
+ - West US 3
+ - Azure China - China North 3
- To see which regions support availability zones for App Service Environment v3, see [Regions](../app-service/environment/overview.md#regions).
reliability Reliability Guidance Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/reliability-guidance-overview.md
Azure reliability guidance contains the following:
[Azure Data Factory](../data-factory/concepts-data-redundancy.md?bc=%2fazure%2freliability%2fbreadcrumb%2ftoc.json&toc=%2fazure%2freliability%2ftoc.json)| [Azure Database for MySQL - Flexible Server](../mysql/flexible-server/concepts-high-availability.md?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json)| [Azure Database for PostgreSQL - Flexible Server](../postgresql/single-server/concepts-high-availability.md?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json)|
-[Azure Data Manager for Energy](reliability-energy-data-services.md)
+[Azure Data Manager for Energy](reliability-energy-data-services.md) |
[Azure DDoS Protection](../ddos-protection/ddos-faq.yml?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json)| [Azure Disk Encryption](../virtual-machines/disks-redundancy.md?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json)| [Azure DNS - Azure DNS Private Zones](../dns/private-dns-getstarted-portal.md?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json)|
resource-mover Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/resource-mover/overview.md
You can move resources across regions in the Resource Mover hub or from within a
Using Resource Mover, you can currently move the following resources across regions: -- Azure VMs and associated disks
+- Azure VMs and associated disks (Azure Spot VMs are not currently supported)
- Encrypted Azure VMs and associated disks. This includes VMs with Azure disk encryption enabled and Azure VMs using default server-side encryption (both with platform-managed keys and customer-managed keys) - NICs - Availability sets
role-based-access-control Custom Roles https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/custom-roles.md
Previously updated : 04/05/2023 Last updated : 04/20/2023 # Azure custom roles
-If the [Azure built-in roles](built-in-roles.md) don't meet the specific needs of your organization, you can create your own custom roles. Just like built-in roles, you can assign custom roles to users, groups, and service principals at management group (in preview only), subscription, and resource group scopes.
+If the [Azure built-in roles](built-in-roles.md) don't meet the specific needs of your organization, you can create your own custom roles. Just like built-in roles, you can assign custom roles to users, groups, and service principals at management group, subscription, and resource group scopes.
Custom roles can be shared between subscriptions that trust the same Azure AD tenant. There is a limit of **5,000** custom roles per tenant. (For Azure China 21Vianet, the limit is 2,000 custom roles.) Custom roles can be created using the Azure portal, Azure PowerShell, Azure CLI, or the REST API.
route-server Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/route-server/overview.md
For pricing details, see [Azure Route Server pricing](https://azure.microsoft.co
For service level agreement details, see [SLA for Azure Route Server](https://azure.microsoft.com/support/legal/sla/route-server/v1_0/).
-## FAQs
+## FAQ
-For frequently asked questions about Azure Route Server, see [Azure Route Server FAQs](route-server-faq.md).
+For frequently asked questions about Azure Route Server, see [Azure Route Server FAQ](route-server-faq.md).
## Next steps
sap Configure System https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/automation/configure-system.md
description: Define the SAP system properties for the SAP on Azure Deployment Au
Previously updated : 05/03/2022 Last updated : 04/21/2023
# Configure SAP system parameters
-Configuration for the [SAP on Azure Deployment Automation Framework](deployment-framework.md)] happens through parameters files. You provide information about your SAP system properties in a tfvars file, which the automation framework uses for deployment. You can find examples of the variable file in the 'samples/WORKSPACES/SYSTEM' folder.
+Configuration for the [SAP on Azure Deployment Automation Framework](deployment-framework.md) happens through parameter files. You provide information about your SAP system infrastructure in a tfvars file, which the automation framework uses for deployment. You can find examples of the variable file in the 'samples' repository.
The automation supports both creating resources (green field deployment) or using existing resources (brownfield deployment).
To configure this topology, define the database tier values and define `scs_serv
### High Availability
-The Distributed (Highly Available) deployment is similar to the Distributed architecture. In this deployment, the database and/or SAP Central Services can both be configured using a highly available configuration using two virtual machines each with Pacemaker clusters.
+The Distributed (Highly Available) deployment is similar to the Distributed architecture. In this deployment, the database and/or SAP Central Services can be configured for high availability by using two virtual machines each, clustered with Pacemaker on Linux or with Windows Server Failover Clustering on Windows.
To configure this topology, define the database tier values and set `database_high_availability` to true. Also set `scs_server_count = 1`, `scs_high_availability = true`, and `application_server_count >= 1`, as shown in the sketch below.
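The following is a minimal tfvars sketch showing only the high-availability related values described above. It isn't a complete system configuration; a real parameter file also needs the environment, location, SID, image, and sizing values, and the network logical name used here is just an illustrative placeholder.

```terraform
# Sketch only: high-availability settings for a distributed (HA) SAP system
network_logical_name       = "SAP01"   # placeholder, must match your workload zone

# Database tier deployed as a clustered pair of virtual machines
database_high_availability = true

# Central Services deployed as a highly available pair (with ERS)
scs_server_count           = 1
scs_high_availability      = true

# At least one application server
application_server_count   = 2
```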
The database tier defines the infrastructure for the database tier, supported da
> | `database_vm_admin_nic_ips` | Defines the IP addresses for the database servers (admin subnet). | Optional | | > | `database_vm_image` | Defines the Virtual machine image to use, see below. | Optional | | > | `database_vm_authentication_type` | Defines the authentication type (key/password). | Optional | |
-> | `database_no_avset` | Controls if the database virtual machines are deployed without availability sets. | Optional | default is false |
-> | `database_no_ppg` | Controls if the database servers will not be placed in a proximity placement group. | Optional | default is false |
-> | `database_vm_avset_arm_ids` | Defines the existing availability sets Azure resource IDs. | Optional | Primarily used together with ANF pinning|
+> | `database_use_avset` | Controls if the database servers are placed in availability sets. | Optional | default is false |
+> | `database_use_ppg` | Controls if the database servers will be placed in proximity placement groups. | Optional | default is true |
+> | `database_vm_avset_arm_ids` | Defines the existing availability sets Azure resource IDs. | Optional | Primarily used with ANF pinning |
> | `hana_dual_nics` | Controls if the HANA database servers will have dual network interfaces. | Optional | default is true | The Virtual Machine and the operating system image is defined using the following structure:
The Virtual Machine and the operating system image is defined using the followin
```python { os_type="linux"
+ type="marketplace"
source_image_id="" publisher="SUSE" offer="sles-sap-15-sp3"
The application tier defines the infrastructure for the application tier, which
> [!div class="mx-tdCol2BreakAll "]
-> | Variable | Description | Type | Notes |
-> | -- | -- | -| |
-> | `scs_server_count` | Defines the number of SCS servers. | Required | |
-> | `scs_high_availability` | Defines if the Central Services is highly available. | Optional | See [High availability configuration](configure-system.md#high-availability-configuration) |
-> | `scs_instance_number` | The instance number of SCS. | Optional | |
-> | `ers_instance_number` | The instance number of ERS. | Optional | |
-> | `scs_server_sku` | Defines the Virtual machine SKU to use. | Optional | |
-> | `scs_server_image` | Defines the Virtual machine image to use. | Required | |
-> | `scs_server_zones` | Defines the availability zones of the SCS servers. | Optional | |
-> | `scs_server_app_nic_ips` | List of IP addresses for the SCS servers (app subnet). | Optional | |
-> | `scs_server_app_nic_secondary_ips[]` | List of secondary IP addresses for the SCS servers (app subnet). | Optional | |
-> | `scs_server_app_admin_nic_ips` | List of IP addresses for the SCS servers (admin subnet). | Optional | |
-> | `scs_server_loadbalancer_ips` | List of IP addresses for the scs load balancer (app subnet). | Optional | |
-> | `scs_server_no_ppg` | Controls SCS server proximity placement group. | Optional | |
-> | `scs_server_no_avset` | Controls SCS server availability set placement. | Optional | |
-> | `scs_server_tags` | Defines a list of tags to be applied to the SCS servers. | Optional | |
+> | Variable | Description | Type | Notes |
+> | -- | | -| |
+> | `scs_server_count` | Defines the number of SCS servers. | Required | |
+> | `scs_high_availability` | Defines if the Central Services is highly available. | Optional | See [High availability configuration](configure-system.md#high-availability-configuration) |
+> | `scs_instance_number` | The instance number of SCS. | Optional | |
+> | `ers_instance_number` | The instance number of ERS. | Optional | |
+> | `scs_server_sku` | Defines the Virtual machine SKU to use. | Optional | |
+> | `scs_server_image` | Defines the Virtual machine image to use. | Required | |
+> | `scs_server_zones` | Defines the availability zones of the SCS servers. | Optional | |
+> | `scs_server_app_nic_ips` | List of IP addresses for the SCS servers (app subnet). | Optional | |
+> | `scs_server_app_nic_secondary_ips[]` | List of secondary IP addresses for the SCS servers (app subnet). | Optional | |
+> | `scs_server_app_admin_nic_ips` | List of IP addresses for the SCS servers (admin subnet). | Optional | |
+> | `scs_server_loadbalancer_ips` | List of IP addresses for the scs load balancer (app subnet). | Optional | |
+> | `scs_server_use_ppg`                | Controls if the SCS servers are placed in proximity placement groups.      | Optional  |                                                                                           |
+> | `scs_server_use_avset`              | Controls if the SCS servers are placed in availability sets.               | Optional  |                                                                                           |
+> | `scs_server_tags` | Defines a list of tags to be applied to the SCS servers. | Optional | |
### Application server parameters
The application tier defines the infrastructure for the application tier, which
> | `application_server_count` | Defines the number of application servers. | Required | | > | `application_server_sku` | Defines the Virtual machine SKU to use. | Optional | | > | `application_server_image` | Defines the Virtual machine image to use. | Required | |
-> | `application_server_zones` | Defines the availability zones to which the application servers are deployed. | Optional | |
-> | `application_server_app_nic_ips[]` | List of IP addresses for the application servers (app subnet). | Optional | |
-> | `application_server_nic_secondary_ips[]` | List of secondary IP addresses for the application servers (app subnet). | Optional | |
-> | `application_server_app_admin_nic_ips` | List of IP addresses for the application server (admin subnet). | Optional | |
-> | `application_server_no_ppg` | Controls application server proximity placement group. | Optional | |
-> | `application_server_no_avset` | Controls application server availability set placement. | Optional | |
-> | `application_server_tags` | Defines a list of tags to be applied to the application servers. | Optional | |
+> | `application_server_zones` | Defines the availability zones to which the application servers are deployed.| Optional | |
+> | `application_server_app_nic_ips[]` | List of IP addresses for the application servers (app subnet). | Optional | |
+> | `application_server_nic_secondary_ips[]` | List of secondary IP addresses for the application servers (app subnet). | Optional | |
+> | `application_server_app_admin_nic_ips` | List of IP addresses for the application server (admin subnet). | Optional | |
+> | `application_server_use_ppg`             | List of IP addresses for the application server (admin subnet) notwithstanding, controls if application servers are placed in proximity placement groups. | Optional | |
+> | `application_server_use_avset`           | Controls if application servers are placed in availability sets.              | Optional | |
+> | `application_server_tags` | Defines a list of tags to be applied to the application servers. | Optional | |
### Web dispatcher parameters
The application tier defines the infrastructure for the application tier, which
> | `webdispatcher_server_app_nic_ips[]` | List of IP addresses for the web dispatcher server (app/web subnet). | Optional | | > | `webdispatcher_server_nic_secondary_ips[]` | List of secondary IP addresses for the web dispatcher server (app/web subnet). | Optional | | > | `webdispatcher_server_app_admin_nic_ips` | List of IP addresses for the web dispatcher server (admin subnet). | Optional | |
-> | `webdispatcher_server_no_ppg` | Controls web proximity placement group placement. | Optional | |
-> | `webdispatcher_server_no_avset` | Defines web dispatcher availability set placement. | Optional | |
+> | `webdispatcher_server_use_ppg` | Controls if web dispatchers are placed in proximity placement groups. | Optional | |
+> | `webdispatcher_server_use_avset` | Controls if web dispatchers are placed in availability sets. | Optional | |
> | `webdispatcher_server_tags` | Defines a list of tags to be applied to the web dispatcher servers. | Optional | | ## Network parameters
The table below contains the networking parameters.
> | Variable | Description | Type | Notes | > | -- | -- | | - | > | `network_logical_name` | The logical name of the network. | Required | |
-> | | | Optional | |
+> | | | | |
> | `admin_subnet_name` | The name of the 'admin' subnet. | Optional | | > | `admin_subnet_address_prefix` | The address range for the 'admin' subnet. | Mandatory | For green field deployments. | > | `admin_subnet_arm_id` * | The Azure resource identifier for the 'admin' subnet. | Mandatory | For brown field deployments. |
The table below contains the parameters related to the anchor virtual machine.
The Virtual Machine and the operating system image is defined using the following structure: ```python {
- os_type="linux"
- source_image_id=""
- publisher="SUSE"
- offer="sles-sap-15-sp3"
- sku="gen2"
- version="latest"
+ os_type = "linux"
+ type = "marketplace"
+ source_image_id = ""
+ publisher = "SUSE"
+ offer = "sles-sap-15-sp3"
+ sku = "gen2"
+ version = "latest"
} ```
sap Integration Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/integration-get-started.md
Title: Get started with SAP and Azure integration scenarios
+ Title: Get started with SAP and Microsoft integration scenarios
description: Learn about the various integration points in the Microsoft ecosystem for SAP workloads.
-# Get started with SAP and Azure integration scenarios
+# Get started with SAP and Microsoft integration scenarios
[According to SAP over 87% of total global commerce is generated by SAP customers](https://www.sap.com/documents/2017/04/4666ecdd-b67c-0010-82c7-eda71af511fa.html) and more SAP systems are running in the cloud each year. The SAP platform provides a foundation for innovation for many companies and can handle various workloads natively. Explore our integration section further to learn how you can combine the Microsoft Azure ecosystem with your SAP workload to accelerate your business outcomes. Among the scenarios are extensions with Power Platform ("keep the ABAP core clean"), secured APIs with Azure API Management, automated business processes with Logic Apps, enriched experiences with SAP Business Technology Platform, uniform data blending dashboards with the Azure Data Platform and more.
For more information about integration with Azure Data Services, see the followi
- [SAP knowledge center for Azure Data Factory and Synapse](../../data-factory/industry-sap-overview.md) - [Track end-to-end lineage of your SAP data with Microsoft Purview](https://techcommunity.microsoft.com/t5/security-compliance-and-identity/of-kings-amp-queens-and-sap-scans-to-identify-the-right-lineage/ba-p/3268816) - [Replicating SAP data using the CDC connector](../../data-factory/sap-change-data-capture-introduction-architecture.md)
+- [SAP CDC Connector and SLT - Blog series - Part 1](https://techcommunity.microsoft.com/t5/running-sap-applications-on-the/sap-cdc-connector-and-slt-part-1-overview-and-architecture/ba-p/3775190)
- [Replicating SAP data using the OData connector with Synapse Pipelines](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/extracting-sap-data-using-odata-part-1-the-first-extraction/ba-p/2841635) - [Use SAP HANA in Power BI Desktop](/power-bi/desktop-sap-hana) - [DirectQuery and SAP HANA](/power-bi/desktop-directquery-sap-hana)
search Cognitive Search Incremental Indexing Conceptual https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/cognitive-search-incremental-indexing-conceptual.md
Previously updated : 01/31/2023 Last updated : 04/21/2023 # Incremental enrichment and caching in Azure Cognitive Search
If you know that a change to the skill is indeed superficial, you should overrid
Setting this parameter ensures that only updates to the skillset definition are committed and the change isn't evaluated for effects on the existing cache. Use a preview API version, 2020-06-30-Preview or later. ```http
-PUT https://[servicename].search.windows.net/skillsets/[skillset name]?api-version=2020-06-30-Preview
- {
- "disableCacheReprocessingChangeDetection" : true
- }
+PUT https://[servicename].search.windows.net/skillsets/[skillset name]?api-version=2020-06-30-Preview&disableCacheReprocessingChangeDetection=true
+
``` <a name="Bypass-data-source-check"></a> ### Bypass data source validation checks
-Most changes to a data source definition will invalidate the cache. However, for scenarios where you know that a change should not invalidate the cache - such as changing a connection string or rotating the key on the storage account - append the "ignoreResetRequirement" parameter on the data source update. Setting this parameter to true allows the commit to go through, without triggering a reset condition that would result in all objects being rebuilt and populated from scratch.
+Most changes to a data source definition will invalidate the cache. However, for scenarios where you know that a change should not invalidate the cache - such as changing a connection string or rotating the key on the storage account - append the "ignoreResetRequirement" parameter on the [data source update](/rest/api/searchservice/update-data-source). Setting this parameter to true allows the commit to go through, without triggering a reset condition that would result in all objects being rebuilt and populated from scratch.
```http
-PUT https://[search service].search.windows.net/datasources/[data source name]?api-version=2020-06-30-Preview
- {
- "ignoreResetRequirement" : true
- }
+PUT https://[search service].search.windows.net/datasources/[data source name]?api-version=2020-06-30-Preview&ignoreResetRequirement=true
+
``` <a name="Force-skillset-evaluation"></a>
REST API version `2020-06-30-Preview` or later provides incremental enrichment t
+ [Update Data Source](/rest/api/searchservice/update-data-source), when called with a preview API version, provides a new parameter named "ignoreResetRequirement", which should be set to true when your update action should not invalidate the cache. Use "ignoreResetRequirement" sparingly as it could lead to unintended inconsistency in your data that will not be detected easily.
+## Limitations
+
+If you're using the [SharePoint indexer (preview)](search-howto-index-sharepoint-online.md), we don't recommend using the incremental enrichment feature. Conditions can arise during indexing with this preview feature that require resetting the indexer, which invalidates the cache.
+ ## Next steps Incremental enrichment is a powerful feature that extends change tracking to skillsets and AI enrichment. Incremental enrichment enables reuse of existing processed content as you iterate over skillset design. As a next step, enable caching on your indexers. > [!div class="nextstepaction"]
-> [Enable caching for incremental enrichment](search-howto-incremental-index.md)
+> [Enable caching for incremental enrichment](search-howto-incremental-index.md)
search Cognitive Search Skill Textmerger https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/cognitive-search-skill-textmerger.md
Previously updated : 08/12/2021 Last updated : 04/20/2023 # Text Merge cognitive skill
-The **Text Merge** skill consolidates text from a collection of fields into a single field.
+The **Text Merge** skill consolidates text from an array of strings into a single field.
> [!NOTE] > This skill isn't bound to Cognitive Services. It is non-billable and has no Cognitive Services key requirement.
search Search Performance Tips https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-performance-tips.md
Previously updated : 08/25/2022 Last updated : 04/20/2023 # Tips for better performance in Azure Cognitive Search
As this hypothetical scenario illustrates, you can have configurations on lower
An important benefit of added memory is that more of the index can be cached, resulting in lower search latency, and a greater number of queries per second. With this extra power, the administrator may not even need to increase the replica count and could potentially pay less than by staying on the S1 service.
+### Tip: Consider alternatives to regular expression queries
+
+[Regular expression queries](query-lucene-syntax.md#bkmk_regex), or regex, can be particularly expensive. While they're useful for complex searches, they can require significant processing power, especially when the expression itself is complex or the volume of data being searched is large, which results in high search latency. To reduce latency, simplify the regular expression or break a complex query into smaller, more manageable queries.
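For illustration only, the following requests assume a hypothetical index named `hotels-sample-index` and use the REST API. The first query runs a regular expression under the full Lucene parser; the second expresses a similar intent as plain terms, which is typically far cheaper to execute.

```http
### Regular expression query (full Lucene syntax, potentially expensive)
GET https://[service name].search.windows.net/indexes/hotels-sample-index/docs?api-version=2020-06-30&queryType=full&search=/gr(a|e)y/

### Simpler alternative: plain terms matched with the default (any) search mode
GET https://[service name].search.windows.net/indexes/hotels-sample-index/docs?api-version=2020-06-30&search=gray grey
```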
++ ## Next steps Review these other articles related to service performance:
search Search Query Fuzzy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-query-fuzzy.md
Previously updated : 06/22/2022 Last updated : 04/20/2023 # Fuzzy search to correct misspellings and typos
A match succeeds if the discrepancies are limited to two or fewer edits, where a
In Azure Cognitive Search:
-+ Fuzzy query applies to whole terms, but you can support phrases through AND constructions. For example, "Unviersty~ of~ "Wshington~" would match on "University of Washington".
++ Fuzzy query applies to whole terms. Phrases aren't supported directly but you can specify a fuzzy match on each term of a multi-part phrase through AND constructions. For example, `search=dr~ AND cleanin~`. This query expression finds matches on "dry cleaning". + The default distance of an edit is 2. A value of `~0` signifies no expansion (only the exact term is considered a match), but you could specify `~1` for one degree of difference, or one edit.
The point of this expanded example is to illustrate the clarity that hit highlig
+ [How full text search works in Azure Cognitive Search (query parsing architecture)](search-lucene-query-architecture.md) + [Search explorer](search-explorer.md) + [How to query in .NET](./search-get-started-dotnet.md)
-+ [How to query in REST](./search-get-started-powershell.md)
++ [How to query in REST](./search-get-started-powershell.md)
security Azure CA Details https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/azure-CA-details.md
description: Certificate Authority details for Azure services that utilize x509
- Previously updated : 02/07/2023-+ Last updated : 04/21/2023
# Azure Certificate Authority details
-This article provides the details of the root and subordinate Certificate Authorities (CAs) utilized by Azure. The scope includes government and national clouds. The minimum requirements for public key encryption and signature algorithms as well as links to certificate downloads and revocation lists are provided below the CA details tables.
+This article provides the details of the root and subordinate Certificate Authorities (CAs) utilized by Azure. The scope includes government and national clouds. The minimum requirements for public key encryption and signature algorithms, links to certificate downloads and revocation lists, and information about key concepts are provided below the CA details tables. The host names for the URIs that should be added to your firewall allowlists are also provided.
+
+## Certificate Authority details
+
+Any entity trying to access Azure Active Directory (Azure AD) identity services via the TLS/SSL protocols will be presented with certificates from the CAs listed in this article. Different services may use different root or intermediate CAs. The following root and subordinate CAs are relevant to entities that use [certificate pinning](certificate-pinning.md).
-Looking for CA details specific to Azure Active Directory? See the [Certificate authorities used by Azure Active Directory](../../active-directory/fundamentals/certificate-authorities.md) article.
**How to read the certificate details:** - The Serial Number (top string in the table) contains the hexadecimal value of the certificate serial number.-- The Thumbprint (bottom string in the table) is the SHA-1 thumbprint.-- Links to download the Privacy Enhanced Mail (PEM) and Distinguished Encoding Rules (DER) are the last cell in the table.-
-## Root Certificate Authorities
-
-| Certificate Authority | Serial Number /<br>Thumbprint | Download |
-|- |- |- |
-| Baltimore CyberTrust Root | 0x20000b9<br>D4DE20D05E66FC53FE1A50882C78DB2852CAE474 | [PEM](https://crt.sh/?d=76) |
-| DigiCert Global Root CA | 0x083be056904246b1a1756ac95991c74a<br>A8985D3A65E5E5C4B2D7D66D40C6DD2FB19C5436 | [PEM](https://crt.sh/?d=853428) |
-| DigiCert Global Root G2 | 0x033af1e6a711a9a0bb2864b11d09fae5<br>DF3C24F9BFD666761B268073FE06D1CC8D4F82A4 | [PEM](https://crt.sh/?d=8656329) |
-| DigiCert Global Root G3 | 0x055556bcf25ea43535c3a40fd5ab4572<br>7E04DE896A3E666D00E687D33FFAD93BE83D349E | [PEM](https://crt.sh/?d=8568700) |
-| Microsoft ECC Root Certificate Authority 2017 | 0x66f23daf87de8bb14aea0c573101c2ec<br>999A64C37FF47D9FAB95F14769891460EEC4C3C5 | [PEM](https://crt.sh/?d=2565145421) |
-| Microsoft RSA Root Certificate Authority 2017 | 0x1ed397095fd8b4b347701eaabe7f45b3<br>73A5E64A3BFF8316FF0EDCCC618A906E4EAE4D74 | [PEM](https://crt.sh/?d=2565151295) |
-
-## Subordinate Certificate Authorities
-
-| Certificate Authority | Serial Number<br>Thumbprint | Downloads |
-|- |- |- |
-| DigiCert Basic RSA CN CA G2 | 0x02f7e1f982bad009aff47dc95741b2f6<br>4D1FA5D1FB1AC3917C08E43F65015E6AEA571179 | [PEM](https://crt.sh/?d=2545289014) |
-| DigiCert Cloud Services CA-1 | 0x019ec1c6bd3f597bb20c3338e551d877<br>81B68D6CD2F221F8F534E677523BB236BBA1DC56 | [PEM](https://crt.sh/?d=12624881) |
-| DigiCert SHA2 Secure Server CA | 0x02742eaa17ca8e21c717bb1ffcfd0ca0<br>626D44E704D1CEABE3BF0D53397464AC8080142C | [PEM](https://crt.sh/?d=3422153451) |
-| DigiCert TLS Hybrid ECC SHA384 2020 CA1 | 0x0a275fe704d6eecb23d5cd5b4b1a4e04<br>51E39A8BDB08878C52D6186588A0FA266A69CF28 | [PEM](https://crt.sh/?d=3422153452) |
-| DigiCert TLS RSA SHA256 2020 CA1 | 0x06d8d904d5584346f68a2fa754227ec4<br>1C58A3A8518E8759BF075B76B750D4F2DF264FCD | [PEM](https://crt.sh/?d=4385364571) |
-| GeoTrust Global TLS RSA4096 SHA256 2022 CA1 | 0x0f622f6f21c2ff5d521f723a1d47d62d<br>7E6DB7B7584D8CF2003E0931E6CFC41A3A62D3DF | [PEM](https://crt.sh/?d=6670931375)|
-| GeoTrust TLS DV RSA Mixed SHA256 2020 CA-1 | 0x0c08966535b942a9735265e4f97540bc<br>2F7AA2D86056A8775796F798C481A079E538E004 | [PEM](https://crt.sh/?d=3112858728)|
-| Microsoft Azure TLS Issuing CA 01 | 0x0aafa6c5ca63c45141ea3be1f7c75317<br>2F2877C5D778C31E0F29C7E371DF5471BD673173 | [PEM](https://crt.sh/?d=3163654574) |
-| Microsoft Azure TLS Issuing CA 01 | 0x1dbe9496f3db8b8de700000000001d<br>B9ED88EB05C15C79639493016200FDAB08137AF3 | [PEM](https://crt.sh/?d=2616326024) |
-| Microsoft Azure TLS Issuing CA 02 | 0x0c6ae97cced599838690a00a9ea53214<br>E7EEA674CA718E3BEFD90858E09F8372AD0AE2AA | [PEM](https://crt.sh/?d=3163546037) |
-| Microsoft Azure TLS Issuing CA 02 | 0x330000001ec6749f058517b4d000000000001e<br>C5FB956A0E7672E9857B402008E7CCAD031F9B08 | [PEM](https://crt.sh/?d=2616326032) |
-| Microsoft Azure TLS Issuing CA 05 | 0x0d7bede97d8209967a52631b8bdd18bd<br>6C3AF02E7F269AA73AFD0EFF2A88A4A1F04ED1E5 | [PEM](https://crt.sh/?d=3163600408) |
-| Microsoft Azure TLS Issuing CA 05 | 0x330000001f9f1fa2043bc28db900000000001f<br>56F1CA470BB94E274B516A330494C792C419CF87 | [PEM](https://crt.sh/?d=2616326057) |
-| Microsoft Azure TLS Issuing CA 06 | 0x02e79171fb8021e93fe2d983834c50c0<br>30E01761AB97E59A06B41EF20AF6F2DE7EF4F7B0 | [PEM](https://crt.sh/?d=3163654575) |
-| Microsoft Azure TLS Issuing CA 06 | 0x3300000020a2f1491a37fbd31f000000000020<br>8F1FD57F27C828D7BE29743B4D02CD7E6E5F43E6 | [PEM](https://crt.sh/?d=2616330106) |
-| Microsoft Azure ECC TLS Issuing CA 01 | 0x09dc42a5f574ff3a389ee06d5d4de440<br>92503D0D74A7D3708197B6EE13082D52117A6AB0 | [PEM](https://crt.sh/?d=3232541596) |
-| Microsoft Azure ECC TLS Issuing CA 01 | 0x330000001aa9564f44321c54b900000000001a<br>CDA57423EC5E7192901CA1BF6169DBE48E8D1268 | [PEM](https://crt.sh/?d=2616305805) |
-| Microsoft Azure ECC TLS Issuing CA 02 | 0x0e8dbe5ea610e6cbb569c736f6d7004b<br>1E981CCDDC69102A45C6693EE84389C3CF2329F1 | [PEM](https://crt.sh/?d=3232541597) |
-| Microsoft Azure ECC TLS Issuing CA 02 | 0x330000001b498d6736ed5612c200000000001b<br>489FF5765030EB28342477693EB183A4DED4D2A6 | [PEM](https://crt.sh/?d=2616326233) |
-| Microsoft Azure ECC TLS Issuing CA 05 | 0x0ce59c30fd7a83532e2d0146b332f965<br>C6363570AF8303CDF31C1D5AD81E19DBFE172531 | [PEM](https://crt.sh/?d=3232541594) |
-| Microsoft Azure ECC TLS Issuing CA 05 | 0x330000001cc0d2a3cd78cf2c1000000000001c<br>4C15BC8D7AA5089A84F2AC4750F040D064040CD4 | [PEM](https://crt.sh/?d=2616326161) |
-| Microsoft Azure ECC TLS Issuing CA 06 | 0x066e79cd7624c63130c77abeb6a8bb94<br>7365ADAEDFEA4909C1BAADBAB68719AD0C381163 | [PEM](https://crt.sh/?d=3232541595) |
-| Microsoft Azure ECC TLS Issuing CA 06 | 0x330000001d0913c309da3f05a600000000001d<br>DFEB65E575D03D0CC59FD60066C6D39421E65483 | [PEM](https://crt.sh/?d=2616326228) |
-| Microsoft RSA TLS CA 01 | 0x0f14965f202069994fd5c7ac788941e2<br>703D7A8F0EBF55AAA59F98EAF4A206004EB2516A | [PEM](https://crt.sh/?d=3124375355) |
-| Microsoft RSA TLS CA 02 | 0x0fa74722c53d88c80f589efb1f9d4a3a<br>B0C2D2D13CDD56CDAA6AB6E2C04440BE4A429C75 | [PEM](https://crt.sh/?d=3124375356) |
-| Microsoft RSA TLS Issuing AOC CA 01 |330000002ffaf06f6697e2469c00000000002f<br>4697fdbed95739b457b347056f8f16a975baf8ee | [PEM](https://crt.sh/?d=4789678141) |
-| Microsoft RSA TLS Issuing AOC CA 02 |3300000030c756cc88f5c1e7eb000000000030<br>90ed2e9cb40d0cb49a20651033086b1ea2f76e0e | [PEM](https://crt.sh/?d=4814787092) |
-| Microsoft RSA TLS Issuing EOC CA 01 |33000000310c4914b18c8f339a000000000031<br>a04d3750debfccf1259d553dbec33162c6b42737 | [PEM](https://crt.sh/?d=4814787098) |
-| Microsoft RSA TLS Issuing EOC CA 02 |3300000032444d7521341496a9000000000032<br>697c6404399cc4e7bb3c0d4a8328b71dd3205563 | [PEM](https://crt.sh/?d=4814787087) |
-| Microsoft ECC TLS Issuing AOC CA 01 |33000000282bfd23e7d1add707000000000028<br>30ab5c33eb4b77d4cbff00a11ee0a7507d9dd316 | [PEM](https://crt.sh/?d=4789656467) |
-| Microsoft ECC TLS Issuing AOC CA 02 |33000000290f8a6222ef6a5695000000000029<br>3709cd92105d074349d00ea8327f7d5303d729c8 | [PEM](https://crt.sh/?d=4814787086) |
-| Microsoft ECC TLS Issuing EOC CA 01 |330000002a2d006485fdacbfeb00000000002a<br>5fa13b879b2ad1b12e69d476e6cad90d01013b46 | [PEM](https://crt.sh/?d=4814787088) |
-| Microsoft ECC TLS Issuing EOC CA 02 |330000002be6902838672b667900000000002b<br>58a1d8b1056571d32be6a7c77ed27f73081d6e7a | [PEM](https://crt.sh/?d=4814787085) |
+- The Thumbprint (bottom string in the table) is the SHA1 thumbprint.
+- CAs listed in italics are the most recently added CAs.
+
+# [Root and Subordinate CAs list](#tab/root-and-subordinate-cas-list)
+
+### Root Certificate Authorities
+
+| Certificate Authority | Serial Number /<br>Thumbprint |
+|- |- |
+| [Baltimore CyberTrust Root](https://cacerts.digicert.com/BaltimoreCyberTrustRoot.crt) | 0x20000b9<br>D4DE20D05E66FC53FE1A50882C78DB2852CAE474 |
+| [DigiCert Global Root CA](https://cacerts.digicert.com/DigiCertGlobalRootCA.crt) | 0x083be056904246b1a1756ac95991c74a<br>A8985D3A65E5E5C4B2D7D66D40C6DD2FB19C5436 |
+| [DigiCert Global Root G2](https://cacerts.digicert.com/DigiCertGlobalRootG2.crt) | 0x033af1e6a711a9a0bb2864b11d09fae5<br>DF3C24F9BFD666761B268073FE06D1CC8D4F82A4 |
+| [DigiCert Global Root G3](https://cacerts.digicert.com/DigiCertGlobalRootG3.crt) | 0x055556bcf25ea43535c3a40fd5ab4572<br>7E04DE896A3E666D00E687D33FFAD93BE83D349E |
+| [Microsoft ECC Root Certificate Authority 2017](https://www.microsoft.com/pkiops/certs/Microsoft%20ECC%20Root%20Certificate%20Authority%202017.crt) | 0x66f23daf87de8bb14aea0c573101c2ec<br>999A64C37FF47D9FAB95F14769891460EEC4C3C5 |
+| [Microsoft RSA Root Certificate Authority 2017](https://www.microsoft.com/pkiops/certs/archived/Microsoft%20RSA%20Root%20Certificate%20Authority%202017.crt) | 0x1ed397095fd8b4b347701eaabe7f45b3<br>73A5E64A3BFF8316FF0EDCCC618A906E4EAE4D74 |
+
+### Subordinate Certificate Authorities
+
+| Certificate Authority | Serial Number<br>Thumbprint |
+|- |- |
+| [DigiCert Basic RSA CN CA G2](https://crt.sh/?d=2545289014) | 0x02f7e1f982bad009aff47dc95741b2f6<br>4D1FA5D1FB1AC3917C08E43F65015E6AEA571179 |
+| [DigiCert Cloud Services CA-1](https://crt.sh/?d=12624881) | 0x019ec1c6bd3f597bb20c3338e551d877<br>81B68D6CD2F221F8F534E677523BB236BBA1DC56 |
+| [DigiCert SHA2 Secure Server CA](https://crt.sh/?d=3422153451) | 0x02742eaa17ca8e21c717bb1ffcfd0ca0<br>626D44E704D1CEABE3BF0D53397464AC8080142C |
+| [DigiCert TLS Hybrid ECC SHA384 2020 CA1](https://crt.sh/?d=3422153452) | 0x0a275fe704d6eecb23d5cd5b4b1a4e04<br>51E39A8BDB08878C52D6186588A0FA266A69CF28 |
+| [DigiCert TLS RSA SHA256 2020 CA1](https://crt.sh/?d=4385364571) | 0x06d8d904d5584346f68a2fa754227ec4<br>1C58A3A8518E8759BF075B76B750D4F2DF264FCD |
+| [GeoTrust Global TLS RSA4096 SHA256 2022 CA1](https://crt.sh/?d=6670931375) | 0x0f622f6f21c2ff5d521f723a1d47d62d<br>7E6DB7B7584D8CF2003E0931E6CFC41A3A62D3DF |
+| [GeoTrust TLS DV RSA Mixed SHA256 2020 CA-1](https://crt.sh/?d=3112858728) | 0x0c08966535b942a9735265e4f97540bc<br>2F7AA2D86056A8775796F798C481A079E538E004 |
+| [Microsoft Azure ECC TLS Issuing CA 01](https://www.microsoft.com/pki/certs/Microsoft%20Azure%20ECC%20TLS%20Issuing%20CA%2001.cer) | 0x09dc42a5f574ff3a389ee06d5d4de440<br>92503D0D74A7D3708197B6EE13082D52117A6AB0 |
+| [Microsoft Azure ECC TLS Issuing CA 01](https://crt.sh/?d=2616305805) | 0x330000001aa9564f44321c54b900000000001a<br>CDA57423EC5E7192901CA1BF6169DBE48E8D1268 |
+| [Microsoft Azure ECC TLS Issuing CA 02](https://www.microsoft.com/pki/certs/Microsoft%20Azure%20ECC%20TLS%20Issuing%20CA%2002.cer) | 0x0e8dbe5ea610e6cbb569c736f6d7004b<br>1E981CCDDC69102A45C6693EE84389C3CF2329F1 |
+| [Microsoft Azure ECC TLS Issuing CA 02](https://crt.sh/?d=2616326233) | 0x330000001b498d6736ed5612c200000000001b<br>489FF5765030EB28342477693EB183A4DED4D2A6 |
+| [Microsoft Azure ECC TLS Issuing CA 05](https://www.microsoft.com/pkiops/certs/Microsoft%20Azure%20ECC%20TLS%20Issuing%20CA%2005.cer) | 0x0ce59c30fd7a83532e2d0146b332f965<br>C6363570AF8303CDF31C1D5AD81E19DBFE172531 |
+| [Microsoft Azure ECC TLS Issuing CA 05](https://crt.sh/?d=2616326161) | 0x330000001cc0d2a3cd78cf2c1000000000001c<br>4C15BC8D7AA5089A84F2AC4750F040D064040CD4 |
+| [Microsoft Azure ECC TLS Issuing CA 06](https://www.microsoft.com/pkiops/certs/Microsoft%20Azure%20ECC%20TLS%20Issuing%20CA%2006.cer) | 0x066e79cd7624c63130c77abeb6a8bb94<br>7365ADAEDFEA4909C1BAADBAB68719AD0C381163 |
+| [Microsoft Azure ECC TLS Issuing CA 06](https://crt.sh/?d=2616326228) | 0x330000001d0913c309da3f05a600000000001d<br>DFEB65E575D03D0CC59FD60066C6D39421E65483 |
+| [Microsoft Azure TLS Issuing CA 01](https://www.microsoft.com/pki/certs/Microsoft%20Azure%20TLS%20Issuing%20CA%2001.cer) | 0x0aafa6c5ca63c45141ea3be1f7c75317<br>2F2877C5D778C31E0F29C7E371DF5471BD673173 |
+| [Microsoft Azure TLS Issuing CA 01](https://crt.sh/?d=2616326024) | 0x1dbe9496f3db8b8de700000000001d<br>B9ED88EB05C15C79639493016200FDAB08137AF3 |
+| [Microsoft Azure TLS Issuing CA 02](https://www.microsoft.com/pki/certs/Microsoft%20Azure%20TLS%20Issuing%20CA%2002.cer) | 0x0c6ae97cced599838690a00a9ea53214<br>E7EEA674CA718E3BEFD90858E09F8372AD0AE2AA |
+| [Microsoft Azure TLS Issuing CA 02](https://crt.sh/?d=2616326032) | 0x330000001ec6749f058517b4d000000000001e<br>C5FB956A0E7672E9857B402008E7CCAD031F9B08 |
+| [Microsoft Azure TLS Issuing CA 05](https://www.microsoft.com/pkiops/certs/Microsoft%20Azure%20TLS%20Issuing%20CA%2005.cer) | 0x0d7bede97d8209967a52631b8bdd18bd<br>6C3AF02E7F269AA73AFD0EFF2A88A4A1F04ED1E5 |
+| [Microsoft Azure TLS Issuing CA 05](https://crt.sh/?d=2616326057) | 0x330000001f9f1fa2043bc28db900000000001f<br>56F1CA470BB94E274B516A330494C792C419CF87 |
+| [Microsoft Azure TLS Issuing CA 06](https://www.microsoft.com/pkiops/certs/Microsoft%20Azure%20TLS%20Issuing%20CA%2006.cer) | 0x02e79171fb8021e93fe2d983834c50c0<br>30E01761AB97E59A06B41EF20AF6F2DE7EF4F7B0 |
+| [Microsoft Azure TLS Issuing CA 06](https://crt.sh/?d=2616330106) | 0x3300000020a2f1491a37fbd31f000000000020<br>8F1FD57F27C828D7BE29743B4D02CD7E6E5F43E6 |
+| [*Microsoft ECC TLS Issuing AOC CA 01*](https://crt.sh/?d=4789656467) | 0x33000000282bfd23e7d1add707000000000028<br>30ab5c33eb4b77d4cbff00a11ee0a7507d9dd316 |
+| [*Microsoft ECC TLS Issuing AOC CA 02*](https://crt.sh/?d=4814787086) | 0x33000000290f8a6222ef6a5695000000000029<br>3709cd92105d074349d00ea8327f7d5303d729c8 |
+| [*Microsoft ECC TLS Issuing EOC CA 01*](https://crt.sh/?d=4814787088) | 0x330000002a2d006485fdacbfeb00000000002a<br>5fa13b879b2ad1b12e69d476e6cad90d01013b46 |
+| [*Microsoft ECC TLS Issuing EOC CA 02*](https://crt.sh/?d=4814787085) | 0x330000002be6902838672b667900000000002b<br>58a1d8b1056571d32be6a7c77ed27f73081d6e7a |
+| [Microsoft RSA TLS CA 01](https://crt.sh/?d=3124375355) | 0x0f14965f202069994fd5c7ac788941e2<br>703D7A8F0EBF55AAA59F98EAF4A206004EB2516A |
+| [Microsoft RSA TLS CA 02](https://crt.sh/?d=3124375356) | 0x0fa74722c53d88c80f589efb1f9d4a3a<br>B0C2D2D13CDD56CDAA6AB6E2C04440BE4A429C75 |
+| [*Microsoft RSA TLS Issuing AOC CA 01*](https://crt.sh/?d=4789678141) | 0x330000002ffaf06f6697e2469c00000000002f<br>4697fdbed95739b457b347056f8f16a975baf8ee |
+| [*Microsoft RSA TLS Issuing AOC CA 02*](https://crt.sh/?d=4814787092) | 0x3300000030c756cc88f5c1e7eb000000000030<br>90ed2e9cb40d0cb49a20651033086b1ea2f76e0e |
+| [*Microsoft RSA TLS Issuing EOC CA 01*](https://crt.sh/?d=4814787098) | 0x33000000310c4914b18c8f339a000000000031<br>a04d3750debfccf1259d553dbec33162c6b42737 |
+| [*Microsoft RSA TLS Issuing EOC CA 02*](https://crt.sh/?d=4814787087) | 0x3300000032444d7521341496a9000000000032<br>697c6404399cc4e7bb3c0d4a8328b71dd3205563 |
+++
+# [Certificate Authority chains](#tab/certificate-authority-chains)
+
+### Root and subordinate certificate authority chains
+
+| Certificate Authority | Serial Number<br>Thumbprint |
+|- |- |
+| [**Baltimore CyberTrust Root**](https://crt.sh/?d=76) | 020000b9<br>d4de20d05e66fc53fe1a50882c78db2852cae474 |
+| └ [Microsoft RSA TLS CA 01](https://crt.sh/?d=3124375355) | 0x0f14965f202069994fd5c7ac788941e2<br>703D7A8F0EBF55AAA59F98EAF4A206004EB2516A |
+| └ [Microsoft RSA TLS CA 02](https://crt.sh/?d=3124375356) | 0x0fa74722c53d88c80f589efb1f9d4a3a<br>B0C2D2D13CDD56CDAA6AB6E2C04440BE4A429C75 |
+| [**DigiCert Global Root CA**](https://crt.sh/?d=853428) | 0x083be056904246b1a1756ac95991c74a<br>A8985D3A65E5E5C4B2D7D66D40C6DD2FB19C5436 |
+| └ [DigiCert Basic RSA CN CA G2](https://crt.sh/?d=2545289014) | 0x02f7e1f982bad009aff47dc95741b2f6<br>4D1FA5D1FB1AC3917C08E43F65015E6AEA571179 |
+| └ [DigiCert Cloud Services CA-1](https://crt.sh/?d=12624881) | 0x019ec1c6bd3f597bb20c3338e551d877<br>81B68D6CD2F221F8F534E677523BB236BBA1DC56 |
+| └ [DigiCert SHA2 Secure Server CA](https://crt.sh/?d=3422153451) | 0x02742eaa17ca8e21c717bb1ffcfd0ca0<br>626D44E704D1CEABE3BF0D53397464AC8080142C |
+| └ [DigiCert TLS Hybrid ECC SHA384 2020 CA1](https://crt.sh/?d=3422153452) | 0x0a275fe704d6eecb23d5cd5b4b1a4e04<br>51E39A8BDB08878C52D6186588A0FA266A69CF28 |
+| └ [DigiCert TLS RSA SHA256 2020 CA1](https://crt.sh/?d=4385364571) | 0x06d8d904d5584346f68a2fa754227ec4<br>1C58A3A8518E8759BF075B76B750D4F2DF264FCD |
+| └ [GeoTrust Global TLS RSA4096 SHA256 2022 CA1](https://crt.sh/?d=6670931375) | 0x0f622f6f21c2ff5d521f723a1d47d62d<br>7E6DB7B7584D8CF2003E0931E6CFC41A3A62D3DF |
+| └ [GeoTrust TLS DV RSA Mixed SHA256 2020 CA-1](https://crt.sh/?d=3112858728) | 0x0c08966535b942a9735265e4f97540bc<br>2F7AA2D86056A8775796F798C481A079E538E004 |
+| [**DigiCert Global Root G2**](https://crt.sh/?d=8656329) | 0x033af1e6a711a9a0bb2864b11d09fae5<br>DF3C24F9BFD666761B268073FE06D1CC8D4F82A4 |
+| └ [Microsoft Azure TLS Issuing CA 01](https://www.microsoft.com/pki/certs/Microsoft%20Azure%20TLS%20Issuing%20CA%2001.cer) | 0x0aafa6c5ca63c45141ea3be1f7c75317<br>2F2877C5D778C31E0F29C7E371DF5471BD673173 |
+| └ [Microsoft Azure TLS Issuing CA 02](https://www.microsoft.com/pki/certs/Microsoft%20Azure%20TLS%20Issuing%20CA%2002.cer) | 0x0c6ae97cced599838690a00a9ea53214<br>E7EEA674CA718E3BEFD90858E09F8372AD0AE2AA |
+| └ [Microsoft Azure TLS Issuing CA 05](https://www.microsoft.com/pkiops/certs/Microsoft%20Azure%20TLS%20Issuing%20CA%2005.cer) | 0x0d7bede97d8209967a52631b8bdd18bd<br>6C3AF02E7F269AA73AFD0EFF2A88A4A1F04ED1E5 |
+| └ [Microsoft Azure TLS Issuing CA 06](https://www.microsoft.com/pkiops/certs/Microsoft%20Azure%20TLS%20Issuing%20CA%2006.cer) | 0x02e79171fb8021e93fe2d983834c50c0<br>30E01761AB97E59A06B41EF20AF6F2DE7EF4F7B0 |
+| [**DigiCert Global Root G3**](https://crt.sh/?d=8568700) | 0x055556bcf25ea43535c3a40fd5ab4572<br>7E04DE896A3E666D00E687D33FFAD93BE83D349E |
+| └ [Microsoft Azure ECC TLS Issuing CA 01](https://www.microsoft.com/pki/certs/Microsoft%20Azure%20ECC%20TLS%20Issuing%20CA%2001.cer) | 0x09dc42a5f574ff3a389ee06d5d4de440<br>92503D0D74A7D3708197B6EE13082D52117A6AB0 |
+| └ [Microsoft Azure ECC TLS Issuing CA 02](https://www.microsoft.com/pki/certs/Microsoft%20Azure%20ECC%20TLS%20Issuing%20CA%2002.cer) | 0x0e8dbe5ea610e6cbb569c736f6d7004b<br>1E981CCDDC69102A45C6693EE84389C3CF2329F1 |
+| └ [Microsoft Azure ECC TLS Issuing CA 05](https://www.microsoft.com/pkiops/certs/Microsoft%20Azure%20ECC%20TLS%20Issuing%20CA%2005.cer) | 0x0ce59c30fd7a83532e2d0146b332f965<br>C6363570AF8303CDF31C1D5AD81E19DBFE172531 |
+| └ [Microsoft Azure ECC TLS Issuing CA 06](https://www.microsoft.com/pkiops/certs/Microsoft%20Azure%20ECC%20TLS%20Issuing%20CA%2006.cer) | 0x066e79cd7624c63130c77abeb6a8bb94<br>7365ADAEDFEA4909C1BAADBAB68719AD0C381163 |
+| [**Microsoft ECC Root Certificate Authority 2017**](https://crt.sh/?d=2565145421) | 0x66f23daf87de8bb14aea0c573101c2ec<br>999A64C37FF47D9FAB95F14769891460EEC4C3C5 |
+| └ [Microsoft Azure ECC TLS Issuing CA 01](https://crt.sh/?d=2616305805) | 0x330000001aa9564f44321c54b900000000001a<br>CDA57423EC5E7192901CA1BF6169DBE48E8D1268 |
+| └ [Microsoft Azure ECC TLS Issuing CA 02](https://crt.sh/?d=2616326233) | 0x330000001b498d6736ed5612c200000000001b<br>489FF5765030EB28342477693EB183A4DED4D2A6 |
+| └ [Microsoft Azure ECC TLS Issuing CA 05](https://crt.sh/?d=2616326161) | 0x330000001cc0d2a3cd78cf2c1000000000001c<br>4C15BC8D7AA5089A84F2AC4750F040D064040CD4 |
+| └ [Microsoft Azure ECC TLS Issuing CA 06](https://crt.sh/?d=2616326228) | 0x330000001d0913c309da3f05a600000000001d<br>DFEB65E575D03D0CC59FD60066C6D39421E65483 |
+| └ [*Microsoft ECC TLS Issuing AOC CA 01*](https://crt.sh/?d=4789656467) | 33000000282bfd23e7d1add707000000000028<br>30ab5c33eb4b77d4cbff00a11ee0a7507d9dd316 |
+| └ [*Microsoft ECC TLS Issuing AOC CA 02*](https://crt.sh/?d=4814787086) | 33000000290f8a6222ef6a5695000000000029<br>3709cd92105d074349d00ea8327f7d5303d729c8 |
+| └ [*Microsoft ECC TLS Issuing EOC CA 01*](https://crt.sh/?d=4814787088) | 330000002a2d006485fdacbfeb00000000002a<br>5fa13b879b2ad1b12e69d476e6cad90d01013b46 |
+| └ [*Microsoft ECC TLS Issuing EOC CA 02*](https://crt.sh/?d=4814787085) | 330000002be6902838672b667900000000002b<br>58a1d8b1056571d32be6a7c77ed27f73081d6e7a |
+| [**Microsoft RSA Root Certificate Authority 2017**](https://crt.sh/?id=2565151295) | 0x1ed397095fd8b4b347701eaabe7f45b3<br>73A5E64A3BFF8316FF0EDCCC618A906E4EAE4D74 |
+| └ [Microsoft Azure TLS Issuing CA 01](https://crt.sh/?d=2616326024) | 0x1dbe9496f3db8b8de700000000001d<br>B9ED88EB05C15C79639493016200FDAB08137AF3 |
+| └ [Microsoft Azure TLS Issuing CA 02](https://crt.sh/?d=2616326032) | 0x330000001ec6749f058517b4d000000000001e<br>C5FB956A0E7672E9857B402008E7CCAD031F9B08 |
+| └ [Microsoft Azure TLS Issuing CA 05](https://crt.sh/?d=2616326057) | 0x330000001f9f1fa2043bc28db900000000001f<br>56F1CA470BB94E274B516A330494C792C419CF87 |
+| └ [Microsoft Azure TLS Issuing CA 06](https://crt.sh/?d=2616330106) | 0x3300000020a2f1491a37fbd31f000000000020<br>8F1FD57F27C828D7BE29743B4D02CD7E6E5F43E6 |
+| └ [*Microsoft RSA TLS Issuing AOC CA 01*](https://crt.sh/?d=4789678141) | 330000002ffaf06f6697e2469c00000000002f<br>4697fdbed95739b457b347056f8f16a975baf8ee |
+| └ [*Microsoft RSA TLS Issuing AOC CA 02*](https://crt.sh/?d=4814787092) | 3300000030c756cc88f5c1e7eb000000000030<br>90ed2e9cb40d0cb49a20651033086b1ea2f76e0e |
+| └ [*Microsoft RSA TLS Issuing EOC CA 01*](https://crt.sh/?d=4814787098) | 33000000310c4914b18c8f339a000000000031<br>a04d3750debfccf1259d553dbec33162c6b42737 |
+| └ [*Microsoft RSA TLS Issuing EOC CA 02*](https://crt.sh/?d=4814787087) | 3300000032444d7521341496a9000000000032<br>697c6404399cc4e7bb3c0d4a8328b71dd3205563 |
++ ## Client compatibility for public PKIs
+The CAs used by Azure are compatible with the following OS versions:
+ | Windows | Firefox | iOS | macOS | Android | Java | |:--:|:--:|:--:|:--:|:--:|:--:| | Windows XP SP3+ | Firefox 32+ | iOS 7+ | OS X Mavericks (10.9)+ | Android SDK 5.x+ | Java JRE 1.8.0_101+ |
+Review the following action steps when CAs expire or change:
+
+- Update to a supported version of the required OS.
+- If you can't change the OS version, you may need to manually update the trusted root store to include the new CAs. Refer to documentation provided by the manufacturer.
+- If your scenario includes disabling the trusted root store or running the Windows client in disconnected environments, ensure that all root CAs are included in the Trusted Root CA store and all sub CAs listed in this article are included in the Intermediate CA store.
+- Many distributions of **Linux** require you to add CAs to /etc/ssl/certs. Refer to the distribution's documentation; a Debian/Ubuntu example is sketched after this list.
+- Ensure that the **Java** key store contains the CAs listed in this article. For more information, see the [Java applications](#java-applications) section of this article.
+- If your application explicitly specifies a list of acceptable CAs, check to see if you need to update the pinned certificates when CAs change or expire. For more information, see [Certificate pinning](certificate-pinning.md).
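For example, on Debian and Ubuntu based distributions the update typically looks like the following sketch. The certificate file name is a placeholder, and other distributions use different tools and paths.

```bash
# Copy the root CA certificate (PEM encoded, .crt extension) into the local CA store
sudo cp microsoft-rsa-root-ca.crt /usr/local/share/ca-certificates/

# Regenerate the symlinks in /etc/ssl/certs and the consolidated CA bundle
sudo update-ca-certificates
```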
+ ## Public key encryption and signature algorithms Support for the following algorithms, elliptical curves, and key sizes are required:
OCSP:
- `oneocsp.microsoft.com` - `status.geotrust.com`
+## Certificate Pinning
+
+Certificate Pinning is a security technique where only authorized, or *pinned*, certificates are accepted when establishing a secure session. Any attempt to establish a secure session using a different certificate is rejected. Learn about the history and implications of [certificate pinning](certificate-pinning.md).
+
+### How to address certificate pinning
+
+If your application explicitly specifies a list of acceptable CAs, you may periodically need to update pinned certificates when Certificate Authorities change or expire.
+
+To detect certificate pinning, we recommend taking the following steps:
+
+- If you're an application developer, search your source code for references to certificate thumbprints, Subject Distinguished Names, Common Names, serial numbers, public keys, and other certificate properties of any of the Sub CAs involved in this change (a simple text search is sketched after this list).
+ - If there's a match, update the application to include the missing CAs.
+- If you have an application that integrates with Azure APIs or other Azure services and you're unsure if it uses certificate pinning, check with the application vendor.
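As a rough starting point, a plain text search over the code base can surface hard-coded certificate properties. The path and patterns below are examples only; adjust them to your repository layout and the CAs you pin.

```bash
# Flag likely pinning: hard-coded thumbprints, subject or issuer names in source code
grep -rniE "thumbprint|serial ?number|digicert|baltimore cybertrust|microsoft (rsa|ecc) (root|tls)" ./src
```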
+
+## Java Applications
+
+To determine if the **Microsoft ECC Root Certificate Authority 2017** and **Microsoft RSA Root Certificate Authority 2017** root certificates are trusted by your Java application, you can check the list of trusted root certificates used by the Java Virtual Machine (JVM).
+
+1. Open a terminal window on your system.
+1. Run the following command:
+ ```bash
+ keytool -list -keystore $JAVA_HOME/jre/lib/security/cacerts
+ ```
+ - `$JAVA_HOME` refers to the path to the Java home directory.
+ - If you're unsure of the path, you can find it by running the following command:
+
+ ```bash
+ readlink -f $(which java) | xargs dirname | xargs dirname
+ ```
+
+1. Look for the **Microsoft ECC Root Certificate Authority 2017** and **Microsoft RSA Root Certificate Authority 2017** certificates in the output.
+   - If these root certificates are trusted, they appear in the list of trusted root certificates used by the JVM.
+   - If they're not in the list, you'll need to add them.
+   - The output should look like the following sample:
+
+ ```bash
+ ...
+ Microsoft ECC Root Certificate Authority 2017, 20-Aug-2022, Root CA,
+ Microsoft RSA Root Certificate Authority 2017, 20-Aug-2022, Root CA,
+ ...
+ ```
++
+1. To add a root certificate to the trusted root certificate store in Java, you can use the `keytool` utility. The following example adds both the **Microsoft ECC Root Certificate Authority 2017** and **Microsoft RSA Root Certificate Authority 2017** root certificates:
+    ```bash
+    keytool -import -file microsoft-ecc-root-ca.crt -alias microsoft-ecc-root-ca -keystore $JAVA_HOME/jre/lib/security/cacerts
+    keytool -import -file microsoft-rsa-root-ca.crt -alias microsoft-rsa-root-ca -keystore $JAVA_HOME/jre/lib/security/cacerts
+ ```
+ > [!NOTE]
+ > In this example, `microsoft-ecc-root-ca.crt` and `microsoft-rsa-root-ca.crt` are the names of the files that contain the **Microsoft ECC Root Certificate Authority 2017** and **Microsoft RSA Root Certificate Authority 2017** root certificates, respectively.
+ ## Past changes

-Microsoft updated Azure services to use TLS certificates from a different set of Root Certificate Authorities (CAs) on February 15, 2021, to comply with changes set forth by the CA/Browser Forum.
+Microsoft updated Azure services to use TLS certificates from a different set of Root Certificate Authorities (CAs) on February 15, 2021, to comply with changes set forth by the CA/Browser Forum.
### Article change log
-February 7, 2023: Added 8 new subordinate Certificate Authorities
+- February 7, 2023: Added eight new subordinate Certificate Authorities
+- March 1, 2023: Provided timelines for upcoming sub CA expiration
## Next steps
security Certificate Pinning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/certificate-pinning.md
+
+ Title: Certificate pinning
+
+description: Information about the history, usage, and risks of certificate pinning.
++++ Last updated : 04/11/2023++++++
+# What is Certificate pinning?
+
+Certificate Pinning is a security technique where only authorized, or *pinned*, certificates are accepted when establishing a secure session. Any attempt to establish a secure session using a different certificate is rejected.
+
+## Certificate pinning history
+Certificate pinning was originally devised as a means of thwarting Man-in-the-Middle (MITM) attacks. Certificate pinning first became popular in 2011 as the result of the DigiNotar Certificate Authority (CA) compromise, where an attacker was able to create wildcard certificates for several high-profile websites including Google. Chrome was updated to "pin" the current certificates for Google's websites and would reject any connection if a different certificate was presented. Even if an attacker found a way to convince a CA into issuing a fraudulent certificate, it would still be recognized by Chrome as invalid, and the connection rejected.
+
+Though web browsers such as Chrome and Firefox were among the first applications to implement this technique, the range of use cases rapidly expanded. Internet of Things (IoT) devices, iOS and Android mobile apps, and a disparate collection of software applications began using this technique to defend against Man-in-the-Middle attacks.
+
+For several years, certificate pinning was considered good security practice. Since then, oversight of the public Public Key Infrastructure (PKI) landscape has improved, with greater transparency into the issuance practices of publicly trusted CAs.
+
+## How to address certificate pinning in your application
+
+Typically, an application contains a list of authorized certificates or properties of certificates including Subject Distinguished Names, thumbprints, serial numbers, and public keys. Applications may pin against individual leaf or end-entity certificates, subordinate CA certificates, or even Root CA certificates.
+
+If your application explicitly specifies a list of acceptable CAs, you may periodically need to update pinned certificates when Certificate Authorities change or expire. To detect certificate pinning, we recommend taking the following steps:
+
+- If you're an application developer, search your source code for any of the following references for the CA that is changing or expiring. If there's a match, update the application to include the missing CAs.
+ - Certificate thumbprints
+ - Subject Distinguished Names
+ - Common Names
+ - Serial numbers
+ - Public keys
+ - Other certificate properties
+
+- If you have an application that integrates with Azure APIs or other Azure services and you're unsure if it uses certificate pinning, check with the application vendor.
+
+## Certificate pinning limitations
+The practice of certificate pinning has become widely disputed as it carries unacceptable certificate agility costs. One specific implementation, HTTP Public Key Pinning (HPKP), has been deprecated altogether.
+
+As there's no single web standard for how certificate pinning is performed, we can't offer direct guidance in detecting its usage. While we don't recommend against certificate pinning, customers should be aware of the limitations this practice creates if they choose to use it.
+
+- Ensure that the pinned certificates can be updated on short notice.
+- Industry requirements, such as the [CA/Browser Forum's Baseline Requirements for the Issuance and Management of Publicly-Trusted Certificates](https://cabforum.org/about-the-baseline-requirements/), require rotating and revoking certificates in as little as 24 hours in certain situations.
+
+## Next steps
+
+- [Check the Azure Certificate Authority details for upcoming changes](azure-CA-details.md)
+- [Review the Azure Security Fundamentals best practices and patterns](best-practices-and-patterns.md)
security Ocsp Sha 1 Sunset https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/ocsp-sha-1-sunset.md
Previously updated : 08/24/2022 Last updated : 04/11/2023
# Sunset for SHA-1 Online Certificate Standard Protocol signing
+> [!IMPORTANT]
+> This article was published concurrent with the change described, and is not being updated. For up-to-date information about CAs, see [Azure Certificate Authority details](azure-ca-details.md).
+ Microsoft is updating the Online Certificate Standard Protocol (OCSP) service to comply with a recent change to the [Certificate Authority / Browser Forum (CA/B Forum)](https://cabforum.org/) Baseline Requirements. This change requires that all publicly-trusted Public Key Infrastructures (PKIs) end usage of the SHA-1 hash algorithms for OCSP responses by May 31, 2022. Microsoft leverages certificates from multiple PKIs to secure its services. Many of those certificates already use OCSP responses that use the SHA-256 hash algorithm. This change brings all remaining PKIs used by Microsoft into compliance with this new requirement.
sentinel Create Codeless Connector https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/create-codeless-connector.md
Previously updated : 06/30/2022 Last updated : 04/11/2023 # Create a codeless connector for Microsoft Sentinel (Public preview)
-> [!IMPORTANT]
-> The Codeless Connector Platform (CCP) is currently in PREVIEW. The [Azure Preview Supplemental Terms](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
->
- The Codeless Connector Platform (CCP) provides partners, advanced users, and developers with the ability to create custom connectors, connect them, and ingest data to Microsoft Sentinel. Connectors created via the CCP can be deployed via API, an ARM template, or as a solution in the Microsoft Sentinel [content hub](sentinel-solutions.md). Connectors created using CCP are fully SaaS, without any requirements for service installations, and also include [health monitoring](monitor-data-connector-health.md) and full support from Microsoft Sentinel.
-Create your data connector by defining a JSON configuration file, with settings for how the data connector page in Microsoft Sentinel looks and works and polling settings that define how the connection works between Microsoft Sentinel and your data source.
+Create your data connector by defining JSON configurations, with settings for how the data connector page in Microsoft Sentinel looks, along with polling settings that define how the connection functions.
+
+> [!IMPORTANT]
+> The Codeless Connector Platform (CCP) is currently in PREVIEW. The [Azure Preview Supplemental Terms](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+>
**Use the following steps to create your CCP connector and connect to your data source from Microsoft Sentinel**:
Create your data connector by defining a JSON configuration file, with settings
> * Deploy your connector to your Microsoft Sentinel workspace
> * Connect Microsoft Sentinel to your data source and start ingesting data
-This article describes the syntax used in the CCP JSON configuration file and procedures for deploying your connector via API, an ARM template, or a Microsoft Sentinel solution.
+This article describes the syntax used in the CCP JSON configurations and procedures for deploying your connector via API, an ARM template, or a Microsoft Sentinel solution.
## Prerequisites
-Before building a connector, we recommend that you learn and understand how your data source behaves and exactly how Microsoft Sentinel will need to connect.
+Before building a connector, we recommend that you understand how your data source behaves and exactly how Microsoft Sentinel will need to connect.
-For example, you'll need to understand the types of authentication, pagination, and API endpoints that are required for successful connections.
+For example, you'll need to know the types of authentication, pagination, and API endpoints that are required for successful connections.
## Create a connector JSON configuration file
-To create your custom, CCP connector, create a JSON file with the following basic syntax:
-
-```json
-{
- "kind": "<name>",
- "properties": {
- "connectorUiConfig": {...
- },
- "pollingConfig": {...
- }
- }
-}
-```
-
-Fill in each of the following area with additional properties that define how your connector connects Microsoft Sentinel to your data source, and is displayed in the Azure portal:
+Your custom CCP connector has two primary JSON sections needed for deployment. Fill in these areas to define how your connector is displayed in the Azure portal and how it connects Microsoft Sentinel to your data source.
- `connectorUiConfig`. Defines the visual elements and text displayed on the data connector page in Microsoft Sentinel. For more information, see [Configure your connector's user interface](#configure-your-connectors-user-interface).
- `pollingConfig`. Defines how Microsoft Sentinel collects data from your data source. For more information, see [Configure your connector's polling settings](#configure-your-connectors-polling-settings).
-## Configure your connector's user interface
+Then, if you deploy your codeless connector via ARM, you'll wrap these sections in the ARM template for data connectors.
-This section describes the configuration for how the user interface on the data connector page appears in Microsoft Sentinel.
+Review [other CCP data connectors](https://github.com/Azure/Azure-Sentinel/tree/master/DataConnectors#codeless-connector-platform-ccp-preview--native-microsoft-sentinel-polling) as examples or download the example template, [DataConnector_API_CCP_template.json (Preview)](https://github.com/Azure/Azure-Sentinel/tree/master/DataConnectors#build-the-connector).
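As a point of reference, here's a minimal sketch of how the two sections fit together; `<name>` and the other angle-bracketed values are placeholders rather than values from a real connector:

```json
{
  "kind": "<name>",
  "properties": {
    "connectorUiConfig": {
      "title": "<connector title>",
      "publisher": "<your company name>",
      "descriptionMarkdown": "<description of the connector>",
      "graphQueriesTableName": "ExampleTable_CL"
    },
    "pollingConfig": {
      "auth": {},
      "request": {},
      "response": {},
      "paging": {}
    }
  }
}
```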
-Use the [properties supported](#ui-props) for the `connectorUiConfig` area of the [JSON configuration file](#create-a-connector-json-configuration-file) to configure the user interface displayed for your data connector in the Azure portal.
+## Configure your connector's user interface
+
+This section describes the configuration options available to customize the user interface of the data connector page.
-The following image shows a sample data connector page, highlighted with numbers that correspond to configurable areas of the user interface:
+The following image shows a sample data connector page, highlighted with numbers that correspond to notable areas of the user interface:
:::image type="content" source="media/create-codeless-connector/sample-data-connector-page.png" alt-text="Screenshot of a sample data connector page.":::

1. **Title**. The title displayed for your data connector.
-1. **Icon**. The icon displayed for your data connector.
-1. **Status**. Describes whether or not your data connector is connected to Microsoft Sentinel.
+1. **Logo**. The icon displayed for your data connector. Customizing the logo is only possible when the connector is deployed as part of a solution.
+1. **Status**. Indicates whether or not your data connector is connected to Microsoft Sentinel.
1. **Data charts**. Displays relevant queries and the amount of ingested data in the last two weeks.
-1. **Instructions tab**. Includes a **Prerequisites** section, with a list of minimal validations before the user can enable the connector, and an **Instructions**, with a list of instructions to guide the user in enabling the connector. This section can include text, buttons, forms, tables, and other common widgets to simplify the process.
+1. **Instructions tab**. Includes a **Prerequisites** section, with a list of minimal validations before the user can enable the connector, and an **Instructions** section that guides the user through enabling the connector. This section can include text, buttons, forms, tables, and other common widgets to simplify the process.
1. **Next steps tab**. Includes useful information for understanding how to find data in the event logs, such as sample queries.
-<a name="ui-props"></a>The `connectorUiConfig` section of the configuration file includes the following properties:
+Here are the `connectorUiConfig` sections and syntax needed to configure the user interface:
+
+|Property Name |Type |Description |
+|:|:||
+|**availability** | `{`<br>`"status": 1,`<br>`"isPreview":` Boolean<br>`}` | **status**: **1** indicates the connector is generally available to customers. <br>**isPreview**: indicates whether to append a (Preview) suffix to the connector name. |
+|**connectivityCriteria** | `{`<br>`"type": SentinelKindsV2,`<br>`"value": APIPolling`<br>`}` | An object that defines how to verify if the connector is correctly defined. Use the values indicated here.|
+|**dataTypes** | [dataTypes[]](#datatypes) | A list of all data types for your connector, and a query to fetch the time of the last event for each data type. |
+|**descriptionMarkdown** | String | A description for the connector with the ability to add markdown language to enhance it. |
+|**graphQueries** | [graphQueries[]](#graphqueries) | Queries that present data ingestion over the last two weeks in the **Data charts** pane.<br><br>Provide either one query for all of the data connector's data types, or a different query for each data type. |
+|**graphQueriesTableName** | String | Defines the name of the Log Analytics table from which data for your queries is pulled. <br><br>The table name can be any string, but must end in `_CL`. For example: `TableName_CL`|
+|**instructionsSteps** | [instructionSteps[]](#instructionsteps) | An array of widget parts that explain how to install the connector, displayed on the **Instructions** tab. |
+|**metadata** | [metadata](#metadata) | Metadata displayed under the connector description. |
+|**permissions** | [permissions[]](#permissions) | The information displayed under the **Prerequisites** section of the UI, listing the permissions required to enable or disable the connector. |
+|**publisher** | String | This is the text shown in the **Provider** section. |
+|**sampleQueries** | [sampleQueries[]](#samplequeries) | Sample queries for the customer to understand how to find the data in the event log, to be displayed in the **Next steps** tab. |
+|**title** | String |Title displayed in the data connector page. |
+
+Putting all these pieces together is complicated. Use the [connector page user experience validation tool](#validate-the-data-connector-page-user-experience) to test out the components you put together.
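For example, the `availability` and `connectivityCriteria` values called out in the table typically take the following shape; this is a sketch that mirrors the table, so confirm the exact form against the example template linked above:

```json
"availability": {
  "status": 1,
  "isPreview": false
},
"connectivityCriteria": {
  "type": "SentinelKindsV2",
  "value": "APIPolling"
}
```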
+### dataTypes
-|Name |Type |Description |
+|Array Value |Type |Description |
||||
-|**id** | GUID | A distinct ID for the connector. |
-|**title** | String |Title displayed in the data connector page. |
-|**publisher** | String | Your company name. |
-|**descriptionMarkdown** | String, in markdown | A description for the connector. |
-|**additionalRequirementBanner** | String, in markdown | Text for the **Prerequisites** section of the **Instructions** tab. |
-| **graphQueriesTableName** | String | Defines the name of the Log Analytics table from which data for your queries is pulled. <br><br>The table name can be any string, but must end in `_CL`. For example: `TableName_CL`
-|**graphQueries** | [GraphQuery[]](#graphquery) | Queries that present data ingestion over the last two weeks in the **Data charts** pane.<br><br>Provide either one query for all of the data connector's data types, or a different query for each data type. |
-|**sampleQueries** | [SampleQuery[]](#samplequery) | Sample queries for the customer to understand how to find the data in the event log, to be displayed in the **Next steps** tab. |
-|**dataTypes** | [DataTypes[]](#datatypes) | A list of all data types for your connector, and a query to fetch the time of the last event for each data type. |
-|**connectivityCriteria** | [ConnectivityCriteria[]](#connectivitycriteria) |An object that defines how to verify if the connector is correctly defined. |
-|**availability** | `{`<br>` status: Number,`<br>` isPreview: Boolean`<br>`}` | One of the following values: <br><br>- **1**: Connector is generally available to customers. <br>- **isPreview**: Indicates that the connector is not yet generally available. |
-|**permissions** | [RequiredConnectorPermissions[]](#requiredconnectorpermissions) | Lists the permissions required to enable or disable the connector. |
-|**instructionsSteps** | [InstructionStep[]](#instructionstep) | An array of widget parts that explain how to install the connector, displayed on the **Instructions** tab. |
-|**metadata** | [Metadata](#metadata) | ARM template metadata, for deploying the connector as an ARM template. |
--
-### GraphQuery
+| **name** | String | A meaningful description for the `lastDataReceivedQuery` query, including support for a variable. <br><br>Example: `{{graphQueriesTableName}}` |
+| **lastDataReceivedQuery** | String | A KQL query that returns one row, and indicates the last time data was received, or no data if there is no relevant data. <br><br>Example: `{{graphQueriesTableName}}\n | summarize Time = max(TimeGenerated)\n | where isnotempty(Time)` |
+
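Using the example values from the table, a `dataTypes` entry might look like this sketch:

```json
"dataTypes": [
  {
    "name": "{{graphQueriesTableName}}",
    "lastDataReceivedQuery": "{{graphQueriesTableName}}\n | summarize Time = max(TimeGenerated)\n | where isnotempty(Time)"
  }
]
```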
+### graphQueries
Defines a query that presents data ingestion over the last two weeks in the **Data charts** pane. Provide either one query for all of the data connector's data types, or a different query for each data type.
-|Name |Type |Description |
+|Array Value |Type |Description |
||||
|**metricName** | String | A meaningful name for your graph. <br><br>Example: `Total data received` |
|**legend** | String | The string that appears in the legend to the right of the chart, including a variable reference.<br><br>Example: `{{graphQueriesTableName}}` |
-|**baseQuery** | String | The query that filters for relevant events, including a variable reference. <br><br>Example: `TableName | where ProviderName == ΓÇ£myproviderΓÇ¥` or `{{graphQueriesTableName}}` |
---
-### SampleQuery
-
-|Name |Type |Description |
-||||
-| **Description** | String | A meaningful description for the sample query.<br><br>Example: `Top 10 vulnerabilities detected` |
-| **Query** | String | Sample query used to fetch the data type's data. <br><br>Example: `{{graphQueriesTableName}}\n | sort by TimeGenerated\n | take 10` |
-
+|**baseQuery** | String | The query that filters for relevant events, including a variable reference. <br><br>Example: `TableName_CL | where ProviderName == "myprovider"` or `{{graphQueriesTableName}}` |
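Using the example values from the table, a single `graphQueries` entry might look like this sketch:

```json
"graphQueries": [
  {
    "metricName": "Total data received",
    "legend": "{{graphQueriesTableName}}",
    "baseQuery": "{{graphQueriesTableName}}"
  }
]
```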
-### DataTypes
-
-|Name |Type |Description |
-||||
-| **dataTypeName** | String | A meaningful description for the`lastDataReceivedQuery` query, including support for a variable. <br><br>Example: `{{graphQueriesTableName}}` |
-| **lastDataReceivedQuery** | String | A query that returns one row, and indicates the last time data was received, or no data if there is no relevant data. <br><br>Example: `{{graphQueriesTableName}}\n | summarize Time = max(TimeGenerated)\n | where isnotempty(Time)`
---
-### ConnectivityCriteria
-
-|Name |Type |Description |
-||||
-| **type** | ENUM | Always define this value as `SentinelKindsV2`. |
-| **value** | deprecated |N/A |
--
-### Availability
-
-|Name |Type |Description |
-||||
-| **status** | Boolean | Determines whether or not the data connector is available in your workspace. <br><br>Example: `1`|
-| **isPreview** | Boolean |Determines whether the data connector is supported as Preview or not. <br><br>Example: `false` |
+### instructionSteps
+This section provides parameters that define the set of instructions that appear on your data connector page in Microsoft Sentinel.
-### RequiredConnectorPermissions
-
-|Name |Type |Description |
-||||
-| **tenant** | ENUM | Defines the required permissions, as one or more of the following values: `GlobalAdmin`, `SecurityAdmin`, `SecurityReader`, `InformationProtection` <br><br>Example: The **tenant** value displays displays in Microsoft Sentinel as: **Tenant Permissions: Requires `Global Administrator` or `Security Administrator` on the workspace's tenant**|
-| **licenses** | ENUM | Defines the required licenses, as one of the following values: `OfficeIRM`,`OfficeATP`, `Office365`, `AadP1P2`, `Mcas`, `Aatp`, `Mdatp`, `Mtp`, `IoT` <br><br>Example: The **licenses** value displays in Microsoft Sentinel as: **License: Required Azure AD Premium P2**|
-| **customs** | String | Describes any custom permissions required for your data connection, in the following syntax: <br>`{`<br>` name:string,`<br>` description:string`<br>`}` <br><br>Example: The **customs** value displays in Microsoft Sentinel as: **Subscription: Contributor permissions to the subscription of your IoT Hub.** |
-| **resourceProvider** | [ResourceProviderPermissions](#resourceproviderpermissions) | Describes any prerequisites for your Azure resource. <br><br>Example: The **resourceProvider** value displays in Microsoft Sentinel as: <br>**Workspace: write permission is required.**<br>**Keys: read permissions to shared keys for the workspace are required.**|
--
-#### ResourceProviderPermissions
-
-|Name |Type |Description |
-||||
-| **provider** | ENUM | Describes the resource provider, with one of the following values: <br>- `Microsoft.OperationalInsights/workspaces` <br>- `Microsoft.OperationalInsights/solutions`<br>- `Microsoft.OperationalInsights/workspaces/datasources`<br>- `microsoft.aadiam/diagnosticSettings`<br>- `Microsoft.OperationalInsights/workspaces/sharedKeys`<br>- `Microsoft.Authorization/policyAssignments` |
-| **providerDisplayName** | String | A query that should return one row, indicating the last time that data was received, or no data if there is no relevant data. |
-| **permissionsDisplayText** | String | Display text for *Read*, *Write*, or *Read and Write* permissions. |
-| **requiredPermissions** | [RequiredPermissionSet](#requiredpermissionset) | Describes the minimum permissions required for the connector as one of the following values: `read`, `write`, `delete`, `action` |
-| **Scope** | ENUM | Describes the scope of the data connector, as one of the following values: `Subscription`, `ResourceGroup`, `Workspace` |
--
-### RequiredPermissionSet
-
-|Name |Type |Description |
+|Array Property |Type |Description |
||||
-|**read** | boolean | Determines whether *read* permissions are required. |
-| **write** | boolean | Determines whether *write* permissions are required. |
-| **delete** | boolean | Determines whether *delete* permissions are required. |
-| **action** | boolean | Determines whether *action* permissions are required. |
+| **title** | String | Optional. Defines a title for your instructions. |
+| **description** | String | Optional. Defines a meaningful description for your instructions. |
+| **innerSteps** | Array | Optional. Defines an array of inner instruction steps. |
+| **instructions** | Array of [instructions](#instructions) | Required. Defines an array of instructions of a specific parameter type. |
+| **bottomBorder** | Boolean | Optional. When `true`, adds a bottom border to the instructions area on the connector page in Microsoft Sentinel |
+| **isComingSoon** | Boolean | Optional. When `true`, adds a **Coming soon** title on the connector page in Microsoft Sentinel |
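Here's a minimal sketch of an `instructionSteps` array; the title and description are placeholder text, and the inner instruction reuses the **CopyableLabel** parameters shown later in this article:

```json
"instructionSteps": [
  {
    "title": "Connect your data source (placeholder)",
    "description": "Placeholder description for this step.",
    "instructions": [
      {
        "parameters": {
          "fillWith": [ "WorkspaceId" ],
          "label": "Workspace ID"
        },
        "type": "CopyableLabel"
      }
    ]
  }
]
```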
+#### instructions
-### Metadata
+Displays a group of instructions, with various options as parameters and the ability to nest more instructionSteps in groups.
-This section provides metadata used when you're [deploying your data connector as an ARM template](#deploy-your-connector-in-microsoft-sentinel-and-start-ingesting-data).
+| Parameter | Array property | Description |
+|--|--|-|
+| **APIKey** | [APIKey](#apikey) | Add placeholders to your connector's JSON configuration file. |
+| **CopyableLabel** | [CopyableLabel](#copyablelabel) | Shows a text field with a copy button at the end. When the button is selected, the field's value is copied.|
+| **InfoMessage** | [InfoMessage](#infomessage) | Defines an inline information message. |
+| **InstructionStepsGroup** | [InstructionStepsGroup](#instructionstepsgroup) | Displays a group of instructions, optionally expanded or collapsible, in a separate instructions section.|
+| **InstallAgent** | [InstallAgent](#installagent) | Displays a link to other portions of Azure to accomplish various installation requirements. |
-|Name |Type |Description |
-||||
-| **id** | String | Defines a GUID for your ARM template. |
-| **kind** | String | Defines the kind of ARM template you're creating. Always use `dataConnector`. |
-| **source** | String |Describes your data source, using the following syntax: <br>`{`<br>` kind:string`<br>` name:string`<br>`}`|
-| **author** | String | Describes the data connector author, using the following syntax: <br>`{`<br>` name:string`<br>`}`|
-| **support** | String | Describe the support provided for the data connector using the following syntax: <br> `{`<br>` "tier": string,`<br>` "name": string,`<br>`"email": string,`<br> `"link": string`<br>` }`|
+#### APIKey
+You may want to create a JSON configuration file template, with placeholder parameters, to reuse across multiple connectors, or even to create a connector with data that you don't currently have.
-### Instructions
+To create placeholder parameters, define an additional array named `userRequestPlaceHoldersInput` in the [Instructions](#instructions) section of your [CCP JSON configuration](#create-a-connector-json-configuration-file) file, using the following syntax:
-This section provides parameters that define the set of instructions that appear on your data connector page in Microsoft Sentinel.
+```json
+"instructions": [
+ {
+ "parameters": {
+ "enable": "true",
+ "userRequestPlaceHoldersInput": [
+ {
+ "displayText": "Organization Name",
+ "requestObjectKey": "apiEndpoint",
+ "placeHolderName": "{{placeHolder}}"
+ }
+ ]
+ },
+ "type": "APIKey"
+ }
+ ]
+```
+The `userRequestPlaceHoldersInput` parameter includes the following attributes:
|Name |Type |Description |
||||
-| **title** | String | Optional. Defines a title for your instructions. |
-| **description** | String | Optional. Defines a meaningful description for your instructions. |
-| **innerSteps** | [InstructionStep](#instructionstep) | Optional. Defines an array of inner instruction steps. |
-| **bottomBorder** | Boolean | When `true`, adds a bottom border to the instructions area on the connector page in Microsoft Sentinel |
-| **isComingSoon** | Boolean | When `true`, adds a **Coming soon** title on the connector page in Microsoft Sentinel |
--
+|**DisplayText** | String | Defines the text box display value, which is displayed to the user when connecting. |
+|**RequestObjectKey** |String | Defines the ID in the request section of the **pollingConfig** to substitute the placeholder value with the user provided value. <br><br>If you don't use this attribute, use the `PollingKeyPaths` attribute instead. |
+|**PollingKeyPaths** |String |Defines an array of [JsonPath](https://www.npmjs.com/package/JSONPath) objects that directs the API call to anywhere in the template, to replace a placeholder value with a user value.<br><br>**Example**: `"pollingKeyPaths":["$.request.queryParameters.test1"]` <br><br>If you don't use this attribute, use the `RequestObjectKey` attribute instead. |
+|**PlaceHolderName** |String |Defines the name of the placeholder parameter in the JSON template file. This can be any unique value, such as `{{placeHolder}}`. |
#### CopyableLabel
-Shows a field with a button on the right to copy the field value. For example:
+ Example:
:::image type="content" source="media/create-codeless-connector/copy-field-value.png" alt-text="Screenshot of a copy value button in a field.":::

**Sample code**:

```json
-Implementation:
-instructions: [
- new CopyableLabelInstructionModel({
- fillWith: [ΓÇ£MicrosoftAwsAccountΓÇ¥],
- label: ΓÇ£Microsoft Account IDΓÇ¥,
- }),
- new CopyableLabelInstructionModel({
- fillWith: [ΓÇ£workspaceIdΓÇ¥],
- label: ΓÇ£External ID (WorkspaceId)ΓÇ¥,
- }),
- ]
+{
+ "parameters": {
+ "fillWith": [
+ "WorkspaceId",
+ "PrimaryKey"
+ ],
+ "label": "Here are some values you'll need to proceed.",
+ "value": "Workspace is {0} and PrimaryKey is {1}"
+ },
+ "type": "CopyableLabel"
+}
```
-**Parameters**: `CopyableLabelInstructionParameters`
-
-|Name |Type |Description |
-||||
+| Array Value |Type |Description |
+|||-|
|**fillWith** | ENUM | Optional. Array of environment variables used to populate a placeholder. Separate multiple placeholders with commas. For example: `{0},{1}` <br><br>Supported values: `workspaceId`, `workspaceName`, `primaryKey`, `MicrosoftAwsAccount`, `subscriptionId` |
|**label** | String | Defines the text for the label above a text box. |
|**value** | String | Defines the value to present in the text box, supports placeholders. |
|**rows** | Rows | Optional. Defines the rows in the user interface area. By default, set to **1**. |
|**wideLabel** |Boolean | Optional. Determines a wide label for long strings. By default, set to `false`. |

#### InfoMessage
-Defines an inline information message. For example:
+Here's an example of an inline information message:
:::image type="content" source="media/create-codeless-connector/inline-information-message.png" alt-text="Screenshot of an inline information message.":::
In contrast, the following image shows a *non*-inline information message:
:::image type="content" source="media/create-codeless-connector/non-inline-information-message.png" alt-text="Screenshot of a non-inline information message.":::
-**Sample code**:
-
-```json
-instructions: [
- new InfoMessageInstructionModel({
- text:”Microsoft Defender for Endpoint… “,
- visible: true,
- inline: true,
- }),
- new InfoMessageInstructionModel({
- text:”In order to export… “,
- visible: true,
- inline: false,
- }),
-
- ]
-```
-**Parameters**: `InfoMessageInstructionModelParameters`
-
-|Name |Type |Description |
+|Array Value |Type |Description |
||||
|**text** | String | Defines the text to display in the message. |
|**visible** | Boolean | Determines whether the message is displayed. |
|**inline** | Boolean | Determines how the information message is displayed. <br><br>- `true`: (Recommended) Shows the information message embedded in the instructions. <br>- `false`: Adds a blue background. |
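Based on the properties above and the `parameters`/`type` pattern used by the other instruction types in this article, an **InfoMessage** instruction might be sketched like this; the text is a placeholder:

```json
{
  "parameters": {
    "text": "Placeholder informational text shown to the user.",
    "visible": true,
    "inline": true
  },
  "type": "InfoMessage"
}
```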
+#### InstructionStepsGroup
+Here's an example of an expandable instruction group:
-#### LinkInstructionModel
-
-Displays a link to other pages in the Azure portal, as a button or a link. For example:
-
+|Array Value |Type |Description |
+||||
+|**title** | String | Defines the title for the instruction step. |
+|**canCollapseAllSections** | Boolean | Optional. Determines whether the section is a collapsible accordion or not. |
+|**noFxPadding** | Boolean | Optional. If `true`, reduces the height padding to save space. |
+|**expanded** | Boolean | Optional. If `true`, shows as expanded by default. |
+For a detailed example, see the configuration JSON for the [Windows DNS connector](https://github.com/Azure/Azure-Sentinel/blob/master/Solutions/Windows%20Server%20DNS/Data%20Connectors/template_DNS.JSON).
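As a sketch using only the properties listed above (the title is a placeholder; see the Windows DNS connector configuration linked earlier for a complete, nested example):

```json
{
  "parameters": {
    "title": "Placeholder group title",
    "canCollapseAllSections": true,
    "noFxPadding": true,
    "expanded": false
  },
  "type": "InstructionStepsGroup"
}
```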
-**Sample code**:
+#### InstallAgent
-```json
-new LinkInstructionModel({linkType: ΓÇ£OpenPolicyAssignmentΓÇ¥, policyDefinitionGuid: <GUID>, assignMode = ΓÇ£PolicyΓÇ¥})
+Some **InstallAgent** types appear as a button; others appear as a link. Here are examples of both:
-new LinkInstructionModel({ linkType: LinkType.OpenAzureActivityLog } )
-```
-**Parameters**: `InfoMessageInstructionModelParameters`
-|Name |Type |Description |
+|Array Values |Type |Description |
||||
|**linkType** | ENUM | Determines the link type, as one of the following values: <br><br>`InstallAgentOnWindowsVirtualMachine`<br>`InstallAgentOnWindowsNonAzure`<br>`InstallAgentOnLinuxVirtualMachine`<br>`InstallAgentOnLinuxNonAzure`<br>`OpenSyslogSettings`<br>`OpenCustomLogsSettings`<br>`OpenWaf`<br>`OpenAzureFirewall`<br>`OpenMicrosoftAzureMonitoring`<br>`OpenFrontDoors`<br>`OpenCdnProfile`<br>`AutomaticDeploymentCEF`<br>`OpenAzureInformationProtection`<br>`OpenAzureActivityLog`<br>`OpenIotPricingModel`<br>`OpenPolicyAssignment`<br>`OpenAllAssignmentsBlade`<br>`OpenCreateDataCollectionRule` |
-|**policyDefinitionGuid** | String | Optional. For policy-based connectors, defines the GUID of the built-in policy definition. |
+|**policyDefinitionGuid** | String | Required when using the **OpenPolicyAssignment** linkType. For policy-based connectors, defines the GUID of the built-in policy definition. |
|**assignMode** | ENUM | Optional. For policy-based connectors, defines the assign mode, as one of the following values: `Initiative`, `Policy` |
|**dataCollectionRuleType** | ENUM | Optional. For DCR-based connectors, defines the type of data collection rule, as one of the following values: `SecurityEvent`, `ForwardEvent` |
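For example, an **InstallAgent** instruction for a DCR-based connector might be sketched as follows, using one of the `linkType` values from the table:

```json
{
  "parameters": {
    "linkType": "OpenCreateDataCollectionRule",
    "dataCollectionRuleType": "SecurityEvent"
  },
  "type": "InstallAgent"
}
```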
-To define an inline link using markdown, use the following example as a guide:
+### metadata
-```markdown
-<value>Follow the instructions found on article [Connect Microsoft Sentinel to your threat intelligence platform]({0}). Once the application is created you will need to record the Tenant ID, Client ID and Client Secret.</value>
-```
+This section provides metadata in the data connector UI under the **Description** area.
-The code sample listed above shows an inline link that looks like the following image:
+| Collection Value |Type |Description |
+||||
+| **kind** | String | Defines the kind of ARM template you're creating. Always use `dataConnector`. |
+| **source** | String | Describes your data source, using the following syntax: <br>`{`<br>`"kind":`string<br>`"name":`string<br>`}`|
+| **author** | String | Describes the data connector author, using the following syntax: <br>`{`<br>`"name":`string<br>`}`|
+| **support** | String | Describe the support provided for the data connector using the following syntax: <br>`{`<br>`"tier":`string,<br>`"name":`string,<br>`"email":`string,<br>`"link":`URL string<br>`}`|
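Here's a sketch of a `metadata` object that follows the syntax above; the source, author, and support values are placeholders, not values from a published connector:

```json
"metadata": {
  "kind": "dataConnector",
  "source": {
    "kind": "solution",
    "name": "Placeholder solution name"
  },
  "author": {
    "name": "Placeholder author name"
  },
  "support": {
    "tier": "Partner",
    "name": "Placeholder support contact",
    "email": "support@contoso.com",
    "link": "https://contoso.com/support"
  }
}
```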
+### permissions
-To define a link as an ARM template, use the following example as a guide:
+|Array value |Type |Description |
+||||
+| **customs** | String | Describes any custom permissions required for your data connection, in the following syntax: <br>`{`<br>`"name":`string`,`<br>`"description":`string<br>`}` <br><br>Example: The **customs** value displays in Microsoft Sentinel **Prerequisites** section with a blue informational icon. In the GitHub example, this correlates to the line **GitHub API personal token Key: You need access to GitHub personal token...** |
+| **licenses** | ENUM | Defines the required licenses, as one of the following values: `OfficeIRM`,`OfficeATP`, `Office365`, `AadP1P2`, `Mcas`, `Aatp`, `Mdatp`, `Mtp`, `IoT` <br><br>Example: The **licenses** value displays in Microsoft Sentinel as: **License: Required Azure AD Premium P2**|
+| **resourceProvider** | [resourceProvider](#resourceprovider) | Describes any prerequisites for your Azure resource. <br><br>Example: The **resourceProvider** value displays in Microsoft Sentinel **Prerequisites** section as: <br>**Workspace: read and write permission is required.**<br>**Keys: read permissions to shared keys for the workspace are required.**|
+| **tenant** | array of ENUM values<br>Example:<br><br>`"tenant": [`<br>`"GlobalAdmin",`<br>`"SecurityAdmin"`<br>`]`<br> | Defines the required permissions, as one or more of the following values: `"GlobalAdmin"`, `"SecurityAdmin"`, `"SecurityReader"`, `"InformationProtection"` <br><br>Example: The **tenant** value displays in Microsoft Sentinel as: **Tenant Permissions: Requires `Global Administrator` or `Security Administrator` on the workspace's tenant**|
-```markdown
- <value>1. Click the **Deploy to Azure** button below.
-[![Deploy To Azure]({0})]({1})
-```
+#### resourceProvider
-The code sample listed above shows a link button that looks like the following image:
+|sub array value |Type |Description |
+||||
+| **provider** | ENUM | Describes the resource provider, with one of the following values: <br>- `Microsoft.OperationalInsights/workspaces` <br>- `Microsoft.OperationalInsights/solutions`<br>- `Microsoft.OperationalInsights/workspaces/datasources`<br>- `microsoft.aadiam/diagnosticSettings`<br>- `Microsoft.OperationalInsights/workspaces/sharedKeys`<br>- `Microsoft.Authorization/policyAssignments` |
+| **providerDisplayName** | String | A list item under **Prerequisites** that displays a red "x" or a green check mark when the **requiredPermissions** are validated on the connector page. For example, `"Workspace"`. |
+| **permissionsDisplayText** | String | Display text for *Read*, *Write*, or *Read and Write* permissions that should correspond to the values configured in **requiredPermissions**. |
+| **requiredPermissions** | `{`<br>`"action":`Boolean`,`<br>`"delete":`Boolean`,`<br>`"read":`Boolean`,`<br>`"write":`Boolean<br>`}` | Describes the minimum permissions required for the connector. |
+| **scope** | ENUM | Describes the scope of the data connector, as one of the following values: `"Subscription"`, `"ResourceGroup"`, `"Workspace"` |
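Putting the pieces above together, a `permissions` block might be sketched like this; the exact nesting can vary, so check the example connectors linked in this article, and treat the custom permission text as a placeholder:

```json
"permissions": {
  "customs": [
    {
      "name": "Placeholder API token",
      "description": "You need access to a personal API token for your data source."
    }
  ],
  "resourceProvider": [
    {
      "provider": "Microsoft.OperationalInsights/workspaces",
      "providerDisplayName": "Workspace",
      "permissionsDisplayText": "read and write permissions are required.",
      "requiredPermissions": {
        "read": true,
        "write": true,
        "delete": false,
        "action": false
      },
      "scope": "Workspace"
    }
  ],
  "tenant": [
    "GlobalAdmin",
    "SecurityAdmin"
  ]
}
```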
-ΓÇâ:::image type="content" source="media/create-codeless-connector/sample-markdown-link-button.png" alt-text="Screenshot of the link button created by the earlier sample markdown.":::
+### sampleQueries
-#### InstructionStep
+|array value |Type |Description |
+||||
+| **description** | String | A meaningful description for the sample query.<br><br>Example: `Top 10 vulnerabilities detected` |
+| **query** | String | Sample query used to fetch the data type's data. <br><br>Example: `{{graphQueriesTableName}}\n | sort by TimeGenerated\n | take 10` |
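Using the example values from the table, a `sampleQueries` entry might look like this sketch:

```json
"sampleQueries": [
  {
    "description": "Top 10 vulnerabilities detected",
    "query": "{{graphQueriesTableName}}\n | sort by TimeGenerated\n | take 10"
  }
]
```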
-Displays a group of instructions, as an expandable accordion or non-expandable, separate from the main instructions section.
+### Configure other link options
-For example:
+To define an inline link using markdown, use the following example. Here, a link is provided in an instruction description:
+```json
+{
+ "title": "",
+ "description": "Make sure to configure the machine's security according to your organization's security policy\n\n\n[Learn more >](https://aka.ms/SecureCEF)"
+}
+```
-**Parameters**: `InstructionStepsGroupModelParameters`
+To define a link as an ARM template, use the following example as a guide:
-|Name |Type |Description |
-||||
-|**title** | String | Defines the title for the instruction step. |
-|**instructionSteps** | [InstructionStep[]](#instructionstep) | Optional. Defines an array of inner instruction steps. |
-|**canCollapseAllSections** | Boolean | Optional. Determines whether the section is a collapsible accordion or not. |
-|**noFxPadding** | Boolean | Optional. If `true`, reduces the height padding to save space. |
-|**expanded** | Boolean | Optional. If `true`, shows as expanded by default. |
+```json
+{
+ "title": "Azure Resource Manager (ARM) template",
+ "description": "1. Click the **Deploy to Azure** button below.\n\n\t[![Deploy To Azure](https://aka.ms/deploytoazurebutton)]({URL to custom ARM template})"
+}
+```
+### Validate the data connector page user experience
+Follow these steps to render and validate the connector user experience:
+
+1. Open the test utility at this URL: https://aka.ms/sentineldataconnectorvalidateurl
+1. Go to **Microsoft Sentinel** > **Data connectors**.
+1. Select the **import** button and choose a JSON file that contains only the `connectorUiConfig` section of your data connector.
+For more information on this validation tool, see the [Build the connector](https://github.com/Azure/Azure-Sentinel/tree/master/DataConnectors#build-the-connector) instructions in our GitHub build guide.
+> [!NOTE]
+> Because the **APIKey** instruction parameter is specific to the codeless connector, temporarily remove this section to use the validation tool, or it will fail.
+>
## Configure your connector's polling settings
The following code shows the syntax of the `pollingConfig` section of the [CCP c
```json
"pollingConfig": {
- auth": {
- "authType": <string>,
+ "auth": {
},
- "request": {…
+ "request": {
},
- "response": {…
+ "response": {
},
- "paging": {…
+ "paging": {
    }
 }
```
The `pollingConfig` section includes the following properties:
| Name | Type | Description |
| ---- | ---- | ----------- |
-| **id** | String | Mandatory. Defines a unique identifier for a rule or configuration entry, using one of the following values: <br><br>- A GUID (recommended) <br>- A document ID, if the data source resides in Azure Cosmos DB |
| **auth** | String | Describes the authentication properties for polling the data. For more information, see [auth configuration](#auth-configuration). |
| <a name="authtype"></a>**auth.authType** | String | Mandatory. Defines the type of authentication, nested inside the `auth` object, as one of the following values: `Basic`, `APIKey`, `OAuth2` |
| **request** | Nested JSON | Mandatory. Describes the request payload for polling the data, such as the API endpoint. For more information, see [request configuration](#request-configuration). |
The following code shows an example of the `pollingConfig` section of the [CCP c
} ```
-## Add placeholders to your connector's JSON configuration file
-
-You may want to create a JSON configuration file template, with placeholders parameters, to reuse across multiple connectors, or even to create a connector with data that you don't currently have.
-
-To create placeholder parameters, define an additional array named `userRequestPlaceHoldersInput` in the [Instructions](#instructions) section of your [CCP JSON configuration](#create-a-connector-json-configuration-file) file, using the following syntax:
-
-```json
-"instructions": [
- {
- "parameters": {
- "enable": "true",
- "userRequestPlaceHoldersInput": [
- {
- "displayText": "Organization Name",
- "requestObjectKey": "apiEndpoint",
- "placeHolderName": "{{placeHolder1}}"
- }
- ]
- },
- "type": "APIKey"
- }
- ]
-```
-
-The `userRequestPlaceHoldersInput` parameter includes the following attributes:
-
-|Name |Type |Description |
-||||
-|**DisplayText** | String | Defines the text box display value, which is displayed to the user when connecting. |
-|**RequestObjectKey** |String | Defines the ID used to identify where in the request section of the API call to replace the placeholder value with a user value. <br><br>If you don't use this attribute, use the `PollingKeyPaths` attribute instead. |
-|**PollingKeyPaths** |String |Defines an array of [JsonPath](https://www.npmjs.com/package/JSONPath) objects that directs the API call to anywhere in the template, to replace a placeholder value with a user value.<br><br>**Example**: `"pollingKeyPaths":["$.request.queryParameters.test1"]` <br><br>If you don't use this attribute, use the `RequestObjectKey` attribute instead. |
-|**PlaceHolderName** |String |Defines the name of the placeholder parameter in the JSON template file. This can be any unique value, such as `{{placeHolder}}`. |
-- ## Deploy your connector in Microsoft Sentinel and start ingesting data
After creating your [JSON configuration file](#create-a-connector-json-configura
# [Deploy via ARM template](#tab/deploy-via-arm-template)
- Use a JSON configuration file to create an ARM template to use when deploying your connector. To ensure that your data connector gets deployed to the correct workspace, make sure to either define the workspace for the ARM template to deploy when creating your JSON file, or select the workspace when deploying the ARM template.
+ Wrap your JSON configuration collections in an ARM template to deploy your connector. To ensure that your data connector gets deployed to the correct workspace, make sure to either define the workspace in the ARM template, or select the workspace when deploying the ARM template.
1. Prepare an [ARM template JSON file](/azure/templates/microsoft.securityinsights/dataconnectors) for your connector. For example, see the following ARM template JSON files: - Data connector in the [Slack solution](https://github.com/Azure/Azure-Sentinel/tree/master/Solutions/SlackAudit/Data%20Connectors/SlackNativePollerConnector/azuredeploy_Slack_native_poller_connector.json)
- - [Atlassian Jira Audit data connector](https://github.com/Azure/Azure-Sentinel/blob/master/Solutions/AtlassianJiraAudit/Data%20Connectors/JiraNativePollerConnector/azuredeploy_Jira_native_poller_connector.json)
+ - Data connector in the [GitHub solution](https://github.com/Azure/Azure-Sentinel/blob/3d324aed163c1702ba0cab6de203ac0bf4756b8c/Solutions/GitHub/Data%20Connectors/azuredeploy_GitHub_native_poller_connector.json)
1. In the Azure portal, search for **Deploy a custom template**.
After creating your [JSON configuration file](#create-a-connector-json-configura
In your Microsoft Sentinel data connector page, follow the instructions you've provided to connect to your data connector.
- The data connector page in Microsoft Sentinel is controlled by the [InstructionStep](#instructionstep) configuration in the `connectorUiConfig` element of the [CCP JSON configuration](#create-a-connector-json-configuration-file) file. If you have issues with the user interface connection, make sure that you have the correct configuration for your authentication type.
+ The data connector page in Microsoft Sentinel is controlled by the [InstructionSteps](#instructionsteps) configuration in the `connectorUiConfig` element of the [CCP JSON configuration](#create-a-connector-json-configuration-file) file. If you have issues with the user interface connection, make sure that you have the correct configuration for your authentication type.
# [Connect via API](#tab/connect-via-api)
After creating your [JSON configuration file](#create-a-connector-json-configura
|**APIKey** |Define: <br>- `kind` as `APIKey` <br>- `APIKey` as your full API key string, in quotes|
- If you're using a [template configuration file with placeholder data](#add-placeholders-to-your-connectors-json-configuration-file), send the data together with the `placeHolderValue` attributes that hold the user data. For example:
+   If you're using [placeholder data in your template](#apikey), send the data together with the `placeHolderValue` attributes that hold the user data. For example:
```json "requestConfigUserInputValues": [
Use one of the following methods:
If you haven't yet, share your new codeless data connector with the Microsoft Sentinel community! Create a solution for your data connector and share it in the Microsoft Sentinel Marketplace.
-For more information, see [About Microsoft Sentinel solutions](sentinel-solutions.md).
+For more information, see:
+- [About Microsoft Sentinel solutions](sentinel-solutions.md).
+- [Data connector ARM template reference](/azure/templates/microsoft.securityinsights/dataconnectors#dataconnectors-objects-1)
sentinel Customer Managed Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/customer-managed-keys.md
This article provides background information and steps to configure a [customer-
- The CMK capability requires a Log Analytics dedicated cluster with at least a 500 GB/day commitment tier. Multiple workspaces can be linked to the same dedicated cluster, and they will share the same customer-managed key. -- After you complete the steps in this guide and before you use the workspace, for onboarding confirmation, contact the [Microsoft Sentinel Product Group](mailto:azuresentinelCMK@microsoft.com).
+- You must contact the [Microsoft Sentinel Product Group](mailto:azuresentinelCMK@microsoft.com) for onboarding confirmation as part of completing the steps in this guide and before you use the workspace.
- Learn about [Log Analytics Dedicated Cluster Pricing](../azure-monitor/logs/logs-dedicated-clusters.md#cluster-pricing-model).
This article provides background information and steps to configure a [customer-
- The Microsoft Sentinel CMK capability is provided only to *workspaces in Log Analytics dedicated clusters* that have *not already been onboarded to Microsoft Sentinel*. -- The following CMK-related changes *are not supported* because they will be ineffective (Microsoft Sentinel data will continue to be encrypted only by the Microsoft-managed key, and not by the CMK):
+- The following CMK-related changes *are not supported* because they are ineffective (Microsoft Sentinel data continues to be encrypted only by the Microsoft-managed key, and not by the CMK):
- Enabling CMK on a workspace that's *already onboarded* to Microsoft Sentinel. - Enabling CMK on a cluster that contains Sentinel-onboarded workspaces.
This article provides background information and steps to configure a [customer-
- Changing the customer-managed key to another key (with another URI) currently *isn't supported*. You should change the key by [rotating it](../azure-monitor/logs/customer-managed-keys.md#key-rotation). - Before you make any CMK changes to a production workspace or to a Log Analytics cluster, contact the [Microsoft Sentinel Product Group](mailto:azuresentinelCMK@microsoft.com).
+- CMK-enabled workspaces don't support [search jobs](investigate-large-datasets.md).
## How CMK works
-The Microsoft Sentinel solution uses several storage resources for log collection and features, including a Log Analytics dedicated cluster. As part of the Microsoft Sentinel CMK configuration, you will have to configure the CMK settings on the related Log Analytics dedicated cluster. Data saved by Microsoft Sentinel in storage resources other than Log Analytics will also be encrypted using the customer-managed key configured for the dedicated Log Analytics cluster.
+The Microsoft Sentinel solution uses several storage resources for log collection and features, including a Log Analytics dedicated cluster. As part of the Microsoft Sentinel CMK configuration, you must configure the CMK settings on the related Log Analytics dedicated cluster. Data saved by Microsoft Sentinel in storage resources other than Log Analytics is also encrypted using the customer-managed key configured for the dedicated Log Analytics cluster.
-See the following additional relevant documentation:
+For more information, see:
- [Azure Monitor customer-managed keys (CMK)](../azure-monitor/logs/customer-managed-keys.md). - [Azure Key Vault](../key-vault/general/overview.md). - [Log Analytics dedicated clusters](../azure-monitor/logs/logs-dedicated-clusters.md). > [!NOTE]
-> If you enable CMK on Microsoft Sentinel, any Public Preview feature that does not support CMK will not be enabled.
+> If you enable CMK on Microsoft Sentinel, any Public Preview feature that doesn't support CMK isn't enabled.
## Enable CMK
To provision CMK, follow these steps: 
3. Register to the Azure Cosmos DB Resource Provider.
4. Add an access policy to your Azure Key Vault instance.
-5. Onboard the workspace to Microsoft Sentinel via the [Onboarding API](https://github.com/Azure/Azure-Sentinel/raw/master/docs/Azure%20Sentinel%20management.docx).
+5. Contact the Microsoft Sentinel Product group to confirm onboarding.
+6. Onboard the workspace to Microsoft Sentinel via the [Onboarding API](https://github.com/Azure/Azure-Sentinel/raw/master/docs/Azure%20Sentinel%20management.docx).
### STEP 1: Create an Azure Key Vault and generate or import a key
To provision CMK, follow these steps: 
### STEP 2: Enable CMK on your Log Analytics workspace
-Follow the instructions in [Azure Monitor customer-managed key configuration](../azure-monitor/logs/customer-managed-keys.md) in order to create a CMK workspace that will be used as the Microsoft Sentinel workspace in the following steps.
+Follow the instructions in [Azure Monitor customer-managed key configuration](../azure-monitor/logs/customer-managed-keys.md) in order to create a CMK workspace that is used as the Microsoft Sentinel workspace in the following steps.
### STEP 3: Register to the Azure Cosmos DB Resource Provider
Follow the instructions to [Register the Azure Cosmos DB Resource Provider](../c
Make sure to add access from Azure Cosmos DB to your Azure Key Vault instance. Follow the Azure Cosmos DB instructions to [add an access policy to your Azure Key Vault instance](../cosmos-db/how-to-setup-cmk.md#add-access-policy) with an Azure Cosmos DB principal.
-### STEP 5: Onboard the workspace to Microsoft Sentinel via the onboarding API
+### STEP 5: Contact the Microsoft Sentinel Product group to confirm onboarding
+
+You must confirm onboarding of your CMK enabled workspace by contacting the [Microsoft Sentinel Product Group](mailto:azuresentinelCMK@microsoft.com).
+
+### STEP 6: Onboard the workspace to Microsoft Sentinel via the onboarding API
Onboard the workspace to Microsoft Sentinel via the [Onboarding API](https://github.com/Azure/Azure-Sentinel/raw/master/docs/Azure%20Sentinel%20management.docx). ## Key Encryption Key revocation or deletion
-In the event that a user revokes the key encryption key (the CMK), either by deleting it or removing access for the dedicated cluster and Azure Cosmos DB Resource Provider, Microsoft Sentinel will honor the change and behave as if the data is no longer available, within one hour. At this point, any operation that uses persistent storage resources such as data ingestion, persistent configuration changes, and incident creation, will be prevented. Previously stored data will not be deleted but will remain inaccessible. Inaccessible data is governed by the data-retention policy and will be purged in accordance with that policy.
+If a user revokes the key encryption key (the CMK), either by deleting it or removing access for the dedicated cluster and Azure Cosmos DB Resource Provider, Microsoft Sentinel honors the change and behaves as if the data is no longer available, within one hour. At this point, any operation that uses persistent storage resources, such as data ingestion, persistent configuration changes, and incident creation, is prevented. Previously stored data isn't deleted but remains inaccessible. Inaccessible data is governed by the data-retention policy and is purged in accordance with that policy.
The only operation possible after the encryption key is revoked or deleted is account deletion.
-If access is restored after revocation, Microsoft Sentinel will restore access to the data within an hour.
+If access is restored after revocation, Microsoft Sentinel restores access to the data within an hour.
-Access to the data can be revoked by disabling the customer-managed key in the key vault, or deleting the access policy to the key, for both the dedicated Log Analytics cluster and Azure Cosmos DB. Revoking access by removing the key from the dedicated Log Analytics cluster, or by removing the identity associated with the dedicated Log Analytics cluster is not supported.
+Access to the data can be revoked by disabling the customer-managed key in the key vault, or by deleting the access policy to the key, for both the dedicated Log Analytics cluster and Azure Cosmos DB. Revoking access by removing the key from the dedicated Log Analytics cluster, or by removing the identity associated with that cluster, isn't supported.
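As an illustration only, here's a minimal Python sketch of revoking access by disabling the key, assuming the `azure-identity` and `azure-keyvault-keys` packages and placeholder vault and key names:

```python
# Illustrative only: disable a customer-managed key to revoke access.
# The vault URL and key name are placeholders, not values from this article.
from azure.identity import DefaultAzureCredential
from azure.keyvault.keys import KeyClient

credential = DefaultAzureCredential()
key_client = KeyClient(
    vault_url="https://<your-key-vault>.vault.azure.net",
    credential=credential,
)

# Disabling the key blocks wrap/unwrap operations, so the dedicated Log Analytics
# cluster and Azure Cosmos DB lose access to the encrypted data within about an hour.
key_client.update_key_properties("<your-cmk-name>", enabled=False)
```

Re-enabling the key (setting `enabled=True`) corresponds to the access-restoration case described above, after which access to the data is restored within an hour.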
To understand more about how this works in Azure Monitor, see [Azure Monitor CMK revocation](../azure-monitor/logs/customer-managed-keys.md#key-revocation).
After rotating a key, you must explicitly update the dedicated Log Analytics clu
## Replacing a customer-managed key
-Microsoft Sentinel does not support replacing a customer-managed key. You should use the [key rotation capability](#customer-managed-key-rotation) instead.
+Microsoft Sentinel doesn't support replacing a customer-managed key. You should use the [key rotation capability](#customer-managed-key-rotation) instead.
## Next steps

In this document, you learned how to set up a customer-managed key in Microsoft Sentinel. To learn more about Microsoft Sentinel, see the following articles:
sentinel Abnormalsecurity Using Azure Function https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/abnormalsecurity-using-azure-function.md
ABNORMAL_CASES_CL
To integrate with AbnormalSecurity (using Azure Function) make sure you have: -- **Microsoft.Web/sites permissions**: Read and write permissions to Azure Functions to create a Function App is required. [See the documentation to learn more about Azure Functions](https://learn.microsoft.com/azure/azure-functions/).
+- **Microsoft.Web/sites permissions**: Read and write permissions to Azure Functions to create a Function App is required. [See the documentation to learn more about Azure Functions](/azure/azure-functions).
- **Abnormal Security API Token**: An Abnormal Security API Token is required. [See the documentation to learn more about Abnormal Security API](https://app.swaggerhub.com/apis/abnormal-security/abx/). **Note:** An Abnormal Security account is required
Use the following step-by-step instructions to deploy the Abnormal Security data
**1. Deploy a Function App**
-> **NOTE:** You will need to [prepare VS code](https://learn.microsoft.com/azure/azure-functions/create-first-function-vs-code-python) for Azure function development.
+> **NOTE:** You will need to [prepare VS code](/azure/azure-functions/create-first-function-vs-code-python) for Azure function development.
1. Download the [Azure Function App](https://aka.ms/sentinel-abnormalsecurity-functionapp) file. Extract archive to your local development computer. 2. Start VS Code. Choose File in the main menu and select Open Folder.
If you're already signed in, go to the next step.
logAnalyticsUri (optional) (add any other settings required by the Function App) Set the `uri` value to: `<add uri value>`
->Note: If using Azure Key Vault secrets for any of the values above, use the`@Microsoft.KeyVault(SecretUri={Security Identifier})`schema in place of the string values. Refer to [Azure Key Vault references documentation](https://learn.microsoft.com/azure/app-service/app-service-key-vault-references) for further details.
+>Note: If using Azure Key Vault secrets for any of the values above, use the`@Microsoft.KeyVault(SecretUri={Security Identifier})`schema in place of the string values. Refer to [Azure Key Vault references documentation](/azure/app-service/app-service-key-vault-references) for further details.
- Use logAnalyticsUri to override the log analytics API endpoint for dedicated cloud. For example, for public cloud, leave the value empty; for Azure GovUS cloud environment, specify the value in the following format: `https://<CustomerId>.ods.opinsights.azure.us.` 4. Once all application settings have been entered, click **Save**.
sentinel Ai Vectra Stream https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/ai-vectra-stream.md
Install the Linux agent on a separate Linux instance.
2. Configure the logs to be collected
-Follow the configuration steps below to get Vectra Stream metadata into Microsoft Sentinel. The Log Analytics agent is leveraged to send custom JSON into Azure Monitor, enabling the storage of the metadata into a custom table. For more information, refer to the [Azure Monitor Documentation](https://learn.microsoft.com/azure/azure-monitor/agents/data-sources-json).
+Follow the configuration steps below to get Vectra Stream metadata into Microsoft Sentinel. The Log Analytics agent is leveraged to send custom JSON into Azure Monitor, enabling the storage of the metadata into a custom table. For more information, refer to the [Azure Monitor Documentation](/azure/azure-monitor/agents/data-sources-json).
1. Download the config file for the Log Analytics agent: VectraStream.conf (located in the Connector folder within the Vectra solution: https://aka.ms/sentinel-aivectrastream-conf).
2. Log in to the server where you have installed the Azure Log Analytics agent.
3. Copy VectraStream.conf to the /etc/opt/microsoft/omsagent/**workspace_id**/conf/omsagent.d/ folder.
sentinel Alicloud Using Azure Function https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/alicloud-using-azure-function.md
AliCloud
To integrate with AliCloud (using Azure Function) make sure you have: -- **Microsoft.Web/sites permissions**: Read and write permissions to Azure Functions to create a Function App is required. [See the documentation to learn more about Azure Functions](https://learn.microsoft.com/azure/azure-functions/).
+- **Microsoft.Web/sites permissions**: Read and write permissions to Azure Functions to create a Function App is required. [See the documentation to learn more about Azure Functions](/azure/azure-functions).
- **REST API Credentials/permissions**: **AliCloudAccessKeyId** and **AliCloudAccessKey** are required for making API calls.
To integrate with AliCloud (using Azure Function) make sure you have:
> This connector uses Azure Functions to connect to the Azure Blob Storage API to pull logs into Microsoft Sentinel. This might result in additional costs for data ingestion and for storing data in Azure Blob Storage. Check the [Azure Functions pricing page](https://azure.microsoft.com/pricing/details/functions/) and [Azure Blob Storage pricing page](https://azure.microsoft.com/pricing/details/storage/blobs/) for details.
->**(Optional Step)** Securely store workspace and API authorization key(s) or token(s) in Azure Key Vault. Azure Key Vault provides a secure mechanism to store and retrieve key values. [Follow these instructions](https://learn.microsoft.com/azure/app-service/app-service-key-vault-references) to use Azure Key Vault with an Azure Function App.
+>**(Optional Step)** Securely store workspace and API authorization key(s) or token(s) in Azure Key Vault. Azure Key Vault provides a secure mechanism to store and retrieve key values. [Follow these instructions](/azure/app-service/app-service-key-vault-references) to use Azure Key Vault with an Azure Function App.
> [!NOTE]
Use the following step-by-step instructions to deploy the AliCloud data connecto
**1. Deploy a Function App**
-> **NOTE:** You will need to [prepare VS code](https://learn.microsoft.com/azure/azure-functions/functions-create-first-function-python#prerequisites) for Azure function development.
+> **NOTE:** You will need to [prepare VS code](/azure/azure-functions/functions-create-first-function-python) for Azure function development.
1. Download the [Azure Function App](https://aka.ms/sentinel-AliCloudAPI-functionapp) file. Extract archive to your local development computer. 2. Start VS Code. Choose File in the main menu and select Open Folder.
sentinel Armorblox Using Azure Function https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/armorblox-using-azure-function.md
Armorblox_CL
To integrate with Armorblox (using Azure Function) make sure you have: -- **Microsoft.Web/sites permissions**: Read and write permissions to Azure Functions to create a Function App is required. [See the documentation to learn more about Azure Functions](https://learn.microsoft.com/azure/azure-functions/).
+- **Microsoft.Web/sites permissions**: Read and write permissions to Azure Functions to create a Function App is required. [See the documentation to learn more about Azure Functions](/azure/azure-functions).
- **Armorblox Instance Details**: **ArmorbloxInstanceName** OR **ArmorbloxInstanceURL** is required - **Armorblox API Credentials**: **ArmorbloxAPIToken** is required
To integrate with Armorblox (using Azure Function) make sure you have:
> This connector uses Azure Functions to connect to the Armorblox API to pull its logs into Microsoft Sentinel. This might result in additional data ingestion costs. Check the [Azure Functions pricing page](https://azure.microsoft.com/pricing/details/functions/) for details.
->**(Optional Step)** Securely store workspace and API authorization key(s) or token(s) in Azure Key Vault. Azure Key Vault provides a secure mechanism to store and retrieve key values. [Follow these instructions](https://learn.microsoft.com/azure/app-service/app-service-key-vault-references) to use Azure Key Vault with an Azure Function App.
+>**(Optional Step)** Securely store workspace and API authorization key(s) or token(s) in Azure Key Vault. Azure Key Vault provides a secure mechanism to store and retrieve key values. [Follow these instructions](/azure/app-service/app-service-key-vault-references) to use Azure Key Vault with an Azure Function App.
**STEP 1 - Configuration steps for the Armorblox API**
Use the following step-by-step instructions to deploy the Armorblox data connect
**1. Deploy a Function App**
-> **NOTE:** You will need to [prepare VS code](https://learn.microsoft.com/azure/azure-functions/functions-create-first-function-python#prerequisites) for Azure function development.
+> **NOTE:** You will need to [prepare VS code](/azure/azure-functions/functions-create-first-function-python) for Azure function development.
1. Download the [Azure Function App](https://aka.ms/sentinel-armorblox-functionapp) file. Extract archive to your local development computer. 2. Start VS Code. Choose File in the main menu and select Open Folder.
sentinel Atlassian Confluence Audit Using Azure Function https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/atlassian-confluence-audit-using-azure-function.md
ConfluenceAudit
To integrate with Atlassian Confluence Audit (using Azure Function) make sure you have: -- **Microsoft.Web/sites permissions**: Read and write permissions to Azure Functions to create a Function App is required. [See the documentation to learn more about Azure Functions](https://learn.microsoft.com/azure/azure-functions/).
+- **Microsoft.Web/sites permissions**: Read and write permissions to Azure Functions to create a Function App is required. [See the documentation to learn more about Azure Functions](/azure/azure-functions).
- **REST API Credentials/permissions**: **ConfluenceAccessToken**, **ConfluenceUsername** is required for REST API. [See the documentation to learn more about API](https://developer.atlassian.com/cloud/confluence/rest/api-group-audit/). Check all [requirements and follow the instructions](https://developer.atlassian.com/cloud/confluence/rest/intro/#auth) for obtaining credentials.
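As a rough, non-authoritative sketch of what such a connector polls, the following assumes the `requests` package and placeholder site, username, and token values:

```python
# Illustrative sketch: pull a page of Confluence Cloud audit records.
# <your-site>, the username, and the access token are placeholders.
import requests
from requests.auth import HTTPBasicAuth

resp = requests.get(
    "https://<your-site>.atlassian.net/wiki/rest/api/audit",
    auth=HTTPBasicAuth("<ConfluenceUsername>", "<ConfluenceAccessToken>"),
    params={"limit": 100},
    headers={"Accept": "application/json"},
)
resp.raise_for_status()

for record in resp.json().get("results", []):
    print(record.get("summary"), record.get("creationDate"))
```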
To integrate with Atlassian Confluence Audit (using Azure Function) make sure yo
> This connector uses Azure Functions to connect to the Confluence REST API to pull its logs into Microsoft Sentinel. This might result in additional data ingestion costs. Check the [Azure Functions pricing page](https://azure.microsoft.com/pricing/details/functions/) for details.
->**(Optional Step)** Securely store workspace and API authorization key(s) or token(s) in Azure Key Vault. Azure Key Vault provides a secure mechanism to store and retrieve key values. [Follow these instructions](https://learn.microsoft.com/azure/app-service/app-service-key-vault-references) to use Azure Key Vault with an Azure Function App.
+>**(Optional Step)** Securely store workspace and API authorization key(s) or token(s) in Azure Key Vault. Azure Key Vault provides a secure mechanism to store and retrieve key values. [Follow these instructions](/azure/app-service/app-service-key-vault-references) to use Azure Key Vault with an Azure Function App.
**STEP 1 - Configuration steps for the Confluence API**
Use the following step-by-step instructions to deploy the Confluence Audit data
**1. Deploy a Function App**
-> **NOTE:** You will need to [prepare VS code](https://learn.microsoft.com/azure/azure-functions/functions-create-first-function-python#prerequisites) for Azure function development.
+> **NOTE:** You will need to [prepare VS code](/azure/azure-functions/functions-create-first-function-python) for Azure function development.
1. Download the [Azure Function App](https://aka.ms/sentinel-confluenceauditapi-functionapp) file. Extract archive to your local development computer. 2. Start VS Code. Choose File in the main menu and select Open Folder.
sentinel Atlassian Jira Audit Using Azure Function https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/atlassian-jira-audit-using-azure-function.md
JiraAudit
To integrate with Atlassian Jira Audit (using Azure Function) make sure you have: -- **Microsoft.Web/sites permissions**: Read and write permissions to Azure Functions to create a Function App is required. [See the documentation to learn more about Azure Functions](https://learn.microsoft.com/azure/azure-functions/).
+- **Microsoft.Web/sites permissions**: Read and write permissions to Azure Functions to create a Function App is required. [See the documentation to learn more about Azure Functions](/azure/azure-functions).
- **REST API Credentials/permissions**: **JiraAccessToken**, **JiraUsername** is required for REST API. [See the documentation to learn more about API](https://developer.atlassian.com/cloud/jira/platform/rest/v3/api-group-audit-records/). Check all [requirements and follow the instructions](https://developer.atlassian.com/cloud/jira/platform/rest/v3/intro/#authentication) for obtaining credentials.
To integrate with Atlassian Jira Audit (using Azure Function) make sure you have
> This connector uses Azure Functions to connect to the Jira REST API to pull its logs into Microsoft Sentinel. This might result in additional data ingestion costs. Check the [Azure Functions pricing page](https://azure.microsoft.com/pricing/details/functions/) for details.
->**(Optional Step)** Securely store workspace and API authorization key(s) or token(s) in Azure Key Vault. Azure Key Vault provides a secure mechanism to store and retrieve key values. [Follow these instructions](https://learn.microsoft.com/azure/app-service/app-service-key-vault-references) to use Azure Key Vault with an Azure Function App.
+>**(Optional Step)** Securely store workspace and API authorization key(s) or token(s) in Azure Key Vault. Azure Key Vault provides a secure mechanism to store and retrieve key values. [Follow these instructions](/azure/app-service/app-service-key-vault-references) to use Azure Key Vault with an Azure Function App.
> [!NOTE]
Use the following step-by-step instructions to deploy the Jira Audit data connec
**1. Deploy a Function App**
-> **NOTE:** You will need to [prepare VS code](https://learn.microsoft.com/azure/azure-functions/functions-create-first-function-python#prerequisites) for Azure function development.
+> **NOTE:** You will need to [prepare VS code](/azure/azure-functions/functions-create-first-function-python) for Azure function development.
1. Download the [Azure Function App](https://aka.ms/sentinel-jiraauditapi-functionapp) file. Extract archive to your local development computer. 2. Start VS Code. Choose File in the main menu and select Open Folder.
sentinel Auth0 Access Management Using Azure Function https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/auth0-access-management-using-azure-function.md
Auth0AM_CL
To integrate with Auth0 Access Management (using Azure Function) make sure you have: -- **Microsoft.Web/sites permissions**: Read and write permissions to Azure Functions to create a Function App is required. [See the documentation to learn more about Azure Functions](https://learn.microsoft.com/azure/azure-functions/).
+- **Microsoft.Web/sites permissions**: Read and write permissions to Azure Functions to create a Function App is required. [See the documentation to learn more about Azure Functions](/azure/azure-functions).
- **REST API Credentials/permissions**: **API token** is required. [See the documentation to learn more about API token](https://auth0.com/docs/secure/tokens/access-tokens/get-management-api-access-tokens-for-production)
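For illustration, a minimal sketch of reading log events with a Management API token (placeholder tenant domain and token; the `requests` package is assumed):

```python
# Illustrative sketch: read recent log events from the Auth0 Management API.
# <your-tenant> and the Management API token are placeholders.
import requests

resp = requests.get(
    "https://<your-tenant>.auth0.com/api/v2/logs",
    headers={"Authorization": "Bearer <management-api-token>"},
    params={"per_page": 100, "sort": "date:1"},
)
resp.raise_for_status()

for event in resp.json():
    print(event.get("type"), event.get("date"))
```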
To integrate with Auth0 Access Management (using Azure Function) make sure you h
> This connector uses Azure Functions to connect to the Auth0 Management APIs to pull its logs into Microsoft Sentinel. This might result in additional data ingestion costs. Check the [Azure Functions pricing page](https://azure.microsoft.com/pricing/details/functions/) for details.
->**(Optional Step)** Securely store workspace and API authorization key(s) or token(s) in Azure Key Vault. Azure Key Vault provides a secure mechanism to store and retrieve key values. [Follow these instructions](https://learn.microsoft.com/azure/app-service/app-service-key-vault-references) to use Azure Key Vault with an Azure Function App.
+>**(Optional Step)** Securely store workspace and API authorization key(s) or token(s) in Azure Key Vault. Azure Key Vault provides a secure mechanism to store and retrieve key values. [Follow these instructions](/azure/app-service/app-service-key-vault-references) to use Azure Key Vault with an Azure Function App.
**STEP 1 - Configuration steps for the Auth0 Management API**
Use the following step-by-step instructions to deploy the Auth0 Access Managemen
**1. Deploy a Function App**
-> **NOTE:** You will need to [prepare VS code](https://learn.microsoft.com/azure/azure-functions/functions-create-first-function-python#prerequisites) for Azure function development.
+> **NOTE:** You will need to [prepare VS code](/azure/azure-functions/functions-create-first-function-python) for Azure function development.
1. Download the [Azure Function App](https://aka.ms/sentinel-Auth0AccessManagement-azuredeploy) file. Extract archive to your local development computer. 2. Start VS Code. Choose File in the main menu and select Open Folder.
sentinel Automated Logic Webctrl https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/automated-logic-webctrl.md
Event
1. Install and onboard the Microsoft agent for Windows.
-Learn about [agent setup](https://learn.microsoft.com/services-hub/health/mma-setup) and [windows events onboarding](https://learn.microsoft.com/azure/azure-monitor/agents/data-sources-windows-events).
+Learn about [agent setup](/services-hub/health/mma-setup) and [windows events onboarding](/azure/azure-monitor/agents/data-sources-windows-events).
You can skip this step if you have already installed the Microsoft agent for Windows
sentinel Bitglass Using Azure Function https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/bitglass-using-azure-function.md
BitglassLogs_CL
To integrate with Bitglass (using Azure Functions) make sure you have: -- **Microsoft.Web/sites permissions**: Read and write permissions to Azure Functions to create a Function App is required. [See the documentation to learn more about Azure Functions](https://learn.microsoft.com/azure/azure-functions/).
+- **Microsoft.Web/sites permissions**: Read and write permissions to Azure Functions to create a Function App is required. [See the documentation to learn more about Azure Functions](/azure/azure-functions).
- **REST API Credentials/permissions**: **BitglassToken** and **BitglassServiceURL** are required for making API calls.
To integrate with Bitglass (using Azure Functions) make sure you have:
> This connector uses Azure Functions to connect to the Azure Blob Storage API to pull logs into Microsoft Sentinel. This might result in additional costs for data ingestion and for storing data in Azure Blob Storage. Check the [Azure Functions pricing page](https://azure.microsoft.com/pricing/details/functions/) and [Azure Blob Storage pricing page](https://azure.microsoft.com/pricing/details/storage/blobs/) for details.
->**(Optional Step)** Securely store workspace and API authorization key(s) or token(s) in Azure Key Vault. Azure Key Vault provides a secure mechanism to store and retrieve key values. [Follow these instructions](https://learn.microsoft.com/azure/app-service/app-service-key-vault-references) to use Azure Key Vault with an Azure Function App.
+>**(Optional Step)** Securely store workspace and API authorization key(s) or token(s) in Azure Key Vault. Azure Key Vault provides a secure mechanism to store and retrieve key values. [Follow these instructions](/azure/app-service/app-service-key-vault-references) to use Azure Key Vault with an Azure Function App.
> [!NOTE]
Use the following step-by-step instructions to deploy the Bitglass data connecto
**1. Deploy a Function App**
-> **NOTE:** You will need to [prepare VS code](https://learn.microsoft.com/azure/azure-functions/functions-create-first-function-python#prerequisites) for Azure function development.
+> **NOTE:** You will need to [prepare VS code](/azure/azure-functions/functions-create-first-function-python) for Azure function development.
1. Download the [Azure Function App](https://aka.ms/sentinel-bitglass-functionapp) file. Extract archive to your local development computer. 2. Start VS Code. Choose File in the main menu and select Open Folder.
sentinel Box Using Azure Function https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/box-using-azure-function.md
BoxEvents
To integrate with Box (using Azure Functions) make sure you have: -- **Microsoft.Web/sites permissions**: Read and write permissions to Azure Functions to create a Function App is required. [See the documentation to learn more about Azure Functions](https://learn.microsoft.com/azure/azure-functions/).
+- **Microsoft.Web/sites permissions**: Read and write permissions to Azure Functions to create a Function App is required. [See the documentation to learn more about Azure Functions](/azure/azure-functions).
- **Box API Credentials**: Box config JSON file is required for Box REST API JWT authentication. [See the documentation to learn more about JWT authentication](https://developer.box.com/guides/authentication/jwt/).
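As a hedged sketch of how the JWT config file is used, assuming the `boxsdk` package and a placeholder path to the config JSON:

```python
# Illustrative only: authenticate with the Box JWT config JSON mentioned above.
# The file path is a placeholder; the connector itself reads enterprise event data.
from boxsdk import JWTAuth, Client

auth = JWTAuth.from_settings_file("/path/to/box_config.json")
client = Client(auth)

# Sanity check: the JWT app authenticates as its service account.
service_account = client.user().get()
print(service_account.login)
```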
To integrate with Box (using Azure Functions) make sure you have:
> This connector uses Azure Functions to connect to the Box REST API to pull logs into Microsoft Sentinel. This might result in additional data ingestion costs. Check the [Azure Functions pricing page](https://azure.microsoft.com/pricing/details/functions/) for details.
->**(Optional Step)** Securely store workspace and API authorization key(s) or token(s) in Azure Key Vault. Azure Key Vault provides a secure mechanism to store and retrieve key values. [Follow these instructions](https://learn.microsoft.com/azure/app-service/app-service-key-vault-references) to use Azure Key Vault with an Azure Function App.
+>**(Optional Step)** Securely store workspace and API authorization key(s) or token(s) in Azure Key Vault. Azure Key Vault provides a secure mechanism to store and retrieve key values. [Follow these instructions](/azure/app-service/app-service-key-vault-references) to use Azure Key Vault with an Azure Function App.
> [!NOTE]
Use the following step-by-step instructions to deploy the Box data connector man
**1. Deploy a Function App**
-> **NOTE:** You will need to [prepare VS code](https://learn.microsoft.com/azure/azure-functions/create-first-function-vs-code-python) for Azure function development.
+> **NOTE:** You will need to [prepare VS code](/azure/azure-functions/create-first-function-vs-code-python) for Azure function development.
1. Download the [Azure Function App](https://aka.ms/sentinel-BoxDataConnector-functionapp) file. Extract archive to your local development computer. 2. Start VS Code. Choose File in the main menu and select Open Folder.
sentinel Cisco Asa Ftd Via Ama https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/cisco-asa-ftd-via-ama.md
CommonSecurityLog
To integrate with Cisco ASA/FTD via AMA (Preview) make sure you have: -- ****: To collect data from non-Azure VMs, they must have Azure Arc installed and enabled. [Learn more](https://learn.microsoft.com/azure/azure-monitor/agents/azure-monitor-agent-install?tabs=ARMAgentPowerShell,PowerShellWindows,PowerShellWindowsArc,CLIWindows,CLIWindowsArc)
+- To collect data from non-Azure VMs, they must have Azure Arc installed and enabled. [Learn more](/azure/azure-monitor/agents/azure-monitor-agent-install?tabs=ARMAgentPowerShell,PowerShellWindows,PowerShellWindowsArc,CLIWindows,CLIWindowsArc)
## Vendor installation instructions
sentinel Cisco Duo Security Using Azure Function https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/cisco-duo-security-using-azure-function.md
CiscoDuo_CL
To integrate with Cisco Duo Security (using Azure Functions) make sure you have: -- **Microsoft.Web/sites permissions**: Read and write permissions to Azure Functions to create a Function App is required. [See the documentation to learn more about Azure Functions](https://learn.microsoft.com/azure/azure-functions/).
+- **Microsoft.Web/sites permissions**: Read and write permissions to Azure Functions to create a Function App is required. [See the documentation to learn more about Azure Functions](/azure/azure-functions).
- **Cisco Duo API credentials**: Cisco Duo API credentials with permission *Grant read log* is required for Cisco Duo API. See the [documentation](https://duo.com/docs/adminapi#first-steps) to learn more about creating Cisco Duo API credentials.
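For illustration, a minimal sketch assuming the `duo_client` Python package and placeholder Admin API credentials with the *Grant read log* permission (the connector's own implementation may differ):

```python
# Illustrative sketch: read Duo authentication logs with Admin API credentials.
# The integration key, secret key, and API hostname are placeholders.
import duo_client

admin_api = duo_client.Admin(
    ikey="<integration-key>",
    skey="<secret-key>",
    host="api-XXXXXXXX.duosecurity.com",
)

for entry in admin_api.get_authentication_log():
    print(entry.get("timestamp"), entry.get("result"))
```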
To integrate with Cisco Duo Security (using Azure Functions) make sure you have:
> This connector uses Azure Functions to connect to the Cisco Duo API to pull logs into Microsoft Sentinel. This might result in additional data ingestion costs. Check the [Azure Functions pricing page](https://azure.microsoft.com/pricing/details/functions/) for details.
->**(Optional Step)** Securely store workspace and API authorization key(s) or token(s) in Azure Key Vault. Azure Key Vault provides a secure mechanism to store and retrieve key values. [Follow these instructions](https://learn.microsoft.com/azure/app-service/app-service-key-vault-references) to use Azure Key Vault with an Azure Function App.
+>**(Optional Step)** Securely store workspace and API authorization key(s) or token(s) in Azure Key Vault. Azure Key Vault provides a secure mechanism to store and retrieve key values. [Follow these instructions](/azure/app-service/app-service-key-vault-references) to use Azure Key Vault with an Azure Function App.
> [!NOTE]
Use the following step-by-step instructions to deploy the data connector manuall
**1. Deploy a Function App**
-> **NOTE:** You will need to [prepare VS code](https://learn.microsoft.com/azure/azure-functions/create-first-function-vs-code-python) for Azure function development.
+> **NOTE:** You will need to [prepare VS code](/azure/azure-functions/create-first-function-vs-code-python) for Azure function development.
1. Download the [Azure Function App](https://aka.ms/sentinel-CiscoDuoSecurity-functionapp) file. Extract archive to your local development computer. 2. Start VS Code. Choose File in the main menu and select Open Folder.
sentinel Cisco Meraki https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/cisco-meraki.md
Typically, you should install the agent on a different computer from the one on
2. Configure the logs to be collected
-Follow the configuration steps below to get Cisco Meraki device logs into Microsoft Sentinel. Refer to the [Azure Monitor Documentation](https://learn.microsoft.com/azure/azure-monitor/agents/data-sources-json) for more details on these steps.
+Follow the configuration steps below to get Cisco Meraki device logs into Microsoft Sentinel. Refer to the [Azure Monitor Documentation](/azure/azure-monitor/agents/data-sources-json) for more details on these steps.
Because the OMS agent has issues parsing Cisco Meraki logs with the default settings, we advise capturing the logs in the custom table **meraki_CL** using the instructions below. 1. Log in to the server where you have installed the OMS agent.
sentinel Cisco Secure Endpoint Amp Using Azure Function https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/cisco-secure-endpoint-amp-using-azure-function.md
CiscoSecureEndpoint_CL
To integrate with Cisco Secure Endpoint (AMP) (using Azure Functions) make sure you have: -- **Microsoft.Web/sites permissions**: Read and write permissions to Azure Functions to create a Function App is required. [See the documentation to learn more about Azure Functions](https://learn.microsoft.com/azure/azure-functions/).
+- **Microsoft.Web/sites permissions**: Read and write permissions to Azure Functions to create a Function App is required. [See the documentation to learn more about Azure Functions](/azure/azure-functions).
- **Cisco Secure Endpoint API credentials**: Cisco Secure Endpoint Client ID and API Key are required. [See the documentation to learn more about Cisco Secure Endpoint API](https://api-docs.amp.cisco.com/api_resources?api_host=api.amp.cisco.com&api_version=v1). [API domain](https://api-docs.amp.cisco.com) must be provided as well.
To integrate with Cisco Secure Endpoint (AMP) (using Azure Functions) make sure
> This connector uses Azure Functions to connect to the Cisco Secure Endpoint API to pull logs into Microsoft Sentinel. This might result in additional data ingestion costs. Check the [Azure Functions pricing page](https://azure.microsoft.com/pricing/details/functions/) for details.
->**(Optional Step)** Securely store workspace and API authorization key(s) or token(s) in Azure Key Vault. Azure Key Vault provides a secure mechanism to store and retrieve key values. [Follow these instructions](https://learn.microsoft.com/azure/app-service/app-service-key-vault-references) to use Azure Key Vault with an Azure Functions App.
+>**(Optional Step)** Securely store workspace and API authorization key(s) or token(s) in Azure Key Vault. Azure Key Vault provides a secure mechanism to store and retrieve key values. [Follow these instructions](/azure/app-service/app-service-key-vault-references) to use Azure Key Vault with an Azure Functions App.
> [!NOTE]
Use the following step-by-step instructions to deploy the data connector manuall
**1. Deploy a Function App**
-> **NOTE:** You will need to [prepare VS code](https://learn.microsoft.com/azure/azure-functions/create-first-function-vs-code-python) for Azure function development.
+> **NOTE:** You will need to [prepare VS code](/azure/azure-functions/create-first-function-vs-code-python) for Azure function development.
1. Download the [Azure Function App](https://aka.ms/sentinel-ciscosecureendpoint-functionapp) file. Extract archive to your local development computer. 2. Start VS Code. Choose File in the main menu and select Open Folder.
sentinel Cisco Umbrella Using Azure Function https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/cisco-umbrella-using-azure-function.md
Cisco_Umbrella
To integrate with Cisco Umbrella (using Azure Function) make sure you have: -- **Microsoft.Web/sites permissions**: Read and write permissions to Azure Functions to create a Function App is required. [See the documentation to learn more about Azure Functions](https://learn.microsoft.com/azure/azure-functions/).
+- **Microsoft.Web/sites permissions**: Read and write permissions to Azure Functions to create a Function App is required. [See the documentation to learn more about Azure Functions](/azure/azure-functions).
- **Amazon S3 REST API Credentials/permissions**: **AWS Access Key Id**, **AWS Secret Access Key**, **AWS S3 Bucket Name** are required for Amazon S3 REST API.
To integrate with Cisco Umbrella (using Azure Function) make sure you have:
> This connector has been updated to support [cisco umbrella version 5 and version 6.](https://docs.umbrella.com/deployment-umbrella/docs/log-formats-and-versioning)
->**(Optional Step)** Securely store workspace and API authorization key(s) or token(s) in Azure Key Vault. Azure Key Vault provides a secure mechanism to store and retrieve key values. [Follow these instructions](https://learn.microsoft.com/azure/app-service/app-service-key-vault-references) to use Azure Key Vault with an Azure Function App.
+>**(Optional Step)** Securely store workspace and API authorization key(s) or token(s) in Azure Key Vault. Azure Key Vault provides a secure mechanism to store and retrieve key values. [Follow these instructions](/azure/app-service/app-service-key-vault-references) to use Azure Key Vault with an Azure Function App.
> [!NOTE]
Use the following step-by-step instructions to deploy the Cisco Umbrella data co
**1. Deploy a Function App**
-> **NOTE:** You will need to [prepare VS code](https://learn.microsoft.com/azure/azure-functions/create-first-function-vs-code-python) for Azure function development.
+> **NOTE:** You will need to [prepare VS code](/azure/azure-functions/create-first-function-vs-code-python) for Azure function development.
1. Download the [Azure Function App](https://aka.ms/sentinel-CiscoUmbrellaConn-functionapp) file. Extract archive to your local development computer. 2. Start VS Code. Choose File in the main menu and select Open Folder.
sentinel Cloudflare Using Azure Function https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/cloudflare-using-azure-function.md
Cloudflare_CL
To integrate with Cloudflare (Preview) (using Azure Functions) make sure you have: -- **Microsoft.Web/sites permissions**: Read and write permissions to Azure Functions to create a Function App is required. [See the documentation to learn more about Azure Functions](https://learn.microsoft.com/azure/azure-functions/).-- **Azure Blob Storage connection string and container name**: Azure Blob Storage connection string and container name where the logs are pushed to by Cloudflare Logpush. [See the documentation to learn more about creating Azure Blob Storage container.](https://learn.microsoft.com/azure/storage/blobs/storage-quickstart-blobs-portal)
+- **Microsoft.Web/sites permissions**: Read and write permissions to Azure Functions to create a Function App is required. [See the documentation to learn more about Azure Functions](/azure/azure-functions).
+- **Azure Blob Storage connection string and container name**: Azure Blob Storage connection string and container name where the logs are pushed to by Cloudflare Logpush. [See the documentation to learn more about creating Azure Blob Storage container.](/azure/storage/blobs/storage-quickstart-blobs-portal)
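As a rough sketch of reading Logpush output from that container, assuming the `azure-storage-blob` package and placeholder connection string and container name:

```python
# Illustrative sketch: list and read Cloudflare Logpush output from Blob Storage.
# The connection string and container name are placeholders.
from azure.storage.blob import BlobServiceClient

service = BlobServiceClient.from_connection_string("<storage-connection-string>")
container = service.get_container_client("<logpush-container-name>")

for blob in container.list_blobs():
    data = container.download_blob(blob.name).readall()  # newline-delimited JSON events
    print(blob.name, len(data))
```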
## Vendor installation instructions
To integrate with Cloudflare (Preview) (using Azure Functions) make sure you hav
> This connector uses Azure Functions to connect to the Azure Blob Storage API to pull logs into Microsoft Sentinel. This might result in additional costs for data ingestion and for storing data in Azure Blob Storage. Check the [Azure Functions pricing page](https://azure.microsoft.com/pricing/details/functions/) and [Azure Blob Storage pricing page](https://azure.microsoft.com/pricing/details/storage/blobs/) for details.
->**(Optional Step)** Securely store workspace and API authorization key(s) or token(s) in Azure Key Vault. Azure Key Vault provides a secure mechanism to store and retrieve key values. [Follow these instructions](https://learn.microsoft.com/azure/app-service/app-service-key-vault-references) to use Azure Key Vault with an Azure Function App.
+>**(Optional Step)** Securely store workspace and API authorization key(s) or token(s) in Azure Key Vault. Azure Key Vault provides a secure mechanism to store and retrieve key values. [Follow these instructions](/azure/app-service/app-service-key-vault-references) to use Azure Key Vault with an Azure Function App.
> [!NOTE]
Use the following step-by-step instructions to deploy the Cloudflare data connec
**1. Deploy a Function App**
-> **NOTE:** You will need to [prepare VS code](https://learn.microsoft.com/azure/azure-functions/create-first-function-vs-code-python) for Azure function development.
+> **NOTE:** You will need to [prepare VS code](/azure/azure-functions/create-first-function-vs-code-python) for Azure function development.
1. Download the [Azure Function App](https://aka.ms/sentinel-CloudflareDataConnector-functionapp) file. Extract archive to your local development computer. 2. Start VS Code. Choose File in the main menu and select Open Folder.
sentinel Cohesity Using Azure Function https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/cohesity-using-azure-function.md
Cohesity_CL
To integrate with Cohesity (using Azure Functions) make sure you have: -- **Microsoft.Web/sites permissions**: Read and write permissions to Azure Functions to create a Function App is required. [See the documentation to learn more about Azure Functions](https://learn.microsoft.com/azure/azure-functions/).
+- **Microsoft.Web/sites permissions**: Read and write permissions to Azure Functions to create a Function App is required. [See the documentation to learn more about Azure Functions](/azure/azure-functions).
- **Azure Blob Storage connection string and container name**: Azure Blob Storage connection string and container name
To integrate with Cohesity (using Azure Functions) make sure you have:
> This connector uses Azure Functions that connect to the Azure Blob Storage and KeyVault. This might result in additional costs. Check the [Azure Functions pricing page](https://azure.microsoft.com/pricing/details/functions/), [Azure Blob Storage pricing page](https://azure.microsoft.com/pricing/details/storage/blobs/) and [Azure KeyVault pricing page](https://azure.microsoft.com/pricing/details/key-vault/) for details.
->**(Optional Step)** Securely store workspace and API authorization key(s) or token(s) in Azure Key Vault. Azure Key Vault provides a secure mechanism to store and retrieve key values. [Follow these instructions](https://learn.microsoft.com/azure/app-service/app-service-key-vault-references) to use Azure Key Vault with an Azure Function App.
+>**(Optional Step)** Securely store workspace and API authorization key(s) or token(s) in Azure Key Vault. Azure Key Vault provides a secure mechanism to store and retrieve key values. [Follow these instructions](/azure/app-service/app-service-key-vault-references) to use Azure Key Vault with an Azure Function App.
**STEP 1 - Get a Cohesity DataHawk API key (see troubleshooting [instruction 1](https://github.com/Azure/Azure-Sentinel/tree/master/Solutions/CohesitySecurity/Data%20Connectors/Helios2Sentinel/IncidentProducer))**
-**STEP 2 - Register Azure app ([link](https://portal.azure.com/#view/Microsoft_AAD_IAM/ActiveDirectoryMenuBlade/~/RegisteredApps)) and save Application (client) ID, Directory (tenant) ID, and Secret Value ([instructions](https://learn.microsoft.com/azure/healthcare-apis/register-application)). Grant it Azure Storage (user_impersonation) permission. Also, assign the 'Microsoft Sentinel Contributor' role to the application in the appropriate subscription.**
+**STEP 2 - Register Azure app ([link](https://portal.azure.com/#view/Microsoft_AAD_IAM/ActiveDirectoryMenuBlade/~/RegisteredApps)) and save Application (client) ID, Directory (tenant) ID, and Secret Value ([instructions](/azure/healthcare-apis/register-application)). Grant it Azure Storage (user_impersonation) permission. Also, assign the 'Microsoft Sentinel Contributor' role to the application in the appropriate subscription.**
**STEP 3 - Deploy the connector and the associated Azure Functions**.
sentinel Corelight https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/corelight.md
Install the agent on the Server where the Corelight logs are generated.
2. Configure the logs to be collected
-Follow the configuration steps below to get Corelight logs into Microsoft Sentinel. This configuration enriches events generated by Corelight module to provide visibility on log source information for Corelight logs. Refer to the [Azure Monitor Documentation](https://learn.microsoft.com/azure/azure-monitor/agents/data-sources-json) for more details on these steps.
+Follow the configuration steps below to get Corelight logs into Microsoft Sentinel. This configuration enriches events generated by Corelight module to provide visibility on log source information for Corelight logs. Refer to the [Azure Monitor Documentation](/azure/azure-monitor/agents/data-sources-json) for more details on these steps.
1. Download config file: [corelight.conf](https://aka.ms/sentinel-Corelight-conf/). 2. Login to the server where you have installed Azure Log Analytics agent. 3. Copy corelight.conf to the /etc/opt/microsoft/omsagent/**workspace_id**/conf/omsagent.d/ folder.
sentinel Crowdstrike Falcon Data Replicator Using Azure Function https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/crowdstrike-falcon-data-replicator-using-azure-function.md
CrowdstrikeReplicator
To integrate with Crowdstrike Falcon Data Replicator (using Azure Function) make sure you have: -- **Microsoft.Web/sites permissions**: Read and write permissions to Azure Functions to create a Function App is required. [See the documentation to learn more about Azure Functions](https://learn.microsoft.com/azure/azure-functions/).
+- **Microsoft.Web/sites permissions**: Read and write permissions to Azure Functions to create a Function App is required. [See the documentation to learn more about Azure Functions](/azure/azure-functions).
- **SQS and AWS S3 account credentials/permissions**: **AWS_SECRET**, **AWS_REGION_NAME**, **AWS_KEY**, **QUEUE_URL** is required. [See the documentation to learn more about data pulling](https://www.crowdstrike.com/blog/tech-center/intro-to-falcon-data-replicator/). To start, contact CrowdStrike support. At your request they will create a CrowdStrike managed Amazon Web Services (AWS) S3 bucket for short term storage purposes as well as a SQS (simple queue service) account for monitoring changes to the S3 bucket.
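For illustration only, a minimal sketch of the SQS-to-S3 pull pattern, assuming `boto3` and the placeholder settings named above; the notification shape shown in the comments is an assumption, not taken from this article:

```python
# Illustrative sketch: poll the SQS queue, then fetch the referenced S3 objects.
import json
import boto3

session = boto3.Session(
    aws_access_key_id="<AWS_KEY>",
    aws_secret_access_key="<AWS_SECRET>",
    region_name="<AWS_REGION_NAME>",
)
sqs = session.client("sqs")
s3 = session.client("s3")

messages = sqs.receive_message(QueueUrl="<QUEUE_URL>", MaxNumberOfMessages=1, WaitTimeSeconds=10)
for message in messages.get("Messages", []):
    body = json.loads(message["Body"])  # assumed shape: {"bucket": ..., "files": [{"path": ...}]}
    for item in body.get("files", []):
        obj = s3.get_object(Bucket=body["bucket"], Key=item["path"])
        raw = obj["Body"].read()         # compressed batch of events
        # ... decompress and forward to the Log Analytics workspace ...
    sqs.delete_message(QueueUrl="<QUEUE_URL>", ReceiptHandle=message["ReceiptHandle"])
```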
To integrate with Crowdstrike Falcon Data Replicator (using Azure Function) make
> This connector uses Azure Functions to connect to the S3 bucket to pull logs into Microsoft Sentinel. This might result in additional data ingestion costs. Check the [Azure Functions pricing page](https://azure.microsoft.com/pricing/details/functions/) for details.
->**(Optional Step)** Securely store workspace and API authorization key(s) or token(s) in Azure Key Vault. Azure Key Vault provides a secure mechanism to store and retrieve key values. [Follow these instructions](https://learn.microsoft.com/azure/app-service/app-service-key-vault-references) to use Azure Key Vault with an Azure Function App.
+>**(Optional Step)** Securely store workspace and API authorization key(s) or token(s) in Azure Key Vault. Azure Key Vault provides a secure mechanism to store and retrieve key values. [Follow these instructions](/azure/app-service/app-service-key-vault-references) to use Azure Key Vault with an Azure Function App.
> [!NOTE]
Use the following step-by-step instructions to deploy the Crowdstrike Falcon Dat
**1. Deploy a Function App**
-> **NOTE:** You will need to [prepare VS code](https://learn.microsoft.com/azure/azure-functions/functions-create-first-function-python#prerequisites) for Azure function development.
+> **NOTE:** You will need to [prepare VS code](/azure/azure-functions/functions-create-first-function-python) for Azure function development.
1. Download the [Azure Function App](https://aka.ms/sentinel-CrowdstrikeReplicator-functionapp) file. Extract archive to your local development computer. 2. Start VS Code. Choose File in the main menu and select Open Folder.
sentinel Cyberarkepm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/cyberarkepm.md
CyberArkEPM
To integrate with CyberArkEPM make sure you have: -- **Microsoft.Web/sites permissions**: Read and write permissions to Azure Functions to create a Function App is required. [See the documentation to learn more about Azure Functions](https://learn.microsoft.com/azure/azure-functions/).
+- **Microsoft.Web/sites permissions**: Read and write permissions to Azure Functions to create a Function App is required. [See the documentation to learn more about Azure Functions](/azure/azure-functions).
- **REST API Credentials/permissions**: **CyberArkEPMUsername**, **CyberArkEPMPassword** and **CyberArkEPMServerURL** are required for making API calls.
To integrate with CyberArkEPM make sure you have:
> This connector uses Azure Functions to connect to the Azure Blob Storage API to pull logs into Azure Sentinel. This might result in additional costs for data ingestion and for storing data in Azure Blob Storage. Check the [Azure Functions pricing page](https://azure.microsoft.com/pricing/details/functions/) and [Azure Blob Storage pricing page](https://azure.microsoft.com/pricing/details/storage/blobs/) for details.
->**(Optional Step)** Securely store workspace and API authorization key(s) or token(s) in Azure Key Vault. Azure Key Vault provides a secure mechanism to store and retrieve key values. [Follow these instructions](https://learn.microsoft.com/azure/app-service/app-service-key-vault-references) to use Azure Key Vault with an Azure Function App.
+>**(Optional Step)** Securely store workspace and API authorization key(s) or token(s) in Azure Key Vault. Azure Key Vault provides a secure mechanism to store and retrieve key values. [Follow these instructions](/azure/app-service/app-service-key-vault-references) to use Azure Key Vault with an Azure Function App.
> [!NOTE]
Use the following step-by-step instructions to deploy the CyberArk EPM data conn
**1. Deploy a Function App**
-> **NOTE:** You will need to [prepare VS code](https://learn.microsoft.com/azure/azure-functions/functions-create-first-function-python#prerequisites) for Azure function development.
+> **NOTE:** You will need to [prepare VS code](/azure/azure-functions/functions-create-first-function-python) for Azure function development.
1. Download the [Azure Function App](https://aka.ms/sentinel-CyberArkEPMAPI-functionapp) file. Extract archive to your local development computer. 2. Start VS Code. Choose File in the main menu and select Open Folder.
sentinel Cybersixgill Actionable Alerts Using Azure Function https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/cybersixgill-actionable-alerts-using-azure-function.md
CyberSixgill_Alerts
To integrate with Cybersixgill Actionable Alerts (using Azure Function) make sure you have: -- **Microsoft.Web/sites permissions**: Read and write permissions to Azure Functions to create a Function App is required. [See the documentation to learn more about Azure Functions](https://learn.microsoft.com/azure/azure-functions/).
+- **Microsoft.Web/sites permissions**: Read and write permissions to Azure Functions to create a Function App is required. [See the documentation to learn more about Azure Functions](/azure/azure-functions).
- **REST API Credentials/permissions**: **Client_ID** and **Client_Secret** are required for making API calls.
To integrate with Cybersixgill Actionable Alerts (using Azure Function) make sur
> This connector uses Azure Functions to connect to the Cybersixgill API to pull Alerts into Microsoft Sentinel. This might result in additional costs for data ingestion and for storing data in Azure Blob Storage. Check the [Azure Functions pricing page](https://azure.microsoft.com/pricing/details/functions/) and [Azure Blob Storage pricing page](https://azure.microsoft.com/pricing/details/storage/blobs/) for details.
->**(Optional Step)** Securely store workspace and API authorization key(s) or token(s) in Azure Key Vault. Azure Key Vault provides a secure mechanism to store and retrieve key values. [Follow these instructions](https://learn.microsoft.com/azure/app-service/app-service-key-vault-references) to use Azure Key Vault with an Azure Function App.
+>**(Optional Step)** Securely store workspace and API authorization key(s) or token(s) in Azure Key Vault. Azure Key Vault provides a secure mechanism to store and retrieve key values. [Follow these instructions](/azure/app-service/app-service-key-vault-references) to use Azure Key Vault with an Azure Function App.
Use the following step-by-step instructions to deploy the Cybersixgill Actionabl
**1. Deploy a Function App**
-> NOTE:You will need to [prepare VS code](https://learn.microsoft.com/azure/azure-functions/functions-create-first-function-python#prerequisites) for Azure function development.
+> NOTE:You will need to [prepare VS code](/azure/azure-functions/functions-create-first-function-python) for Azure function development.
1. Download the [Azure Function App](https://github.com/syed-loginsoft/Azure-Sentinel/blob/cybersixgill/Solutions/Cybersixgill-Actionable-Alerts/Data%20Connectors/CybersixgillAlerts.zip?raw=true) file. Extract archive to your local development computer. 2. Start VS Code. Choose File in the main menu and select Open Folder.
sentinel Darktrace Connector For Microsoft Sentinel Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/darktrace-connector-for-microsoft-sentinel-rest-api.md
darktrace_model_alerts_CL
To integrate with Darktrace Connector for Microsoft Sentinel REST API make sure you have: - **Darktrace Prerequisites**: To use this Data Connector a Darktrace master running v5.2+ is required.
- Data is sent to the [Azure Monitor HTTP Data Collector API](https://learn.microsoft.com/azure/azure-monitor/logs/data-collector-api) over HTTPs from Darktrace masters, therefore outbound connectivity from the Darktrace master to Microsoft Sentinel REST API is required.
+ Data is sent to the [Azure Monitor HTTP Data Collector API](/azure/azure-monitor/logs/data-collector-api) over HTTPS from Darktrace masters; therefore, outbound connectivity from the Darktrace master to the Microsoft Sentinel REST API is required.
- **Filter Darktrace Data**: During configuration it is possible to set up additional filtering on the Darktrace System Configuration page to constrain the amount or types of data sent. - **Try the Darktrace Sentinel Solution**: You can get the most out of this connector by installing the Darktrace Solution for Microsoft Sentinel. This will provide workbooks to visualise alert data and analytics rules to automatically create alerts and incidents from Darktrace Model Breaches and AI Analyst incidents.
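As a hedged illustration of the Azure Monitor HTTP Data Collector API call pattern referenced above (placeholder workspace ID, shared key, and log type; the `requests` package is assumed):

```python
# Illustrative sketch: post a custom log record to the HTTP Data Collector API.
import base64
import hashlib
import hmac
import json
from datetime import datetime, timezone

import requests

workspace_id = "<workspace-id>"
shared_key = "<workspace-shared-key>"
log_type = "darktrace_model_alerts"  # surfaces as the darktrace_model_alerts_CL table

body = json.dumps([{"example_field": "example_value"}])
rfc1123_date = datetime.now(timezone.utc).strftime("%a, %d %b %Y %H:%M:%S GMT")

# Build the SharedKey authorization signature over the request metadata.
string_to_sign = f"POST\n{len(body)}\napplication/json\nx-ms-date:{rfc1123_date}\n/api/logs"
signature = base64.b64encode(
    hmac.new(base64.b64decode(shared_key), string_to_sign.encode("utf-8"), hashlib.sha256).digest()
).decode()

resp = requests.post(
    f"https://{workspace_id}.ods.opinsights.azure.com/api/logs?api-version=2016-04-01",
    data=body,
    headers={
        "Content-Type": "application/json",
        "Authorization": f"SharedKey {workspace_id}:{signature}",
        "Log-Type": log_type,
        "x-ms-date": rfc1123_date,
    },
)
resp.raise_for_status()
```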
sentinel Digital Shadows Searchlight Using Azure Function https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/digital-shadows-searchlight-using-azure-function.md
DigitalShadows_CL
To integrate with Digital Shadows Searchlight (using Azure Functions) make sure you have: -- **Microsoft.Web/sites permissions**: Read and write permissions to Azure Functions to create a Function App is required. [See the documentation to learn more about Azure Functions](https://learn.microsoft.com/azure/azure-functions/).
+- **Microsoft.Web/sites permissions**: Read and write permissions to Azure Functions to create a Function App is required. [See the documentation to learn more about Azure Functions](/azure/azure-functions).
- **REST API Credentials/permissions**: **Digital Shadows account ID, secret, and key** are required. See the documentation to learn more about the API at `https://portal-digitalshadows.com/learn/searchlight-api/overview/description`.
To integrate with Digital Shadows Searchlight (using Azure Functions) make sure
> This connector uses Azure Functions to connect to a 'Digital Shadows Searchlight' to pull its logs into Microsoft Sentinel. This might result in additional data ingestion costs. Check the [Azure Functions pricing page](https://azure.microsoft.com/pricing/details/functions/) for details.
->**(Optional Step)** Securely store workspace and API authorization key(s) or token(s) in Azure Key Vault. Azure Key Vault provides a secure mechanism to store and retrieve key values. [Follow these instructions](https://learn.microsoft.com/azure/app-service/app-service-key-vault-references) to use Azure Key Vault with an Azure Function App.
+>**(Optional Step)** Securely store workspace and API authorization key(s) or token(s) in Azure Key Vault. Azure Key Vault provides a secure mechanism to store and retrieve key values. [Follow these instructions](/azure/app-service/app-service-key-vault-references) to use Azure Key Vault with an Azure Function App.
**STEP 1 - Configuration steps for the 'Digital Shadows Searchlight' API**
Use this method for automated deployment of the 'Digital Shadows Searchlight' co
[![Deploy To Azure](https://aka.ms/deploytoazurebutton)](https://aka.ms/sentinel-Digitalshadows-azuredeploy) 2. Select the preferred **Subscription**, **Resource Group** and **Location**. 3. Enter the **Workspace ID**, **Workspace Key**, **API Username**, **API Password**, 'and/or Other required fields'.
->Note: If using Azure Key Vault secrets for any of the values above, use the`@Microsoft.KeyVault(SecretUri={Security Identifier})`schema in place of the string values. Refer to [Key Vault references documentation](https://learn.microsoft.com/azure/app-service/app-service-key-vault-references) for further details.
+>Note: If using Azure Key Vault secrets for any of the values above, use the`@Microsoft.KeyVault(SecretUri={Security Identifier})`schema in place of the string values. Refer to [Key Vault references documentation](/azure/app-service/app-service-key-vault-references) for further details.
4. Mark the checkbox labeled **I agree to the terms and conditions stated above**. 5. Click **Purchase** to deploy.
Use this method for automated deployment of the 'Digital Shadows Searchlight' co
logAnalyticsUri (optional) (add any other settings required by the Function App) Set the `uri` value to: `<add uri value>`
->Note: If using Azure Key Vault secrets for any of the values above, use the`@Microsoft.KeyVault(SecretUri={Security Identifier})`schema in place of the string values. Refer to [Azure Key Vault references documentation](https://learn.microsoft.com/azure/app-service/app-service-key-vault-references) for further details.
+>Note: If using Azure Key Vault secrets for any of the values above, use the`@Microsoft.KeyVault(SecretUri={Security Identifier})`schema in place of the string values. Refer to [Azure Key Vault references documentation](/azure/app-service/app-service-key-vault-references) for further details.
- Use logAnalyticsUri to override the log analytics API endpoint for dedicated cloud. For example, for public cloud, leave the value empty; for Azure GovUS cloud environment, specify the value in the following format: https://<CustomerId>.ods.opinsights.azure.us. 4. Once all application settings have been entered, click **Save**.
sentinel Flare https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/flare.md
As an organization administrator, authenticate on [Flare](https://app.flare.syst
Click on 'Create a new alert channel' and select 'Microsoft Sentinel'. Enter your Shared Key and WorkspaceID. Save the Alert Channel.
- For more help and details, see our [Azure configuration documentation](https://learn.microsoft.com/azure/sentinel/connect-data-sources).
+ For more help and details, see our [Azure configuration documentation](/azure/sentinel/connect-data-sources).
{0}
sentinel Github Using Webhooks Using Azure Function https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/github-using-webhooks-using-azure-function.md
Use the following step-by-step instructions to deploy the GitHub webhook data co
**1. Deploy a Function App**
-> **NOTE:** You will need to [prepare VS code](/azure/azure-functions/functions-create-first-function-python#prerequisites) for Azure function development.
+> **NOTE:** You will need to [prepare VS code](/azure/azure-functions/functions-create-first-function-python) for Azure function development.
1. Download the [Azure Function App](https://aka.ms/sentinel-GitHubWebhookAPI-functionapp) file. Extract archive to your local development computer. 2. Start VS Code. Choose File in the main menu and select Open Folder.
sentinel Google Apigeex Using Azure Function https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/google-apigeex-using-azure-function.md
ApigeeX_CL
To integrate with Google ApigeeX (using Azure Functions) make sure you have: -- **Microsoft.Web/sites permissions**: Read and write permissions to Azure Functions to create a Function App is required. [See the documentation to learn more about Azure Functions](https://learn.microsoft.com/azure/azure-functions/).
+- **Microsoft.Web/sites permissions**: Read and write permissions to Azure Functions are required to create a Function App. [See the documentation to learn more about Azure Functions](/azure/azure-functions).
- **GCP service account**: A GCP service account with permissions to read logs is required for the GCP Logging API. A JSON file with the service account key is also required. See the documentation to learn more about [required permissions](https://cloud.google.com/iam/docs/audit-logging#audit_log_permissions), [creating service account](https://cloud.google.com/iam/docs/creating-managing-service-accounts) and [creating service account key](https://cloud.google.com/iam/docs/creating-managing-service-account-keys).
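As an illustration of the prerequisite above, the service account key JSON can be loaded with the google-auth library to obtain credentials scoped for the Logging API. This is a sketch only; the file name and scope shown are assumptions, not values taken from the connector:

```python
from google.oauth2 import service_account

# Load the downloaded service account key (JSON) and request read-only
# access to Cloud Logging; adjust the path and scope to your environment.
credentials = service_account.Credentials.from_service_account_file(
    "service-account-key.json",
    scopes=["https://www.googleapis.com/auth/logging.read"],
)
```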
To integrate with Google ApigeeX (using Azure Functions) make sure you have:
> This connector uses Azure Functions to connect to the GCP API to pull logs into Microsoft Sentinel. This might result in additional data ingestion costs. Check the [Azure Functions pricing page](https://azure.microsoft.com/pricing/details/functions/) for details.
->**(Optional Step)** Securely store workspace and API authorization key(s) or token(s) in Azure Key Vault. Azure Key Vault provides a secure mechanism to store and retrieve key values. [Follow these instructions](https://learn.microsoft.com/azure/app-service/app-service-key-vault-references) to use Azure Key Vault with an Azure Function App.
+>**(Optional Step)** Securely store workspace and API authorization key(s) or token(s) in Azure Key Vault. Azure Key Vault provides a secure mechanism to store and retrieve key values. [Follow these instructions](/azure/app-service/app-service-key-vault-references) to use Azure Key Vault with an Azure Function App.
> [!NOTE]
Use the following step-by-step instructions to deploy the data connector manuall
**1. Deploy a Function App**
-> **NOTE:** You will need to [prepare VS code](https://learn.microsoft.com/azure/azure-functions/create-first-function-vs-code-python) for Azure function development.
+> **NOTE:** You will need to [prepare VS code](/azure/azure-functions/create-first-function-vs-code-python) for Azure function development.
1. Download the [Azure Function App](https://aka.ms/sentinel-ApigeeXDataConnector-functionapp) file. Extract archive to your local development computer. 2. Start VS Code. Choose File in the main menu and select Open Folder.
sentinel Google Cloud Platform Cloud Monitoring Using Azure Function https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/google-cloud-platform-cloud-monitoring-using-azure-function.md
GCP_MONITORING_CL
To integrate with Google Cloud Platform Cloud Monitoring (using Azure Functions) make sure you have: -- **Microsoft.Web/sites permissions**: Read and write permissions to Azure Functions to create a Function App is required. [See the documentation to learn more about Azure Functions](https://learn.microsoft.com/azure/azure-functions/).
+- **Microsoft.Web/sites permissions**: Read and write permissions to Azure Functions are required to create a Function App. [See the documentation to learn more about Azure Functions](/azure/azure-functions).
- **GCP service account**: A GCP service account with permissions to read Cloud Monitoring metrics is required for the GCP Monitoring API (requires the *Monitoring Viewer* role). A JSON file with the service account key is also required. See the documentation to learn more about [creating service account](https://cloud.google.com/iam/docs/creating-managing-service-accounts) and [creating service account key](https://cloud.google.com/iam/docs/creating-managing-service-account-keys).
To integrate with Google Cloud Platform Cloud Monitoring (using Azure Functions)
> This connector uses Azure Functions to connect to the GCP API to pull logs into Microsoft Sentinel. This might result in additional data ingestion costs. Check the [Azure Functions pricing page](https://azure.microsoft.com/pricing/details/functions/) for details.
->**(Optional Step)** Securely store workspace and API authorization key(s) or token(s) in Azure Key Vault. Azure Key Vault provides a secure mechanism to store and retrieve key values. [Follow these instructions](https://learn.microsoft.com/azure/app-service/app-service-key-vault-references) to use Azure Key Vault with an Azure Function App.
+>**(Optional Step)** Securely store workspace and API authorization key(s) or token(s) in Azure Key Vault. Azure Key Vault provides a secure mechanism to store and retrieve key values. [Follow these instructions](/azure/app-service/app-service-key-vault-references) to use Azure Key Vault with an Azure Function App.
> [!NOTE]
Use the following step-by-step instructions to deploy the data connector manuall
**1. Deploy a Function App**
-> **NOTE:** You will need to [prepare VS code](https://learn.microsoft.com/azure/azure-functions/create-first-function-vs-code-python) for Azure function development.
+> **NOTE:** You will need to [prepare VS code](/azure/azure-functions/create-first-function-vs-code-python) for Azure function development.
1. Download the [Azure Function App](https://aka.ms/sentinel-GCPMonitorDataConnector-functionapp) file. Extract archive to your local development computer. 2. Start VS Code. Choose File in the main menu and select Open Folder.
sentinel Google Cloud Platform Dns Using Azure Function https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/google-cloud-platform-dns-using-azure-function.md
GCP_DNS_CL
To integrate with Google Cloud Platform DNS (using Azure Functions) make sure you have: -- **Microsoft.Web/sites permissions**: Read and write permissions to Azure Functions to create a Function App is required. [See the documentation to learn more about Azure Functions](https://learn.microsoft.com/azure/azure-functions/).
+- **Microsoft.Web/sites permissions**: Read and write permissions to Azure Functions are required to create a Function App. [See the documentation to learn more about Azure Functions](/azure/azure-functions).
- **GCP service account**: A GCP service account with permission to read logs (the "logging.logEntries.list" permission) is required for the GCP Logging API. A JSON file with the service account key is also required. See the documentation to learn more about [permissions](https://cloud.google.com/logging/docs/access-control#permissions_and_roles), [creating service account](https://cloud.google.com/iam/docs/creating-managing-service-accounts) and [creating service account key](https://cloud.google.com/iam/docs/creating-managing-service-account-keys).
To integrate with Google Cloud Platform DNS (using Azure Functions) make sure yo
> This connector uses Azure Functions to connect to the GCP API to pull logs into Microsoft Sentinel. This might result in additional data ingestion costs. Check the [Azure Functions pricing page](https://azure.microsoft.com/pricing/details/functions/) for details.
->**(Optional Step)** Securely store workspace and API authorization key(s) or token(s) in Azure Key Vault. Azure Key Vault provides a secure mechanism to store and retrieve key values. [Follow these instructions](https://learn.microsoft.com/azure/app-service/app-service-key-vault-references) to use Azure Key Vault with an Azure Function App.
+>**(Optional Step)** Securely store workspace and API authorization key(s) or token(s) in Azure Key Vault. Azure Key Vault provides a secure mechanism to store and retrieve key values. [Follow these instructions](/azure/app-service/app-service-key-vault-references) to use Azure Key Vault with an Azure Function App.
> [!NOTE]
Use the following step-by-step instructions to deploy the data connector manuall
**1. Deploy a Function App**
-> **NOTE:** You will need to [prepare VS code](https://learn.microsoft.com/azure/azure-functions/create-first-function-vs-code-python) for Azure function development.
+> **NOTE:** You will need to [prepare VS code](/azure/azure-functions/create-first-function-vs-code-python) for Azure function development.
1. Download the [Azure Function App](https://aka.ms/sentinel-GCPDNSDataConnector-functionapp) file. Extract archive to your local development computer. 2. Start VS Code. Choose File in the main menu and select Open Folder.
sentinel Google Cloud Platform Iam Using Azure Function https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/google-cloud-platform-iam-using-azure-function.md
GCP_IAM_CL
To integrate with Google Cloud Platform IAM (using Azure Functions) make sure you have: -- **Microsoft.Web/sites permissions**: Read and write permissions to Azure Functions to create a Function App is required. [See the documentation to learn more about Azure Functions](https://learn.microsoft.com/azure/azure-functions/).
+- **Microsoft.Web/sites permissions**: Read and write permissions to Azure Functions are required to create a Function App. [See the documentation to learn more about Azure Functions](/azure/azure-functions).
- **GCP service account**: A GCP service account with permissions to read logs is required for the GCP Logging API. A JSON file with the service account key is also required. See the documentation to learn more about [required permissions](https://cloud.google.com/iam/docs/audit-logging#audit_log_permissions), [creating service account](https://cloud.google.com/iam/docs/creating-managing-service-accounts) and [creating service account key](https://cloud.google.com/iam/docs/creating-managing-service-account-keys).
To integrate with Google Cloud Platform IAM (using Azure Functions) make sure yo
> This connector uses Azure Functions to connect to the GCP API to pull logs into Microsoft Sentinel. This might result in additional data ingestion costs. Check the [Azure Functions pricing page](https://azure.microsoft.com/pricing/details/functions/) for details.
->**(Optional Step)** Securely store workspace and API authorization key(s) or token(s) in Azure Key Vault. Azure Key Vault provides a secure mechanism to store and retrieve key values. [Follow these instructions](https://learn.microsoft.com/azure/app-service/app-service-key-vault-references) to use Azure Key Vault with an Azure Function App.
+>**(Optional Step)** Securely store workspace and API authorization key(s) or token(s) in Azure Key Vault. Azure Key Vault provides a secure mechanism to store and retrieve key values. [Follow these instructions](/azure/app-service/app-service-key-vault-references) to use Azure Key Vault with an Azure Function App.
> [!NOTE]
Use the following step-by-step instructions to deploy the data connector manuall
**1. Deploy a Function App**
-> **NOTE:** You will need to [prepare VS code](https://learn.microsoft.com/azure/azure-functions/create-first-function-vs-code-python) for Azure function development.
+> **NOTE:** You will need to [prepare VS code](/azure/azure-functions/create-first-function-vs-code-python) for Azure function development.
1. Download the [Azure Function App](https://aka.ms/sentinel-GCPIAMDataConnector-functionapp) file. Extract archive to your local development computer. 2. Start VS Code. Choose File in the main menu and select Open Folder.
sentinel Google Workspace G Suite Using Azure Function https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/google-workspace-g-suite-using-azure-function.md
GWorkspace_ReportsAPI_user_accounts_CL
To integrate with Google Workspace (G Suite) (using Azure Function) make sure you have: -- **Microsoft.Web/sites permissions**: Read and write permissions to Azure Functions to create a Function App is required. [See the documentation to learn more about Azure Functions](https://learn.microsoft.com/azure/azure-functions/).
+- **Microsoft.Web/sites permissions**: Read and write permissions to Azure Functions are required to create a Function App. [See the documentation to learn more about Azure Functions](/azure/azure-functions).
- **REST API Credentials/permissions**: **GooglePickleString** is required for REST API. [See the documentation to learn more about API](https://developers.google.com/admin-sdk/reports/v1/reference/activities). Please find the instructions to obtain the credentials in the configuration section below. You can check all [requirements and follow the instructions](https://developers.google.com/admin-sdk/reports/v1/quickstart/python) from here as well.
To integrate with Google Workspace (G Suite) (using Azure Function) make sure yo
> This connector uses Azure Functions to connect to the Google Reports API to pull its logs into Microsoft Sentinel. This might result in additional data ingestion costs. Check the [Azure Functions pricing page](https://azure.microsoft.com/pricing/details/functions/) for details
->**(Optional Step)** Securely store workspace and API authorization key(s) or token(s) in Azure Key Vault. Azure Key Vault provides a secure mechanism to store and retrieve key values. [Follow these instructions](https://learn.microsoft.com/azure/app-service/app-service-key-vault-references) to use Azure Key Vault with an Azure Function App.
+>**(Optional Step)** Securely store workspace and API authorization key(s) or token(s) in Azure Key Vault. Azure Key Vault provides a secure mechanism to store and retrieve key values. [Follow these instructions](/azure/app-service/app-service-key-vault-references) to use Azure Key Vault with an Azure Function App.
**NOTE:** This data connector depends on a parser based on a Kusto Function, deployed as part of the solution, to work as expected. To view the function code in Log Analytics, open the Log Analytics/Microsoft Sentinel Logs blade, click Functions, and search for the alias GWorkspaceReports to load the function code, or click [here](https://github.com/Azure/Azure-Sentinel/blob/master/Solutions/GoogleWorkspaceReports/Parsers/GWorkspaceActivityReports). On the second line of the query, enter the hostname(s) of your GWorkspaceReports device(s) and any other unique identifiers for the logstream. The function usually takes 10-15 minutes to activate after solution installation/update.
Use the following step-by-step instructions to deploy the Google Workspace data
**1. Deploy a Function App**
-> **NOTE:** You will need to [prepare VS code](https://learn.microsoft.com/azure/azure-functions/functions-create-first-function-python#prerequisites) for Azure function development.
+> **NOTE:** You will need to [prepare VS code](/azure/azure-functions/functions-create-first-function-python) for Azure function development.
1. Download the [Azure Function App](https://aka.ms/sentinel-GWorkspaceReportsAPI-functionapp) file. Extract archive to your local development computer. 2. Start VS Code. Choose File in the main menu and select Open Folder.
sentinel Holm Security Asset Data Using Azure Function https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/holm-security-asset-data-using-azure-function.md
web_assets_Cl
To integrate with Holm Security Asset Data (using Azure Functions) make sure you have: -- **Microsoft.Web/sites permissions**: Read and write permissions to Azure Functions to create a Function App is required. [See the documentation to learn more about Azure Functions](https://learn.microsoft.com/azure/azure-functions/).
+- **Microsoft.Web/sites permissions**: Read and write permissions to Azure Functions are required to create a Function App. [See the documentation to learn more about Azure Functions](/azure/azure-functions).
- **Holm Security API Token**: Holm Security API Token is required. [Holm Security API Token](https://support.holmsecurity.com/hc/en-us)
To integrate with Holm Security Asset Data (using Azure Functions) make sure you
> This connector uses Azure Functions to connect to a Holm Security Assets to pull its logs into Microsoft Sentinel. This might result in additional data ingestion costs. Check the [Azure Functions pricing page](https://azure.microsoft.com/pricing/details/functions/) for details.
->**(Optional Step)** Securely store workspace and API authorization key(s) or token(s) in Azure Key Vault. Azure Key Vault provides a secure mechanism to store and retrieve key values. [Follow these instructions](https://learn.microsoft.com/azure/app-service/app-service-key-vault-references) to use Azure Key Vault with an Azure Function App.
+>**(Optional Step)** Securely store workspace and API authorization key(s) or token(s) in Azure Key Vault. Azure Key Vault provides a secure mechanism to store and retrieve key values. [Follow these instructions](/azure/app-service/app-service-key-vault-references) to use Azure Key Vault with an Azure Function App.
**STEP 1 - Configuration steps for the Holm Security API**
Use this method for automated deployment of the Holm Security connector.
[![Deploy To Azure](https://aka.ms/deploytoazurebutton)](https://aka.ms/sentinel-holmsecurityassets-azuredeploy) 2. Select the preferred **Subscription**, **Resource Group** and **Location**. 3. Enter the **Workspace ID**, **Workspace Key**, **API Username**, **API Password**, 'and/or Other required fields'.
->Note: If using Azure Key Vault secrets for any of the values above, use the`@Microsoft.KeyVault(SecretUri={Security Identifier})`schema in place of the string values. Refer to [Key Vault references documentation](https://learn.microsoft.com/azure/app-service/app-service-key-vault-references) for further details.
+>Note: If using Azure Key Vault secrets for any of the values above, use the `@Microsoft.KeyVault(SecretUri={Security Identifier})` schema in place of the string values. Refer to [Key Vault references documentation](/azure/app-service/app-service-key-vault-references) for further details.
4. Mark the checkbox labeled **I agree to the terms and conditions stated above**. 5. Click **Purchase** to deploy.
sentinel Imperva Cloud Waf Using Azure Function https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/imperva-cloud-waf-using-azure-function.md
ImpervaWAFCloud
To integrate with Imperva Cloud WAF (using Azure Functions) make sure you have: -- **Microsoft.Web/sites permissions**: Read and write permissions to Azure Functions to create a Function App is required. [See the documentation to learn more about Azure Functions](https://learn.microsoft.com/azure/azure-functions/).
+- **Microsoft.Web/sites permissions**: Read and write permissions to Azure Functions are required to create a Function App. [See the documentation to learn more about Azure Functions](/azure/azure-functions).
- **REST API Credentials/permissions**: **ImpervaAPIID**, **ImpervaAPIKey**, **ImpervaLogServerURI** are required for the API. [See the documentation to learn more about Setup Log Integration process](https://docs.imperva.com/bundle/cloud-application-security/page/settings/log-integration.htm#Setuplogintegration). Check all [requirements and follow the instructions](https://docs.imperva.com/bundle/cloud-application-security/page/settings/log-integration.htm#Setuplogintegration) for obtaining credentials. Please note that this connector uses CEF log event format. [More information](https://docs.imperva.com/bundle/cloud-application-security/page/more/log-file-structure.htm#Logfilestructure) about log format.
To integrate with Imperva Cloud WAF (using Azure Functions) make sure you have:
> This connector uses Azure Functions to connect to the Imperva Cloud API to pull its logs into Microsoft Sentinel. This might result in additional data ingestion costs. Check the [Azure Functions pricing page](https://azure.microsoft.com/pricing/details/functions/) for details.
->**(Optional Step)** Securely store workspace and API authorization key(s) or token(s) in Azure Key Vault. Azure Key Vault provides a secure mechanism to store and retrieve key values. [Follow these instructions](https://learn.microsoft.com/azure/app-service/app-service-key-vault-references) to use Azure Key Vault with an Azure Function App.
+>**(Optional Step)** Securely store workspace and API authorization key(s) or token(s) in Azure Key Vault. Azure Key Vault provides a secure mechanism to store and retrieve key values. [Follow these instructions](/azure/app-service/app-service-key-vault-references) to use Azure Key Vault with an Azure Function App.
> [!NOTE]
Use the following step-by-step instructions to deploy the Imperva Cloud WAF data
**1. Deploy a Function App**
-> **NOTE:** You will need to [prepare VS code](https://learn.microsoft.com/azure/azure-functions/functions-create-first-function-python#prerequisites) for Azure function development.
+> **NOTE:** You will need to [prepare VS code](/azure/azure-functions/functions-create-first-function-python) for Azure function development.
1. Download the [Azure Function App](https://aka.ms/sentinel-impervawafcloud-functionapp) file. Extract archive to your local development computer. 2. Start VS Code. Choose File in the main menu and select Open Folder.
sentinel Juniper Idp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/juniper-idp.md
Install the agent on the Server.
2. Configure the logs to be collected
-Follow the configuration steps below to get Juniper IDP logs into Microsoft Sentinel. This configuration enriches events generated by Juniper IDP module to provide visibility on log source information for Juniper IDP logs. Refer to the [Azure Monitor Documentation](https://learn.microsoft.com/azure/azure-monitor/agents/data-sources-json) for more details on these steps.
+Follow the configuration steps below to get Juniper IDP logs into Microsoft Sentinel. This configuration enriches events generated by the Juniper IDP module to provide visibility into log source information for Juniper IDP logs. Refer to the [Azure Monitor Documentation](/azure/azure-monitor/agents/data-sources-json) for more details on these steps.
1. Download config file [juniper_idp.conf](https://aka.ms/sentinel-JuniperIDP-conf). 2. Login to the server where you have installed Azure Log Analytics agent. 3. Copy juniper_idp.conf to the /etc/opt/microsoft/omsagent/**workspace_id**/conf/omsagent.d/ folder.
sentinel Lookout Cloud Security For Microsoft Sentinel Using Azure Function https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/lookout-cloud-security-for-microsoft-sentinel-using-azure-function.md
LookoutCloudSecurity_CL
To integrate with Lookout Cloud Security for Microsoft Sentinel (using Azure Functions) make sure you have: -- **Microsoft.Web/sites permissions**: Read and write permissions to Azure Functions to create a Function App is required. [See the documentation to learn more about Azure Functions](https://learn.microsoft.com/azure/azure-functions/).
+- **Microsoft.Web/sites permissions**: Read and write permissions to Azure Functions are required to create a Function App. [See the documentation to learn more about Azure Functions](/azure/azure-functions).
## Vendor installation instructions
To integrate with Lookout Cloud Security for Microsoft Sentinel (using Azure Fun
> This connector uses Azure Functions to connect to the Lookout Cloud Security REST API to pull logs into Microsoft Sentinel. This might result in additional data ingestion costs. Check the [Azure Functions pricing page](https://azure.microsoft.com/pricing/details/functions/) for details.
->**(Optional Step)** Securely store workspace and API authorization key(s) or token(s) in Azure Key Vault. Azure Key Vault provides a secure mechanism to store and retrieve key values. [Follow these instructions](https://learn.microsoft.com/azure/app-service/app-service-key-vault-references) to use Azure Key Vault with an Azure Function App.
+>**(Optional Step)** Securely store workspace and API authorization key(s) or token(s) in Azure Key Vault. Azure Key Vault provides a secure mechanism to store and retrieve key values. [Follow these instructions](/azure/app-service/app-service-key-vault-references) to use Azure Key Vault with an Azure Function App.
**Step-by-Step Instructions**
Use the following step-by-step instructions to deploy the data connector manuall
**1. Deploy a Function App**
-> **NOTE:** You will need to [prepare VS code](https://learn.microsoft.com/azure/azure-functions/create-first-function-vs-code-python) for Azure function development.
+> **NOTE:** You will need to [prepare VS code](/azure/azure-functions/create-first-function-vs-code-python) for Azure function development.
1. Download the [Azure Function App](https://aka.ms/sentinel-Lookout-functionapp) file. Extract archive to your local development computer. 2. Start VS Code. Choose File in the main menu and select Open Folder.
sentinel Lookout Using Azure Function https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/lookout-using-azure-function.md
Lookout_CL
To integrate with Lookout (using Azure Functions) make sure you have: -- **Microsoft.Web/sites permissions**: Read and write permissions to Azure Functions to create a Function App is required. [See the documentation to learn more about Azure Functions](https://learn.microsoft.com/azure/azure-functions/).
+- **Microsoft.Web/sites permissions**: Read and write permissions to Azure Functions are required to create a Function App. [See the documentation to learn more about Azure Functions](/azure/azure-functions).
- **Mobile Risk API Credentials/permissions**: **EnterpriseName** & **ApiKey** are required for Mobile Risk API. [See the documentation to learn more about API](https://enterprise.support.lookout.com/hc/en-us/articles/115002741773-Mobile-Risk-API-Guide). Check all [requirements and follow the instructions](https://enterprise.support.lookout.com/hc/en-us/articles/115002741773-Mobile-Risk-API-Guide#authenticatingwiththemobileriskapi) for obtaining credentials.
sentinel Mulesoft Cloudhub Using Azure Function https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/mulesoft-cloudhub-using-azure-function.md
MuleSoft_Cloudhub_CL
To integrate with MuleSoft Cloudhub (using Azure Functions) make sure you have: -- **Microsoft.Web/sites permissions**: Read and write permissions to Azure Functions to create a Function App is required. [See the documentation to learn more about Azure Functions](https://learn.microsoft.com/azure/azure-functions/).
+- **Microsoft.Web/sites permissions**: Read and write permissions to Azure Functions are required to create a Function App. [See the documentation to learn more about Azure Functions](/azure/azure-functions).
- **REST API Credentials/permissions**: **MuleSoftEnvId**, **MuleSoftAppName**, **MuleSoftUsername** and **MuleSoftPassword** are required for making API calls.
To integrate with MuleSoft Cloudhub (using Azure Functions) make sure you have:
> This connector uses Azure Functions to connect to the Azure Blob Storage API to pull logs into Microsoft Sentinel. This might result in additional costs for data ingestion and for storing data in Azure Blob Storage. Check the [Azure Functions pricing page](https://azure.microsoft.com/pricing/details/functions/) and [Azure Blob Storage pricing page](https://azure.microsoft.com/pricing/details/storage/blobs/) for details.
->**(Optional Step)** Securely store workspace and API authorization key(s) or token(s) in Azure Key Vault. Azure Key Vault provides a secure mechanism to store and retrieve key values. [Follow these instructions](https://learn.microsoft.com/azure/app-service/app-service-key-vault-references) to use Azure Key Vault with an Azure Function App.
+>**(Optional Step)** Securely store workspace and API authorization key(s) or token(s) in Azure Key Vault. Azure Key Vault provides a secure mechanism to store and retrieve key values. [Follow these instructions](/azure/app-service/app-service-key-vault-references) to use Azure Key Vault with an Azure Function App.
> [!NOTE]
Use this method for automated deployment of the MuleSoft Cloudhub data connector
**1. Deploy a Function App**
-> **NOTE:** You will need to [prepare VS code](https://learn.microsoft.com/azure/azure-functions/functions-create-first-function-python#prerequisites) for Azure function development.
+> **NOTE:** You will need to [prepare VS code](/azure/azure-functions/functions-create-first-function-python) for Azure function development.
1. Download the [Azure Function App](https://aka.ms/sentinel-MuleSoftCloudhubAPI-functionapp) file. Extract archive to your local development computer. 2. Start VS Code. Choose File in the main menu and select Open Folder.
sentinel Netskope Using Azure Function https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/netskope-using-azure-function.md
Netskope
To integrate with Netskope (using Azure Function) make sure you have: -- **Microsoft.Web/sites permissions**: Read and write permissions to Azure Functions to create a Function App is required. [See the documentation to learn more about Azure Functions](https://learn.microsoft.com/azure/azure-functions/).
+- **Microsoft.Web/sites permissions**: Read and write permissions to Azure Functions are required to create a Function App. [See the documentation to learn more about Azure Functions](/azure/azure-functions).
- **Netskope API Token**: A Netskope API Token is required. [See the documentation to learn more about Netskope API](https://innovatechcloud.goskope.com/docs/Netskope_Help/en/rest-api-v1-overview.html). **Note:** A Netskope account is required
To integrate with Netskope (using Azure Function) make sure you have:
> This data connector depends on a parser based on a Kusto Function, deployed as part of the solution, to work as expected. To view the function code in Log Analytics, open the Log Analytics/Microsoft Sentinel Logs blade, click Functions, and search for the alias Netskope to load the function code, or click [here](https://github.com/Azure/Azure-Sentinel/blob/master/Solutions/Netskope/Parsers/Netskope.txt). On the second line of the query, enter the hostname(s) of your Netskope device(s) and any other unique identifiers for the logstream. The function usually takes 10-15 minutes to activate after solution installation/update.
->**(Optional Step)** Securely store workspace and API authorization key(s) or token(s) in Azure Key Vault. Azure Key Vault provides a secure mechanism to store and retrieve key values. [Follow these instructions](https://learn.microsoft.com/azure/app-service/app-service-key-vault-references) to use Azure Key Vault with an Azure Function App.
+>**(Optional Step)** Securely store workspace and API authorization key(s) or token(s) in Azure Key Vault. Azure Key Vault provides a secure mechanism to store and retrieve key values. [Follow these instructions](/azure/app-service/app-service-key-vault-references) to use Azure Key Vault with an Azure Function App.
**STEP 1 - Configuration steps for the Netskope API**
This method provides an automated deployment of the Netskope connector using an
- Use the following schema for the `uri` value: `https://<Tenant Name>.goskope.com`. Replace `<Tenant Name>` with your domain. - The default **Time Interval** is set to pull the last five (5) minutes of data. If the time interval needs to be modified, it is recommended to change the Function App Timer Trigger accordingly (in the function.json file, post deployment) to prevent overlapping data ingestion. - The default **Log Types** setting is set to pull all 6 available log types (`alert, page, application, audit, infrastructure, network`); remove any that are not required.
+ - Note: If using Azure Key Vault secrets for any of the values above, use the `@Microsoft.KeyVault(SecretUri={Security Identifier})` schema in place of the string values. Refer to [Key Vault references documentation](/azure/app-service/app-service-key-vault-references) for further details.
4. Mark the checkbox labeled **I agree to the terms and conditions stated above**. 5. Click **Purchase** to deploy. 6. After successfully deploying the connector, download the Kusto Function to normalize the data fields. [Follow the steps](https://aka.ms/sentinelgithubparsersnetskope) to use the Kusto function alias, **Netskope**.
This method provides the step-by-step instructions to deploy the Netskope connec
> - Enter the URI that corresponds to your region. The `uri` value must follow this schema: `https://<Tenant Name>.goskope.com` - There is no need to add subsequent parameters to the URI; the Function App will dynamically append the parameters in the proper format. > - Set the `timeInterval` (in minutes) to the default value of `5` to correspond to the default Timer Trigger of every `5` minutes. If the time interval needs to be modified, it is recommended to change the Function App Timer Trigger accordingly to prevent overlapping data ingestion. > - Set the `logTypes` to `alert, page, application, audit, infrastructure, network` - This list represents all the available log types. Select the log types based on logging requirements, separating each by a single comma.
-> - Note: If using Azure Key Vault, use the`@Microsoft.KeyVault(SecretUri={Security Identifier})`schema in place of the string values. Refer to [Key Vault references documentation](https://learn.microsoft.com/azure/app-service/app-service-key-vault-references) for further details.
+> - Note: If using Azure Key Vault, use the `@Microsoft.KeyVault(SecretUri={Security Identifier})` schema in place of the string values. Refer to [Key Vault references documentation](/azure/app-service/app-service-key-vault-references) for further details.
> - Use logAnalyticsUri to override the log analytics API endpoint for dedicated cloud. For example, for public cloud, leave the value empty; for Azure GovUS cloud environment, specify the value in the following format: `https://<CustomerId>.ods.opinsights.azure.us`. 4. Once all application settings have been entered, click **Save**. 5. After successfully deploying the connector, download the Kusto Function to normalize the data fields. [Follow the steps](https://aka.ms/sentinelgithubparsersnetskope) to use the Kusto function alias, **Netskope**.
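To make the relationship between `timeInterval`, `logTypes`, and the Timer Trigger concrete, here is a minimal Python sketch that derives the query window from those settings. The setting names follow the instructions above, but the shipped Function App may assemble its requests differently:

```python
import os
from datetime import datetime, timedelta, timezone

# Keep timeInterval equal to the Timer Trigger cadence to avoid gaps or overlap.
time_interval = int(os.environ.get("timeInterval", "5"))
log_types = [t.strip() for t in os.environ.get(
    "logTypes", "alert, page, application, audit, infrastructure, network").split(",") if t.strip()]

end_time = datetime.now(timezone.utc)
start_time = end_time - timedelta(minutes=time_interval)
for log_type in log_types:
    print(f"query '{log_type}' events from {start_time:%Y-%m-%dT%H:%M:%SZ} to {end_time:%Y-%m-%dT%H:%M:%SZ}")
```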
sentinel Okta Single Sign On Using Azure Function https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/okta-single-sign-on-using-azure-function.md
Okta_CL
To integrate with Okta Single Sign-On (using Azure Function) make sure you have: -- **Microsoft.Web/sites permissions**: Read and write permissions to Azure Functions to create a Function App is required. [See the documentation to learn more about Azure Functions](https://learn.microsoft.com/azure/azure-functions/).
+- **Microsoft.Web/sites permissions**: Read and write permissions to Azure Functions are required to create a Function App. [See the documentation to learn more about Azure Functions](/azure/azure-functions).
- **Okta API Token**: An Okta API Token is required. See the documentation to learn more about the [Okta System Log API](https://developer.okta.com/docs/reference/api/system-log/).
To integrate with Okta Single Sign-On (using Azure Function) make sure you have:
> This connector has been updated. If you have previously deployed an earlier version and want to update, please delete the existing Okta Azure Function before redeploying this version.
->**(Optional Step)** Securely store workspace and API authorization key(s) or token(s) in Azure Key Vault. Azure Key Vault provides a secure mechanism to store and retrieve key values. [Follow these instructions](https://learn.microsoft.com/azure/app-service/app-service-key-vault-references) to use Azure Key Vault with an Azure Function App.
+>**(Optional Step)** Securely store workspace and API authorization key(s) or token(s) in Azure Key Vault. Azure Key Vault provides a secure mechanism to store and retrieve key values. [Follow these instructions](/azure/app-service/app-service-key-vault-references) to use Azure Key Vault with an Azure Function App.
**STEP 1 - Configuration steps for the Okta SSO API**
This method provides an automated deployment of the Okta SSO connector using an
2. Select the preferred **Subscription**, **Resource Group** and **Location**. 3. Enter the **Workspace ID**, **Workspace Key**, **API Token** and **URI**. - Use the following schema for the `uri` value: `https://<OktaDomain>/api/v1/logs?since=` Replace `<OktaDomain>` with your domain. [Click here](https://developer.okta.com/docs/reference/api-overview/#url-namespace) for further details on how to identify your Okta domain namespace. There is no need to add a time value to the URI; the Function App will dynamically append the initial start time of logs (UTC 0:00 of the current UTC date) to the URI in the proper format.
+ - Note: If using Azure Key Vault secrets for any of the values above, use the `@Microsoft.KeyVault(SecretUri={Security Identifier})` schema in place of the string values. Refer to [Key Vault references documentation](/azure/app-service/app-service-key-vault-references) for further details.
4. Mark the checkbox labeled **I agree to the terms and conditions stated above**. 5. Click **Purchase** to deploy.
Use the following step-by-step instructions to deploy the Okta SSO connector man
uri logAnalyticsUri (optional) - Use the following schema for the `uri` value: `https://<OktaDomain>/api/v1/logs?since=` Replace `<OktaDomain>` with your domain. [Click here](https://developer.okta.com/docs/reference/api-overview/#url-namespace) for further details on how to identify your Okta domain namespace. There is no need to add a time value to the URI; the Function App will dynamically append the initial start time of logs (UTC 0:00 of the current UTC date) to the URI in the proper format.
+ - Note: If using Azure Key Vault secrets for any of the values above, use the `@Microsoft.KeyVault(SecretUri={Security Identifier})` schema in place of the string values. Refer to [Key Vault references documentation](/azure/app-service/app-service-key-vault-references) for further details.
- Use logAnalyticsUri to override the log analytics API endpoint for dedicated cloud. For example, for public cloud, leave the value empty; for Azure GovUS cloud environment, specify the value in the following format: `https://<CustomerId>.ods.opinsights.azure.us`. 4. Once all application settings have been entered, click **Save**.
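The dynamic `since` value described above (UTC 0:00 of the current date on the first run) can be illustrated with a short Python sketch. The `uri` setting name mirrors the instructions; everything else is an assumption about how such a connector might build the request, not the shipped Function App code:

```python
import os
from datetime import datetime, timezone

uri = os.environ.get("uri", "https://<OktaDomain>/api/v1/logs?since=")
start_of_day = datetime.now(timezone.utc).replace(hour=0, minute=0, second=0, microsecond=0)
request_url = uri + start_of_day.strftime("%Y-%m-%dT%H:%M:%SZ")
print(request_url)  # e.g. https://<OktaDomain>/api/v1/logs?since=2023-04-22T00:00:00Z
```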
sentinel Onelogin Iam Platform Using Azure Function https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/onelogin-iam-platform-using-azure-function.md
OneLogin
To integrate with OneLogin IAM Platform (using Azure Functions) make sure you have: -- **Microsoft.Web/sites permissions**: Read and write permissions to Azure Functions to create a Function App is required. [See the documentation to learn more about Azure Functions](https://learn.microsoft.com/azure/azure-functions/).
+- **Microsoft.Web/sites permissions**: Read and write permissions to Azure Functions are required to create a Function App. [See the documentation to learn more about Azure Functions](/azure/azure-functions).
- **Webhooks Credentials/permissions**: **OneLoginBearerToken**, **Callback URL** are required for working Webhooks. See the documentation to learn more about [configuring Webhooks](https://onelogin.service-now.com/kb_view_customer.do?sysparm_article=KB0010469). You need to generate **OneLoginBearerToken** according to your security requirements and use it in the **Custom Headers** section in the format: Authorization: Bearer **OneLoginBearerToken**. Logs Format: JSON Array.
To integrate with OneLogin IAM Platform (using Azure Functions) make sure you ha
> This data connector uses Azure Functions based on HTTP Trigger for waiting POST requests with logs to pull its logs into Microsoft Sentinel. This might result in additional data ingestion costs. Check the [Azure Functions pricing page](https://azure.microsoft.com/pricing/details/functions/) for details.
->**(Optional Step)** Securely store workspace and API authorization key(s) or token(s) in Azure Key Vault. Azure Key Vault provides a secure mechanism to store and retrieve key values. [Follow these instructions](https://learn.microsoft.com/azure/app-service/app-service-key-vault-references) to use Azure Key Vault with an Azure Function App.
+>**(Optional Step)** Securely store workspace and API authorization key(s) or token(s) in Azure Key Vault. Azure Key Vault provides a secure mechanism to store and retrieve key values. [Follow these instructions](/azure/app-service/app-service-key-vault-references) to use Azure Key Vault with an Azure Function App.
> [!NOTE]
Use the following step-by-step instructions to deploy the OneLogin data connecto
**1. Deploy a Function App**
-> **NOTE:** You will need to [prepare VS code](https://learn.microsoft.com/azure/azure-functions/functions-create-first-function-python#prerequisites) for Azure function development.
+> **NOTE:** You will need to [prepare VS code](/azure/azure-functions/functions-create-first-function-python) for Azure function development.
1. Download the [Azure Function App](https://aka.ms/sentinel-OneLogin-functionapp) file. Extract archive to your local development computer. 2. Start VS Code. Choose File in the main menu and select Open Folder.
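Since this connector is an HTTP-triggered function that receives webhook POSTs, the **OneLoginBearerToken** check described in the prerequisites could look roughly like the following sketch. It is not the shipped connector code; the setting name matches the prerequisite, the rest is illustrative:

```python
import os
import azure.functions as func

def main(req: func.HttpRequest) -> func.HttpResponse:
    # Compare the webhook's Custom Header ("Authorization: Bearer <token>")
    # against the OneLoginBearerToken application setting.
    expected = "Bearer " + os.environ.get("OneLoginBearerToken", "")
    if req.headers.get("Authorization") != expected:
        return func.HttpResponse("Unauthorized", status_code=401)
    events = req.get_json()  # OneLogin posts a JSON array of events
    return func.HttpResponse(f"Accepted {len(events)} events", status_code=200)
```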
sentinel Oracle Cloud Infrastructure Using Azure Function https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/oracle-cloud-infrastructure-using-azure-function.md
OCI_Logs_CL
To integrate with Oracle Cloud Infrastructure (using Azure Functions) make sure you have: -- **Microsoft.Web/sites permissions**: Read and write permissions to Azure Functions to create a Function App is required. [See the documentation to learn more about Azure Functions](https://learn.microsoft.com/azure/azure-functions/).
+- **Microsoft.Web/sites permissions**: Read and write permissions to Azure Functions are required to create a Function App. [See the documentation to learn more about Azure Functions](/azure/azure-functions).
- **OCI API Credentials**: **API Key Configuration File** and **Private Key** are required for OCI API connection. See the documentation to learn more about [creating keys for API access](https://docs.oracle.com/en-us/iaas/Content/API/Concepts/apisigningkey.htm)
To integrate with Oracle Cloud Infrastructure (using Azure Functions) make sure
> This connector uses Azure Functions to connect to the Azure Blob Storage API to pull logs into Microsoft Sentinel. This might result in additional costs for data ingestion and for storing data in Azure Blob Storage. Check the [Azure Functions pricing page](https://azure.microsoft.com/pricing/details/functions/) and [Azure Blob Storage pricing page](https://azure.microsoft.com/pricing/details/storage/blobs/) for details.
->**(Optional Step)** Securely store workspace and API authorization key(s) or token(s) in Azure Key Vault. Azure Key Vault provides a secure mechanism to store and retrieve key values. [Follow these instructions](https://learn.microsoft.com/azure/app-service/app-service-key-vault-references) to use Azure Key Vault with an Azure Function App.
+>**(Optional Step)** Securely store workspace and API authorization key(s) or token(s) in Azure Key Vault. Azure Key Vault provides a secure mechanism to store and retrieve key values. [Follow these instructions](/azure/app-service/app-service-key-vault-references) to use Azure Key Vault with an Azure Function App.
> [!NOTE]
Use the following step-by-step instructions to deploy the OCI data connector man
**1. Deploy a Function App**
-> **NOTE:** You will need to [prepare VS code](https://learn.microsoft.com/azure/azure-functions/create-first-function-vs-code-python) for Azure function development.
+> **NOTE:** You will need to [prepare VS code](/azure/azure-functions/create-first-function-vs-code-python) for Azure function development.
1. Download the [Azure Function App](https://aka.ms/sentinel-OracleCloudInfrastructureLogsConnector-functionapp) file. Extract archive to your local development computer. 2. Start VS Code. Choose File in the main menu and select Open Folder.
sentinel Palo Alto Prisma Cloud Cspm Using Azure Function https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/palo-alto-prisma-cloud-cspm-using-azure-function.md
PaloAltoPrismaCloudAudit_CL
To integrate with Palo Alto Prisma Cloud CSPM (using Azure Function) make sure you have: -- **Microsoft.Web/sites permissions**: Read and write permissions to Azure Functions to create a Function App is required. [See the documentation to learn more about Azure Functions](https://learn.microsoft.com/azure/azure-functions/).
+- **Microsoft.Web/sites permissions**: Read and write permissions to Azure Functions are required to create a Function App. [See the documentation to learn more about Azure Functions](/azure/azure-functions).
- **Palo Alto Prisma Cloud API Credentials**: **Prisma Cloud API Url**, **Prisma Cloud Access Key ID**, **Prisma Cloud Secret Key** are required for Prisma Cloud API connection. See the documentation to learn more about [creating Prisma Cloud Access Key](https://docs.paloaltonetworks.com/prisma/prisma-cloud/prisma-cloud-admin/manage-prisma-cloud-administrators/create-access-keys.html) and about [obtaining Prisma Cloud API Url](https://prisma.pan.dev/api/cloud/api-urls)
To integrate with Palo Alto Prisma Cloud CSPM (using Azure Function) make sure y
> This connector uses Azure Functions to connect to the Palo Alto Prisma Cloud REST API to pull logs into Microsoft Sentinel. This might result in additional data ingestion costs. Check the [Azure Functions pricing page](https://azure.microsoft.com/pricing/details/functions/) for details.
->**(Optional Step)** Securely store workspace and API authorization key(s) or token(s) in Azure Key Vault. Azure Key Vault provides a secure mechanism to store and retrieve key values. [Follow these instructions](https://learn.microsoft.com/azure/app-service/app-service-key-vault-references) to use Azure Key Vault with an Azure Function App.
+>**(Optional Step)** Securely store workspace and API authorization key(s) or token(s) in Azure Key Vault. Azure Key Vault provides a secure mechanism to store and retrieve key values. [Follow these instructions](/azure/app-service/app-service-key-vault-references) to use Azure Key Vault with an Azure Function App.
> [!NOTE]
Use the following step-by-step instructions to deploy the Prisma Cloud data conn
**1. Deploy a Function App**
-> **NOTE:** You will need to [prepare VS code](https://learn.microsoft.com/azure/azure-functions/create-first-function-vs-code-python) for Azure function development.
+> **NOTE:** You will need to [prepare VS code](/azure/azure-functions/create-first-function-vs-code-python) for Azure function development.
1. Download the [Azure Function App](https://aka.ms/sentinel-PaloAltoPrismaCloud-functionapp) file. Extract archive to your local development computer. 2. Start VS Code. Choose File in the main menu and select Open Folder.
sentinel Proofpoint On Demand Email Security Using Azure Function https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/proofpoint-on-demand-email-security-using-azure-function.md
ProofpointPOD
To integrate with Proofpoint On Demand Email Security (using Azure Function) make sure you have: -- **Microsoft.Web/sites permissions**: Read and write permissions to Azure Functions to create a Function App is required. [See the documentation to learn more about Azure Functions](https://learn.microsoft.com/azure/azure-functions/).
+- **Microsoft.Web/sites permissions**: Read and write permissions to Azure Functions are required to create a Function App. [See the documentation to learn more about Azure Functions](/azure/azure-functions).
- **Websocket API Credentials/permissions**: **ProofpointClusterID** and **ProofpointToken** are required. [See the documentation to learn more about the API](https://proofpointcommunities.force.com/community/s/article/Proofpoint-on-Demand-Pod-Log-API).
To integrate with Proofpoint On Demand Email Security (using Azure Function) mak
> This connector uses Azure Functions to connect to the Proofpoint Websocket API to pull its logs into Microsoft Sentinel. This might result in additional data ingestion costs. Check the [Azure Functions pricing page](https://azure.microsoft.com/pricing/details/functions/) for details.
->**(Optional Step)** Securely store workspace and API authorization key(s) or token(s) in Azure Key Vault. Azure Key Vault provides a secure mechanism to store and retrieve key values. [Follow these instructions](https://learn.microsoft.com/azure/app-service/app-service-key-vault-references) to use Azure Key Vault with an Azure Function App.
+>**(Optional Step)** Securely store workspace and API authorization key(s) or token(s) in Azure Key Vault. Azure Key Vault provides a secure mechanism to store and retrieve key values. [Follow these instructions](/azure/app-service/app-service-key-vault-references) to use Azure Key Vault with an Azure Function App.
>This data connector depends on a parser based on a Kusto Function to work as expected. [Follow these steps](https://aka.ms/sentinel-proofpointpod-parser) to create the Kusto functions alias, **ProofpointPOD**
Use the following step-by-step instructions to deploy the Proofpoint On Demand E
**1. Deploy a Function App**
-> NOTE:You will need to [prepare VS code](https://learn.microsoft.com/azure/azure-functions/functions-create-first-function-python#prerequisites) for Azure function development.
+> **NOTE:** You will need to [prepare VS code](/azure/azure-functions/functions-create-first-function-python) for Azure function development.
1. Download the [Azure Function App](https://aka.ms/sentinel-proofpointpod-functionapp) file. Extract archive to your local development computer. 2. Start VS Code. Choose File in the main menu and select Open Folder.
sentinel Proofpoint Tap Using Azure Function https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/proofpoint-tap-using-azure-function.md
ProofPointTAPMessagesBlocked_CL
To integrate with Proofpoint TAP (using Azure Function) make sure you have: -- **Microsoft.Web/sites permissions**: Read and write permissions to Azure Functions to create a Function App is required. [See the documentation to learn more about Azure Functions](https://learn.microsoft.com/azure/azure-functions/).
+- **Microsoft.Web/sites permissions**: Read and write permissions to Azure Functions are required to create a Function App. [See the documentation to learn more about Azure Functions](/azure/azure-functions).
- **Proofpoint TAP API Key**: A Proofpoint TAP API username and password are required. [See the documentation to learn more about Proofpoint SIEM API](https://help.proofpoint.com/Threat_Insight_Dashboard/API_Documentation/SIEM_API).
To integrate with Proofpoint TAP (using Azure Function) make sure you have:
> This connector uses Azure Functions to connect to Proofpoint TAP to pull its logs into Microsoft Sentinel. This might result in additional data ingestion costs. Check the [Azure Functions pricing page](https://azure.microsoft.com/pricing/details/functions/) for details.
->**(Optional Step)** Securely store workspace and API authorization key(s) or token(s) in Azure Key Vault. Azure Key Vault provides a secure mechanism to store and retrieve key values. [Follow these instructions](https://learn.microsoft.com/azure/app-service/app-service-key-vault-references) to use Azure Key Vault with an Azure Function App.
+>**(Optional Step)** Securely store workspace and API authorization key(s) or token(s) in Azure Key Vault. Azure Key Vault provides a secure mechanism to store and retrieve key values. [Follow these instructions](/azure/app-service/app-service-key-vault-references) to use Azure Key Vault with an Azure Function App.
**STEP 1 - Configuration steps for the Proofpoint TAP API**
Use this method for automated deployment of the Proofpoint TAP connector.
2. Select the preferred **Subscription**, **Resource Group** and **Location**. 3. Enter the **Workspace ID**, **Workspace Key**, **API Username**, **API Password**, and validate the **Uri**. > - The default URI is pulling data for the last 300 seconds (5 minutes) to correspond with the default Function App Timer trigger of 5 minutes. If the time interval needs to be modified, it is recommended to change the Function App Timer Trigger accordingly (in the function.json file, post deployment) to prevent overlapping data ingestion.
-> - Note: If using Azure Key Vault secrets for any of the values above, use the`@Microsoft.KeyVault(SecretUri={Security Identifier})`schema in place of the string values. Refer to [Key Vault references documentation](https://learn.microsoft.com/azure/app-service/app-service-key-vault-references) for further details.
+> - Note: If using Azure Key Vault secrets for any of the values above, use the `@Microsoft.KeyVault(SecretUri={Security Identifier})` schema in place of the string values. Refer to [Key Vault references documentation](/azure/app-service/app-service-key-vault-references) for further details.
4. Mark the checkbox labeled **I agree to the terms and conditions stated above**. 5. Click **Purchase** to deploy.
This method provides the step-by-step instructions to deploy the Proofpoint TAP
logAnalyticsUri (optional) > - Set the `uri` value to: `https://tap-api-v2.proofpoint.com/v2/siem/all?format=json&sinceSeconds=300` > - The default URI is pulling data for the last 300 seconds (5 minutes) to correspond with the default Function App Timer trigger of 5 minutes. If the time interval needs to be modified, it is recommended to change the Function App Timer Trigger accordingly to prevent overlapping data ingestion.
-> - Note: If using Azure Key Vault secrets for any of the values above, use the`@Microsoft.KeyVault(SecretUri={Security Identifier})`schema in place of the string values. Refer to [Key Vault references documentation](https://learn.microsoft.com/azure/app-service/app-service-key-vault-references) for further details.
+> - Note: If using Azure Key Vault secrets for any of the values above, use the `@Microsoft.KeyVault(SecretUri={Security Identifier})` schema in place of the string values. Refer to [Key Vault references documentation](/azure/app-service/app-service-key-vault-references) for further details.
> - Use logAnalyticsUri to override the log analytics API endpoint for dedicated cloud. For example, for public cloud, leave the value empty; for Azure GovUS cloud environment, specify the value in the following format: `https://<CustomerId>.ods.opinsights.azure.us` 4. Once all application settings have been entered, click **Save**.
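As a quick illustration of keeping `sinceSeconds` aligned with the Timer Trigger, the default URI above can be rebuilt from the trigger interval. This is a sketch under that single assumption; it is not taken from the Function App code:

```python
from urllib.parse import urlencode

timer_interval_minutes = 5  # default Timer Trigger cadence
params = {"format": "json", "sinceSeconds": timer_interval_minutes * 60}
uri = "https://tap-api-v2.proofpoint.com/v2/siem/all?" + urlencode(params)
print(uri)  # https://tap-api-v2.proofpoint.com/v2/siem/all?format=json&sinceSeconds=300
```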
sentinel Qualys Vm Knowledgebase Using Azure Function https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/qualys-vm-knowledgebase-using-azure-function.md
QualysKB
To integrate with Qualys VM KnowledgeBase (using Azure Function) make sure you have: -- **Microsoft.Web/sites permissions**: Read and write permissions to Azure Functions to create a Function App is required. [See the documentation to learn more about Azure Functions](https://learn.microsoft.com/azure/azure-functions/).
+- **Microsoft.Web/sites permissions**: Read and write permissions to Azure Functions are required to create a Function App. [See the documentation to learn more about Azure Functions](/azure/azure-functions).
- **Qualys API Key**: A Qualys VM API username and password is required. [See the documentation to learn more about Qualys VM API](https://www.qualys.com/docs/qualys-api-vmpc-user-guide.pdf).
To integrate with Qualys VM KnowledgeBase (using Azure Function) make sure you h
>This data connector depends on a parser based on a Kusto Function to work as expected. [Follow the steps](https://aka.ms/sentinel-qualyskb-parser) to use the Kusto function alias, **QualysKB**
->**(Optional Step)** Securely store workspace and API authorization key(s) or token(s) in Azure Key Vault. Azure Key Vault provides a secure mechanism to store and retrieve key values. [Follow these instructions](https://learn.microsoft.com/azure/app-service/app-service-key-vault-references) to use Azure Key Vault with an Azure Function App.
+>**(Optional Step)** Securely store workspace and API authorization key(s) or token(s) in Azure Key Vault. Azure Key Vault provides a secure mechanism to store and retrieve key values. [Follow these instructions](/azure/app-service/app-service-key-vault-references) to use Azure Key Vault with an Azure Function App.
**STEP 1 - Configuration steps for the Qualys API**
Use this method for automated deployment of the Qualys KB connector using an ARM
2. Select the preferred **Subscription**, **Resource Group** and **Location**. 3. Enter the **Workspace ID**, **Workspace Key**, **API Username**, **API Password** , update the **URI**, and any additional URI **Filter Parameters** (This value should include a "&" symbol between each parameter and should not include any spaces) > - Enter the URI that corresponds to your region. The complete list of API Server URLs can be [found here](https://www.qualys.com/docs/qualys-api-vmpc-user-guide.pdf#G4.735348)
-> - Note: If using Azure Key Vault secrets for any of the values above, use the`@Microsoft.KeyVault(SecretUri={Security Identifier})`schema in place of the string values. Refer to [Key Vault references documentation](https://learn.microsoft.com/azure/app-service/app-service-key-vault-references) for further details.
+> - Note: If using Azure Key Vault secrets for any of the values above, use the`@Microsoft.KeyVault(SecretUri={Security Identifier})`schema in place of the string values. Refer to [Key Vault references documentation](/azure/app-service/app-service-key-vault-references) for further details.
4. Mark the checkbox labeled **I agree to the terms and conditions stated above**. 5. Click **Purchase** to deploy. - Note: If deployment failed due to the storage account name being taken, change the **Function Name** to a unique value and redeploy.
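Several of the settings above accept an Azure Key Vault reference instead of a literal secret. The App Service platform resolves these references itself, so connector code never needs to; the short sketch below only illustrates what the `@Microsoft.KeyVault(SecretUri=...)` form described in the note looks like, with a hypothetical vault and secret name.

```python
import re

# App Service Key Vault reference syntax, as shown in the note above:
#   @Microsoft.KeyVault(SecretUri=https://<vault>.vault.azure.net/secrets/<name>/<version>)
KEYVAULT_REF = re.compile(r"^@Microsoft\.KeyVault\(SecretUri=(?P<uri>https://[^)]+)\)$")

def describe_setting(value: str) -> str:
    match = KEYVAULT_REF.match(value)
    if match:
        return f"Key Vault reference -> {match.group('uri')}"
    return "literal value (consider moving it to Key Vault)"

print(describe_setting("@Microsoft.KeyVault(SecretUri=https://contoso-kv.vault.azure.net/secrets/ApiPassword/1a2b3c)"))
print(describe_setting("P@ssw0rd!"))
```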
This method provides the step-by-step instructions to deploy the Qualys KB conne
logAnalyticsUri (optional) > - Enter the URI that corresponds to your region. The complete list of API Server URLs can be [found here](https://www.qualys.com/docs/qualys-api-vmpc-user-guide.pdf#G4.735348). The `uri` value must follow the following schema: `https://<API Server>/api/2.0` > - Add any additional filter parameters, for the `filterParameters` variable, that need to be appended to the URI. The `filterParameter` value should include a "&" symbol between each parameter and should not include any spaces.
-> - Note: If using Azure Key Vault, use the`@Microsoft.KeyVault(SecretUri={Security Identifier})`schema in place of the string values. Refer to [Key Vault references documentation](https://learn.microsoft.com/azure/app-service/app-service-key-vault-references) for further details.
+> - Note: If using Azure Key Vault, use the`@Microsoft.KeyVault(SecretUri={Security Identifier})`schema in place of the string values. Refer to [Key Vault references documentation](/azure/app-service/app-service-key-vault-references) for further details.
- Use logAnalyticsUri to override the log analytics API endpoint for delegated cloud. For example, for public cloud, leave the value empty; for Azure GovUS cloud environment, specify the value in the following format: `https://<CustomerId>.ods.opinsights.azure.us`. 4. Once all application settings have been entered, click **Save**.
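The `filterParameters` value has to be `&`-separated with no spaces before it is appended to the base URI. A hedged sketch of assembling such a query string (the KnowledgeBase endpoint path reflects the Qualys API guide linked above, but treat it and the helper name as assumptions):

```python
from urllib.parse import quote

def build_qualys_kb_uri(api_server: str, filters: dict) -> str:
    """Append &-separated, space-free filter parameters to the KnowledgeBase endpoint."""
    base = f"https://{api_server}/api/2.0/fo/knowledge_base/vuln/?action=list"
    filter_parameters = "&".join(f"{key}={quote(str(value))}" for key, value in filters.items())
    return f"{base}&{filter_parameters}" if filter_parameters else base

# Example: only pull vulnerabilities published after a given date.
print(build_qualys_kb_uri("qualysapi.qualys.com", {"published_after": "2023-01-01"}))
```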
sentinel Qualys Vulnerability Management Using Azure Function https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/qualys-vulnerability-management-using-azure-function.md
QualysHostDetection_CL
To integrate with Qualys Vulnerability Management (using Azure Function) make sure you have: -- **Microsoft.Web/sites permissions**: Read and write permissions to Azure Functions to create a Function App is required. [See the documentation to learn more about Azure Functions](https://learn.microsoft.com/azure/azure-functions/).
+- **Microsoft.Web/sites permissions**: Read and write permissions to Azure Functions to create a Function App is required. [See the documentation to learn more about Azure Functions](/azure/azure-functions).
- **Qualys API Key**: A Qualys VM API username and password is required. [See the documentation to learn more about Qualys VM API](https://www.qualys.com/docs/qualys-api-vmpc-user-guide.pdf).
To integrate with Qualys Vulnerability Management (using Azure Function) make su
> This connector uses Azure Functions to connect to Qualys VM to pull its logs into Microsoft Sentinel. This might result in additional data ingestion costs. Check the [Azure Functions pricing page](https://azure.microsoft.com/pricing/details/functions/) for details.
->**(Optional Step)** Securely store workspace and API authorization key(s) or token(s) in Azure Key Vault. Azure Key Vault provides a secure mechanism to store and retrieve key values. [Follow these instructions](https://learn.microsoft.com/azure/app-service/app-service-key-vault-references) to use Azure Key Vault with an Azure Function App.
+>**(Optional Step)** Securely store workspace and API authorization key(s) or token(s) in Azure Key Vault. Azure Key Vault provides a secure mechanism to store and retrieve key values. [Follow these instructions](/azure/app-service/app-service-key-vault-references) to use Azure Key Vault with an Azure Function App.
**STEP 1 - Configuration steps for the Qualys VM API**
Use this method for automated deployment of the Qualys VM connector using an ARM
3. Enter the **Workspace ID**, **Workspace Key**, **API Username**, **API Password** , update the **URI**, and any additional URI **Filter Parameters** (each filter should be separated by an "&" symbol, no spaces.) > - Enter the URI that corresponds to your region. The complete list of API Server URLs can be [found here](https://www.qualys.com/docs/qualys-api-vmpc-user-guide.pdf#G4.735348) -- There is no need to add a time suffix to the URI, the Function App will dynamically append the Time Value to the URI in the proper format. - The default **Time Interval** is set to pull the last five (5) minutes of data. If the time interval needs to be modified, it is recommended to change the Function App Timer Trigger accordingly (in the function.json file, post deployment) to prevent overlapping data ingestion.
-> - Note: If using Azure Key Vault secrets for any of the values above, use the`@Microsoft.KeyVault(SecretUri={Security Identifier})`schema in place of the string values. Refer to [Key Vault references documentation](https://learn.microsoft.com/azure/app-service/app-service-key-vault-references) for further details.
+> - Note: If using Azure Key Vault secrets for any of the values above, use the`@Microsoft.KeyVault(SecretUri={Security Identifier})`schema in place of the string values. Refer to [Key Vault references documentation](/azure/app-service/app-service-key-vault-references) for further details.
4. Mark the checkbox labeled **I agree to the terms and conditions stated above**. 5. Click **Purchase** to deploy.
Use the following step-by-step instructions to deploy the Qualys VM connector ma
> - Enter the URI that corresponds to your region. The complete list of API Server URLs can be [found here](https://www.qualys.com/docs/qualys-api-vmpc-user-guide.pdf#G4.735348). The `uri` value must follow the following schema: `https://<API Server>/api/2.0/fo/asset/host/vm/detection/?action=list&vm_processed_after=` -- There is no need to add a time suffix to the URI, the Function App will dynamically append the Time Value to the URI in the proper format. > - Add any additional filter parameters, for the `filterParameters` variable, that need to be appended to the URI. Each parameter should be separated by an "&" symbol and should not include any spaces. > - Set the `timeInterval` (in minutes) to the value of `5` to correspond to the Timer Trigger of every `5` minutes. If the time interval needs to be modified, it is recommended to change the Function App Timer Trigger accordingly to prevent overlapping data ingestion.
-> - Note: If using Azure Key Vault, use the`@Microsoft.KeyVault(SecretUri={Security Identifier})`schema in place of the string values. Refer to [Key Vault references documentation](https://learn.microsoft.com/azure/app-service/app-service-key-vault-references) for further details.
+> - Note: If using Azure Key Vault, use the`@Microsoft.KeyVault(SecretUri={Security Identifier})`schema in place of the string values. Refer to [Key Vault references documentation](/azure/app-service/app-service-key-vault-references) for further details.
> - Use logAnalyticsUri to override the log analytics API endpoint for dedicated cloud. For example, for public cloud, leave the value empty; for Azure GovUS cloud environment, specify the value in the following format: `https://<CustomerId>.ods.opinsights.azure.us`. 4. Once all application settings have been entered, click **Save**.
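The steps above note that the Function App dynamically appends the time value to the `vm_processed_after` parameter, looking back the same number of minutes as the `timeInterval` setting. A rough sketch of that calculation (illustrative only; the helper name and timestamp format are assumptions based on the documented URI schema):

```python
from datetime import datetime, timedelta, timezone

def build_detection_uri(api_server: str, time_interval_minutes: int = 5, filter_parameters: str = "") -> str:
    """Append a UTC timestamp so each run only pulls detections processed since the last run."""
    lookback = datetime.now(timezone.utc) - timedelta(minutes=time_interval_minutes)
    vm_processed_after = lookback.strftime("%Y-%m-%dT%H:%M:%SZ")
    uri = (f"https://{api_server}/api/2.0/fo/asset/host/vm/detection/"
           f"?action=list&vm_processed_after={vm_processed_after}")
    if filter_parameters:
        uri += f"&{filter_parameters}"
    return uri

print(build_detection_uri("qualysapi.qualys.com", 5, "severities=3-5"))
```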
Due to the potentially large amount of Qualys host detection data being ingested
3. Add the line `"functionTimeout": "00:10:00",` above the `managedDependency` line 4. Ensure **SAVED** appears in the top right corner of the editor, then exit the editor.
-> NOTE: If a longer timeout duration is required, consider upgrading to an [App Service Plan](https://learn.microsoft.com/azure/azure-functions/functions-scale#timeout)
+> NOTE: If a longer timeout duration is required, consider upgrading to an [App Service Plan](/azure/azure-functions/functions-scale#timeout)
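The `functionTimeout` edit above is a one-line change to `host.json`. If you prefer to script it rather than use the portal editor, a hedged sketch might look like this (it assumes `host.json` sits in the current working directory of the downloaded Function App package):

```python
import json
from pathlib import Path

host_json = Path("host.json")           # path inside the Function App content
config = json.loads(host_json.read_text())

# Consumption plan functions allow at most a 10-minute timeout.
config["functionTimeout"] = "00:10:00"

host_json.write_text(json.dumps(config, indent=2))
print(json.dumps(config, indent=2))
```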
sentinel Rapid7 Insight Platform Vulnerability Management Reports Using Azure Function https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/rapid7-insight-platform-vulnerability-management-reports-using-azure-function.md
NexposeInsightVMCloud_vulnerabilities_CL
To integrate with Rapid7 Insight Platform Vulnerability Management Reports (using Azure Function) make sure you have: -- **Microsoft.Web/sites permissions**: Read and write permissions to Azure Functions to create a Function App is required. [See the documentation to learn more about Azure Functions](https://learn.microsoft.com/azure/azure-functions/).
+- **Microsoft.Web/sites permissions**: Read and write permissions to Azure Functions to create a Function App is required. [See the documentation to learn more about Azure Functions](/azure/azure-functions).
- **REST API Credentials**: **InsightVMAPIKey** is required for REST API. [See the documentation to learn more about API](https://docs.rapid7.com/insight/api-overview/). Check all [requirements and follow the instructions](https://docs.rapid7.com/insight/managing-platform-api-keys/) for obtaining credentials
To integrate with Rapid7 Insight Platform Vulnerability Management Reports (usin
> This connector uses Azure Functions to connect to the Insight VM API to pull its logs into Microsoft Sentinel. This might result in additional data ingestion costs. Check the [Azure Functions pricing page](https://azure.microsoft.com/pricing/details/functions/) for details.
->**(Optional Step)** Securely store workspace and API authorization key(s) or token(s) in Azure Key Vault. Azure Key Vault provides a secure mechanism to store and retrieve key values. [Follow these instructions](https://learn.microsoft.com/azure/app-service/app-service-key-vault-references) to use Azure Key Vault with an Azure Function App.
+>**(Optional Step)** Securely store workspace and API authorization key(s) or token(s) in Azure Key Vault. Azure Key Vault provides a secure mechanism to store and retrieve key values. [Follow these instructions](/azure/app-service/app-service-key-vault-references) to use Azure Key Vault with an Azure Function App.
> [!NOTE]
sentinel Rubrik Security Cloud Data Connector Using Azure Function https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/rubrik-security-cloud-data-connector-using-azure-function.md
Rubrik_ThreatHunt_Data_CL
To integrate with Rubrik Security Cloud data connector (using Azure Function) make sure you have: -- **Microsoft.Web/sites permissions**: Read and write permissions to Azure Functions to create a Function App is required. [See the documentation to learn more about Azure Functions](https://learn.microsoft.com/azure/azure-functions/).
+- **Microsoft.Web/sites permissions**: Read and write permissions to Azure Functions to create a Function App is required. [See the documentation to learn more about Azure Functions](/azure/azure-functions).
## Vendor installation instructions
To integrate with Rubrik Security Cloud data connector (using Azure Function) ma
> This connector uses Azure Functions to connect to the Rubrik webhook, which pushes its logs into Microsoft Sentinel. This might result in additional data ingestion costs. Check the [Azure Functions pricing page](https://azure.microsoft.com/pricing/details/functions/) for details.
->**(Optional Step)** Securely store workspace and API authorization key(s) or token(s) in Azure Key Vault. Azure Key Vault provides a secure mechanism to store and retrieve key values. [Follow these instructions](https://learn.microsoft.com/azure/app-service/app-service-key-vault-references) to use Azure Key Vault with an Azure Function App.
+>**(Optional Step)** Securely store workspace and API authorization key(s) or token(s) in Azure Key Vault. Azure Key Vault provides a secure mechanism to store and retrieve key values. [Follow these instructions](/azure/app-service/app-service-key-vault-references) to use Azure Key Vault with an Azure Function App.
**STEP 1 - Choose ONE from the following two deployment options to deploy the connector and the associated Azure Function**
Use the following step-by-step instructions to deploy the Rubrik Sentinel data c
**1. Deploy a Function App**
-> **NOTE:** You will need to [prepare VS code](https://learn.microsoft.com/azure/azure-functions/functions-create-first-function-python#prerequisites) for Azure function development.
+> **NOTE:** You will need to [prepare VS code](/azure/azure-functions/functions-create-first-function-python) for Azure function development.
1. Download the [Azure Function App](https://aka.ms/sentinel-RubrikWebhookEvents-functionapp) file. Extract archive to your local development computer. 2. Start VS Code. Choose File in the main menu and select Open Folder.
sentinel Sailpoint Identitynow Using Azure Function https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/sailpoint-identitynow-using-azure-function.md
SailPointIDN_Triggers_CL
To integrate with SailPoint IdentityNow (using Azure Function) make sure you have: -- **Microsoft.Web/sites permissions**: Read and write permissions to Azure Functions to create a Function App is required. [See the documentation to learn more about Azure Functions](https://learn.microsoft.com/azure/azure-functions/).
+- **Microsoft.Web/sites permissions**: Read and write permissions to Azure Functions to create a Function App is required. [See the documentation to learn more about Azure Functions](/azure/azure-functions).
- **SailPoint IdentityNow API Authentication Credentials**: TENANT_ID, CLIENT_ID and CLIENT_SECRET are required for authentication.
To integrate with SailPoint IdentityNow (using Azure Function) make sure you hav
> This connector uses Azure Functions to connect to the SailPoint IdentityNow REST API to pull its logs into Microsoft Sentinel. This might result in additional data ingestion costs. Check the [Azure Functions pricing page](https://azure.microsoft.com/pricing/details/functions/) for details.
->**(Optional Step)** Securely store workspace and API authorization key(s) or token(s) in Azure Key Vault. Azure Key Vault provides a secure mechanism to store and retrieve key values. [Follow these instructions](https://learn.microsoft.com/azure/app-service/app-service-key-vault-references) to use Azure Key Vault with an Azure Function App.
+>**(Optional Step)** Securely store workspace and API authorization key(s) or token(s) in Azure Key Vault. Azure Key Vault provides a secure mechanism to store and retrieve key values. [Follow these instructions](/azure/app-service/app-service-key-vault-references) to use Azure Key Vault with an Azure Function App.
**STEP 1 - Configuration steps for the SailPoint IdentityNow API**
Use the following step-by-step instructions to deploy the SailPoint IdentityNow
**1. Deploy a Function App**
-> **NOTE:** You will need to [prepare VS code](https://learn.microsoft.com/azure/azure-functions/functions-create-first-function-python#prerequisites) for Azure function development.
+> **NOTE:** You will need to [prepare VS code](/azure/azure-functions/functions-create-first-function-python) for Azure function development.
1. Download the [Azure Function App](https://aka.ms/sentinel-sailpointidentitynow-functionapp) file. Extract archive to your local development computer. 2. Start VS Code. Choose File in the main menu and select Open Folder.
sentinel Salesforce Service Cloud Using Azure Function https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/salesforce-service-cloud-using-azure-function.md
SalesforceServiceCloud
To integrate with Salesforce Service Cloud (using Azure Function) make sure you have: -- **Microsoft.Web/sites permissions**: Read and write permissions to Azure Functions to create a Function App is required. [See the documentation to learn more about Azure Functions](https://learn.microsoft.com/azure/azure-functions/).
+- **Microsoft.Web/sites permissions**: Read and write permissions to Azure Functions to create a Function App is required. [See the documentation to learn more about Azure Functions](/azure/azure-functions).
- **REST API Credentials/permissions**: **Salesforce API Username**, **Salesforce API Password**, **Salesforce Security Token**, **Salesforce Consumer Key**, **Salesforce Consumer Secret** is required for REST API. [See the documentation to learn more about API](https://developer.salesforce.com/docs/atlas.en-us.api_rest.meta/api_rest/quickstart.htm).
To integrate with Salesforce Service Cloud (using Azure Function) make sure you
> This connector uses Azure Functions to connect to the Salesforce Lightning Platform REST API to pull its logs into Microsoft Sentinel. This might result in additional data ingestion costs. Check the [Azure Functions pricing page](https://azure.microsoft.com/pricing/details/functions/) for details.
->**(Optional Step)** Securely store workspace and API authorization key(s) or token(s) in Azure Key Vault. Azure Key Vault provides a secure mechanism to store and retrieve key values. [Follow these instructions](https://learn.microsoft.com/azure/app-service/app-service-key-vault-references) to use Azure Key Vault with an Azure Function App.
+>**(Optional Step)** Securely store workspace and API authorization key(s) or token(s) in Azure Key Vault. Azure Key Vault provides a secure mechanism to store and retrieve key values. [Follow these instructions](/azure/app-service/app-service-key-vault-references) to use Azure Key Vault with an Azure Function App.
>This data connector depends on a parser based on a Kusto Function to work as expected. [Follow these steps](https://aka.ms/sentinel-SalesforceServiceCloud-parser) to create the Kusto functions alias **SalesforceServiceCloud**
Use the following step-by-step instructions to deploy the Salesforce Service Clo
**1. Deploy a Function App**
-> NOTE:You will need to [prepare VS code](https://learn.microsoft.com/azure/azure-functions/functions-create-first-function-python#prerequisites) for Azure function development.
+> NOTE:You will need to [prepare VS code](/azure/azure-functions/functions-create-first-function-python) for Azure function development.
1. Download the [Azure Function App](https://aka.ms/sentinel-SalesforceServiceCloud-functionapp) file. Extract archive to your local development computer. 2. Start VS Code. Choose File in the main menu and select Open Folder.
sentinel Senservapro https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/senservapro.md
let timeframe = 14d;
1. Setup the data connection
-Visit [Senserva Setup](https://www.senserva.com/senserva-setup/) for information on setting up the Senserva data connection, support, or any other questions. The Senserva installation will configure a Log Analytics Workspace for output. Deploy Microsoft Sentinel onto the configured Log Analytics Workspace to finish the data connection setup by following [this onboarding guide.](https://learn.microsoft.com/azure/sentinel/quickstart-onboard)
+Visit [Senserva Setup](https://www.senserva.com/senserva-setup/) for information on setting up the Senserva data connection, support, or any other questions. The Senserva installation will configure a Log Analytics Workspace for output. Deploy Microsoft Sentinel onto the configured Log Analytics Workspace to finish the data connection setup by following [this onboarding guide.](/azure/sentinel/quickstart-onboard)
sentinel Sentinelone Using Azure Function https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/sentinelone-using-azure-function.md
SentinelOne
To integrate with SentinelOne (using Azure Function) make sure you have: -- **Microsoft.Web/sites permissions**: Read and write permissions to Azure Functions to create a Function App is required. [See the documentation to learn more about Azure Functions](https://learn.microsoft.com/azure/azure-functions/).
+- **Microsoft.Web/sites permissions**: Read and write permissions to Azure Functions to create a Function App is required. [See the documentation to learn more about Azure Functions](/azure/azure-functions).
- **REST API Credentials/permissions**: **SentinelOneAPIToken** is required. See the documentation at `https://<SOneInstanceDomain>.sentinelone.net/api-doc/overview` to learn more about the API.
To integrate with SentinelOne (using Azure Function) make sure you have:
> This connector uses Azure Functions to connect to the SentinelOne API to pull its logs into Microsoft Sentinel. This might result in additional data ingestion costs. Check the [Azure Functions pricing page](https://azure.microsoft.com/pricing/details/functions/) for details.
->**(Optional Step)** Securely store workspace and API authorization key(s) or token(s) in Azure Key Vault. Azure Key Vault provides a secure mechanism to store and retrieve key values. [Follow these instructions](https://learn.microsoft.com/azure/app-service/app-service-key-vault-references) to use Azure Key Vault with an Azure Function App.
+>**(Optional Step)** Securely store workspace and API authorization key(s) or token(s) in Azure Key Vault. Azure Key Vault provides a secure mechanism to store and retrieve key values. [Follow these instructions](/azure/app-service/app-service-key-vault-references) to use Azure Key Vault with an Azure Function App.
> [!NOTE]
Use the following step-by-step instructions to deploy the SentinelOne Reports da
**1. Deploy a Function App**
-> **NOTE:** You will need to [prepare VS code](https://learn.microsoft.com/azure/azure-functions/functions-create-first-function-python#prerequisites) for Azure function development.
+> **NOTE:** You will need to [prepare VS code](/azure/azure-functions/functions-create-first-function-python) for Azure function development.
1. Download the [Azure Function App](https://aka.ms/sentinel-SentinelOneAPI-functionapp) file. Extract archive to your local development computer. 2. Start VS Code. Choose File in the main menu and select Open Folder.
sentinel Slack Audit Using Azure Function https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/slack-audit-using-azure-function.md
SlackAudit
To integrate with Slack Audit (using Azure Functions) make sure you have: -- **Microsoft.Web/sites permissions**: Read and write permissions to Azure Functions to create a Function App is required. [See the documentation to learn more about Azure Functions](https://learn.microsoft.com/azure/azure-functions/).
+- **Microsoft.Web/sites permissions**: Read and write permissions to Azure Functions to create a Function App is required. [See the documentation to learn more about Azure Functions](/azure/azure-functions).
- **REST API Credentials/permissions**: **SlackAPIBearerToken** is required for REST API. [See the documentation to learn more about API](https://api.slack.com/web#authentication). Check all [requirements and follow the instructions](https://api.slack.com/web#authentication) for obtaining credentials.
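For context on what the **SlackAPIBearerToken** is used for, the Audit Logs API is a simple bearer-token REST call. A hedged sketch follows; the endpoint and `limit` parameter reflect Slack's public Audit Logs API, but verify them against the documentation linked above.

```python
import requests

SLACK_AUDIT_URL = "https://api.slack.com/audit/v1/logs"

def fetch_audit_entries(bearer_token: str, limit: int = 100) -> list:
    """Pull one page of audit log entries from the Slack Audit Logs API."""
    response = requests.get(
        SLACK_AUDIT_URL,
        headers={"Authorization": f"Bearer {bearer_token}"},
        params={"limit": limit},
        timeout=30,
    )
    response.raise_for_status()
    return response.json().get("entries", [])

# entries = fetch_audit_entries("xoxp-...")  # requires an Enterprise Grid org-level token
```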
To integrate with Slack Audit (using Azure Functions) make sure you have:
> This connector uses Azure Functions to connect to the Slack REST API to pull its logs into Microsoft Sentinel. This might result in additional data ingestion costs. Check the [Azure Functions pricing page](https://azure.microsoft.com/pricing/details/functions/) for details.
->**(Optional Step)** Securely store workspace and API authorization key(s) or token(s) in Azure Key Vault. Azure Key Vault provides a secure mechanism to store and retrieve key values. [Follow these instructions](https://learn.microsoft.com/azure/app-service/app-service-key-vault-references) to use Azure Key Vault with an Azure Function App.
+>**(Optional Step)** Securely store workspace and API authorization key(s) or token(s) in Azure Key Vault. Azure Key Vault provides a secure mechanism to store and retrieve key values. [Follow these instructions](/azure/app-service/app-service-key-vault-references) to use Azure Key Vault with an Azure Function App.
> [!NOTE]
Use the following step-by-step instructions to deploy the Slack Audit data conne
**1. Deploy a Function App**
-> **NOTE:** You will need to [prepare VS code](https://learn.microsoft.com/azure/azure-functions/functions-create-first-function-python#prerequisites) for Azure function development.
+> **NOTE:** You will need to [prepare VS code](/azure/azure-functions/functions-create-first-function-python) for Azure function development.
1. Download the [Azure Function App](https://aka.ms/sentinel-SlackAuditAPI-functionapp) file. Extract archive to your local development computer. 2. Start VS Code. Choose File in the main menu and select Open Folder.
sentinel Snowflake Using Azure Function https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/snowflake-using-azure-function.md
Snowflake_CL
To integrate with Snowflake (using Azure Functions) make sure you have: -- **Microsoft.Web/sites permissions**: Read and write permissions to Azure Functions to create a Function App is required. [See the documentation to learn more about Azure Functions](https://learn.microsoft.com/azure/azure-functions/).
+- **Microsoft.Web/sites permissions**: Read and write permissions to Azure Functions to create a Function App is required. [See the documentation to learn more about Azure Functions](/azure/azure-functions).
- **Snowflake Credentials**: **Snowflake Account Identifier**, **Snowflake User** and **Snowflake Password** are required for connection. See the documentation to learn more about [Snowflake Account Identifier](https://docs.snowflake.com/en/user-guide/admin-account-identifier.html#). Instructions on how to create a user for this connector can be found below.
To integrate with Snowflake (using Azure Functions) make sure you have:
> This connector uses Azure Functions to connect to the Azure Blob Storage API to pull logs into Microsoft Sentinel. This might result in additional costs for data ingestion and for storing data in Azure Blob Storage. Check the [Azure Functions pricing page](https://azure.microsoft.com/pricing/details/functions/) and [Azure Blob Storage pricing page](https://azure.microsoft.com/pricing/details/storage/blobs/) for details.
->**(Optional Step)** Securely store workspace and API authorization key(s) or token(s) in Azure Key Vault. Azure Key Vault provides a secure mechanism to store and retrieve key values. [Follow these instructions](https://learn.microsoft.com/azure/app-service/app-service-key-vault-references) to use Azure Key Vault with an Azure Function App.
+>**(Optional Step)** Securely store workspace and API authorization key(s) or token(s) in Azure Key Vault. Azure Key Vault provides a secure mechanism to store and retrieve key values. [Follow these instructions](/azure/app-service/app-service-key-vault-references) to use Azure Key Vault with an Azure Function App.
> [!NOTE]
Use the following step-by-step instructions to deploy the data connector manuall
**1. Deploy a Function App**
-> **NOTE:** You will need to [prepare VS code](https://learn.microsoft.com/azure/azure-functions/create-first-function-vs-code-python) for Azure function development.
+> **NOTE:** You will need to [prepare VS code](/azure/azure-functions/create-first-function-vs-code-python) for Azure function development.
1. Download the [Azure Function App](https://aka.ms/sentinel-SnowflakeDataConnector-functionapp) file. Extract archive to your local development computer. 2. Start VS Code. Choose File in the main menu and select Open Folder.
sentinel Sophos Endpoint Protection Using Azure Function https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/sophos-endpoint-protection-using-azure-function.md
SophosEP_CL
To integrate with Sophos Endpoint Protection (using Azure Functions) make sure you have: -- **Microsoft.Web/sites permissions**: Read and write permissions to Azure Functions to create a Function App is required. [See the documentation to learn more about Azure Functions](https://learn.microsoft.com/azure/azure-functions/).
+- **Microsoft.Web/sites permissions**: Read and write permissions to Azure Functions to create a Function App is required. [See the documentation to learn more about Azure Functions](/azure/azure-functions).
- **REST API Credentials/permissions**: **API token** is required. [See the documentation to learn more about API token](https://docs.sophos.com/central/Customer/help/en-us/central/Customer/concepts/ep_ApiTokenManagement.html)
To integrate with Sophos Endpoint Protection (using Azure Functions) make sure y
> This connector uses Azure Functions to connect to the Sophos Central APIs to pull its logs into Microsoft Sentinel. This might result in additional data ingestion costs. Check the [Azure Functions pricing page](https://azure.microsoft.com/pricing/details/functions/) for details.
->**(Optional Step)** Securely store workspace and API authorization key(s) or token(s) in Azure Key Vault. Azure Key Vault provides a secure mechanism to store and retrieve key values. [Follow these instructions](https://learn.microsoft.com/azure/app-service/app-service-key-vault-references) to use Azure Key Vault with an Azure Function App.
+>**(Optional Step)** Securely store workspace and API authorization key(s) or token(s) in Azure Key Vault. Azure Key Vault provides a secure mechanism to store and retrieve key values. [Follow these instructions](/azure/app-service/app-service-key-vault-references) to use Azure Key Vault with an Azure Function App.
> [!NOTE]
Use the following step-by-step instructions to deploy the Sophos Endpoint Protec
**1. Deploy a Function App**
-> **NOTE:** You will need to [prepare VS code](https://learn.microsoft.com/azure/azure-functions/functions-create-first-function-python#prerequisites) for Azure function development.
+> **NOTE:** You will need to [prepare VS code](/azure/azure-functions/functions-create-first-function-python) for Azure function development.
1. Download the [Azure Function App](https://aka.ms/sentinel-SophosEP-functionapp) file. Extract archive to your local development computer. 2. Start VS Code. Choose File in the main menu and select Open Folder.
sentinel Tenable Io Vulnerability Management Using Azure Function https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/tenable-io-vulnerability-management-using-azure-function.md
Tenable_IO_Assets_CL
To integrate with Tenable.io Vulnerability Management (using Azure Function) make sure you have: -- **Microsoft.Web/sites permissions**: Read and write permissions to Azure Functions to create a Function App is required. [See the documentation to learn more about Azure Functions](https://learn.microsoft.com/azure/azure-functions/).
+- **Microsoft.Web/sites permissions**: Read and write permissions to Azure Functions to create a Function App is required. [See the documentation to learn more about Azure Functions](/azure/azure-functions).
- **REST API Credentials/permissions**: Both a **TenableAccessKey** and a **TenableSecretKey** is required to access the Tenable REST API. [See the documentation to learn more about API](https://developer.tenable.com/reference#vulnerability-management). Check all [requirements and follow the instructions](https://docs.tenable.com/tenableio/vulnerabilitymanagement/Content/Settings/GenerateAPIKey.htm) for obtaining credentials.
To integrate with Tenable.io Vulnerability Management (using Azure Function) mak
> This connector uses Azure Durable Functions to connect to the Tenable.io API to pull [assets](https://developer.tenable.com/reference#exports-assets-download-chunk) and [vulnerabilities](https://developer.tenable.com/reference#exports-vulns-request-export) at a regular interval into Microsoft Sentinel. This might result in additional data ingestion costs. Check the [Azure Functions pricing page](https://azure.microsoft.com/pricing/details/functions/) for details.
->**(Optional Step)** Securely store workspace and API authorization key(s) or token(s) in Azure Key Vault. Azure Key Vault provides a secure mechanism to store and retrieve key values. [Follow these instructions](https://learn.microsoft.com/azure/app-service/app-service-key-vault-references) to use Azure Key Vault with an Azure Function App.
+>**(Optional Step)** Securely store workspace and API authorization key(s) or token(s) in Azure Key Vault. Azure Key Vault provides a secure mechanism to store and retrieve key values. [Follow these instructions](/azure/app-service/app-service-key-vault-references) to use Azure Key Vault with an Azure Function App.
> [!NOTE]
Use the following step-by-step instructions to deploy the Tenable.io Vulnerabili
**1. Deploy a Function App**
-> **NOTE:** You will need to [prepare VS code](https://learn.microsoft.com/azure/azure-functions/functions-create-first-function-python#prerequisites) for Azure function development.
+> **NOTE:** You will need to [prepare VS code](/azure/azure-functions/functions-create-first-function-python) for Azure function development.
1. Download the [Azure Function App](https://aka.ms/sentinel-TenableIO-functionapp) file. Extract archive to your local development computer. 2. Start VS Code. Choose File in the main menu and select Open Folder.
sentinel Thehive Project Thehive Using Azure Function https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/thehive-project-thehive-using-azure-function.md
TheHive
To integrate with TheHive Project - TheHive (using Azure Functions) make sure you have: -- **Microsoft.Web/sites permissions**: Read and write permissions to Azure Functions to create a Function App is required. [See the documentation to learn more about Azure Functions](https://learn.microsoft.com/azure/azure-functions/).
+- **Microsoft.Web/sites permissions**: Read and write permissions to Azure Functions to create a Function App is required. [See the documentation to learn more about Azure Functions](/azure/azure-functions).
- **Webhooks Credentials/permissions**: **TheHiveBearerToken**, **Callback URL** are required for working Webhooks. See the documentation to learn more about [configuring Webhooks](https://docs.thehive-project.org/thehive/installation-and-configuration/configuration/webhooks/).
To integrate with TheHive Project - TheHive (using Azure Functions) make sure yo
> This data connector uses an HTTP-triggered Azure Function that waits for POST requests containing logs and pulls them into Microsoft Sentinel. This might result in additional data ingestion costs. Check the [Azure Functions pricing page](https://azure.microsoft.com/pricing/details/functions/) for details.
->**(Optional Step)** Securely store workspace and API authorization key(s) or token(s) in Azure Key Vault. Azure Key Vault provides a secure mechanism to store and retrieve key values. [Follow these instructions](https://learn.microsoft.com/azure/app-service/app-service-key-vault-references) to use Azure Key Vault with an Azure Function App.
+>**(Optional Step)** Securely store workspace and API authorization key(s) or token(s) in Azure Key Vault. Azure Key Vault provides a secure mechanism to store and retrieve key values. [Follow these instructions](/azure/app-service/app-service-key-vault-references) to use Azure Key Vault with an Azure Function App.
> [!NOTE]
Use the following step-by-step instructions to deploy the TheHive data connector
**1. Deploy a Function App**
-> **NOTE:** You will need to [prepare VS code](https://learn.microsoft.com/azure/azure-functions/functions-create-first-function-python#prerequisites) for Azure function development.
+> **NOTE:** You will need to [prepare VS code](/azure/azure-functions/functions-create-first-function-python) for Azure function development.
1. Download the [Azure Function App](https://aka.ms/sentinel-TheHive-functionapp) file. Extract archive to your local development computer. 2. Start VS Code. Choose File in the main menu and select Open Folder.
sentinel Trend Micro Vision One Using Azure Function https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/trend-micro-vision-one-using-azure-function.md
TrendMicro_XDR_WORKBENCH_CL
To integrate with Trend Micro Vision One (using Azure Function) make sure you have: -- **Microsoft.Web/sites permissions**: Read and write permissions to Azure Functions to create a Function App is required. [See the documentation to learn more about Azure Functions](https://learn.microsoft.com/azure/azure-functions/).
+- **Microsoft.Web/sites permissions**: Read and write permissions to Azure Functions to create a Function App is required. [See the documentation to learn more about Azure Functions](/azure/azure-functions).
- **Trend Micro Vision One API Token**: A Trend Micro Vision One API Token is required. See the documentation to learn more about the [Trend Micro Vision One API](https://automation.trendmicro.com/xdr/home).
To integrate with Trend Micro Vision One (using Azure Function) make sure you ha
> This connector uses Azure Functions to connect to the Trend Micro Vision One API to pull its logs into Microsoft Sentinel. This might result in additional data ingestion costs. Check the [Azure Functions pricing page](https://azure.microsoft.com/pricing/details/functions/) for details.
->**(Optional Step)** Securely store workspace and API authorization key(s) or token(s) in Azure Key Vault. Azure Key Vault provides a secure mechanism to store and retrieve key values. [Follow these instructions](https://learn.microsoft.com/azure/app-service/app-service-key-vault-references) to use Azure Key Vault with an Azure Function App.
+>**(Optional Step)** Securely store workspace and API authorization key(s) or token(s) in Azure Key Vault. Azure Key Vault provides a secure mechanism to store and retrieve key values. [Follow these instructions](/azure/app-service/app-service-key-vault-references) to use Azure Key Vault with an Azure Function App.
**STEP 1 - Configuration steps for the Trend Micro Vision One API**
This method provides an automated deployment of the Trend Micro Vision One conne
2. Select the preferred **Subscription**, **Resource Group** and **Location**. 3. Enter a unique **Function Name**, **Workspace ID**, **Workspace Key**, **API Token** and **Region Code**. - Note: Provide the appropriate region code based on where your Trend Micro Vision One instance is deployed: us, eu, au, in, sg, jp
+ - Note: If using Azure Key Vault secrets for any of the values above, use the`@Microsoft.KeyVault(SecretUri={Security Identifier})`schema in place of the string values. Refer to [Key Vault references documentation](/azure/app-service/app-service-key-vault-references) for further details.
4. Mark the checkbox labeled **I agree to the terms and conditions stated above**. 5. Click **Purchase** to deploy.
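The **Region Code** value determines which regional Vision One endpoint the Function App calls. A trivial sketch of validating it against the list in the note above (the mapping to actual regional hostnames is deliberately left out, since it is version-specific; the function name is illustrative):

```python
VALID_REGION_CODES = {"us", "eu", "au", "in", "sg", "jp"}

def validate_region_code(region_code: str) -> str:
    """Normalize and validate the Trend Micro Vision One region code setting."""
    normalized = region_code.strip().lower()
    if normalized not in VALID_REGION_CODES:
        raise ValueError(
            f"Unknown region code '{region_code}'; expected one of {sorted(VALID_REGION_CODES)}"
        )
    return normalized

print(validate_region_code(" EU "))  # -> 'eu'
```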
sentinel Ubiquiti Unifi https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/ubiquiti-unifi.md
Install the agent on the Server to which the Ubiquiti logs are forwarded from Ub
2. Configure the logs to be collected
-Follow the configuration steps below to get Ubiquiti logs into Microsoft Sentinel. Refer to the [Azure Monitor Documentation](https://learn.microsoft.com/azure/azure-monitor/agents/data-sources-json) for more details on these steps.
+Follow the configuration steps below to get Ubiquiti logs into Microsoft Sentinel. Refer to the [Azure Monitor Documentation](/azure/azure-monitor/agents/data-sources-json) for more details on these steps.
1. Configure log forwarding on your Ubiquiti controller: i. Go to Settings > System Setting > Controller Configuration > Remote Logging and enable the Syslog and Debugging (optional) logs (Refer to [User Guide](https://dl.ui.com/guides/UniFi/UniFi_Controller_V5_UG.pdf) for detailed instructions).
sentinel Vmware Carbon Black Cloud Using Azure Function https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/vmware-carbon-black-cloud-using-azure-function.md
CarbonBlackNotifications_CL
To integrate with VMware Carbon Black Cloud (using Azure Function) make sure you have: -- **Microsoft.Web/sites permissions**: Read and write permissions to Azure Functions to create a Function App is required. [See the documentation to learn more about Azure Functions](https://learn.microsoft.com/azure/azure-functions/).
+- **Microsoft.Web/sites permissions**: Read and write permissions to Azure Functions to create a Function App is required. [See the documentation to learn more about Azure Functions](/azure/azure-functions).
- **VMware Carbon Black API Key(s)**: Carbon Black API and/or SIEM Level API Key(s) are required. See the documentation to learn more about the [Carbon Black API](https://developer.carbonblack.com/reference/carbon-black-cloud/cb-defense/latest/rest-api/). - A Carbon Black **API** access level API ID and Key is required for [Audit](https://developer.carbonblack.com/reference/carbon-black-cloud/cb-defense/latest/rest-api/#audit-log-events) and [Event](https://developer.carbonblack.com/reference/carbon-black-cloud/platform/latest/data-forwarder-config-api/) logs. - A Carbon Black **SIEM** access level API ID and Key is required for [Notification](https://developer.carbonblack.com/reference/carbon-black-cloud/cb-defense/latest/rest-api/#notifications) alerts.
To integrate with VMware Carbon Black Cloud (using Azure Function) make sure you
> This connector uses Azure Functions to connect to VMware Carbon Black to pull its logs into Microsoft Sentinel. This might result in additional data ingestion costs. Check the [Azure Functions pricing page](https://azure.microsoft.com/pricing/details/functions/) for details.
->**(Optional Step)** Securely store workspace and API authorization key(s) or token(s) in Azure Key Vault. Azure Key Vault provides a secure mechanism to store and retrieve key values. [Follow these instructions](https://learn.microsoft.com/azure/app-service/app-service-key-vault-references) to use Azure Key Vault with an Azure Function App.
+>**(Optional Step)** Securely store workspace and API authorization key(s) or token(s) in Azure Key Vault. Azure Key Vault provides a secure mechanism to store and retrieve key values. [Follow these instructions](/azure/app-service/app-service-key-vault-references) to use Azure Key Vault with an Azure Function App.
**STEP 1 - Configuration steps for the VMware Carbon Black API**
This method provides an automated deployment of the VMware Carbon Black connecto
> - Enter the URI that corresponds to your region. The complete list of API URLs can be [found here](https://community.carbonblack.com/t5/Knowledge-Base/PSC-What-URLs-are-used-to-access-the-APIs/ta-p/67346) - The default **Time Interval** is set to pull the last five (5) minutes of data. If the time interval needs to be modified, it is recommended to change the Function App Timer Trigger accordingly (in the function.json file, post deployment) to prevent overlapping data ingestion. - Carbon Black requires a separate set of API ID/Keys to ingest Notification alerts. Enter the SIEM API ID/Key values or leave blank, if not required.
-> - Note: If using Azure Key Vault secrets for any of the values above, use the`@Microsoft.KeyVault(SecretUri={Security Identifier})`schema in place of the string values. Refer to [Key Vault references documentation](https://learn.microsoft.com/azure/app-service/app-service-key-vault-references) for further details.
+> - Note: If using Azure Key Vault secrets for any of the values above, use the`@Microsoft.KeyVault(SecretUri={Security Identifier})`schema in place of the string values. Refer to [Key Vault references documentation](/azure/app-service/app-service-key-vault-references) for further details.
4. Mark the checkbox labeled **I agree to the terms and conditions stated above**. 5. Click **Purchase** to deploy.
Use the following step-by-step instructions to deploy the VMware Carbon Black co
> - Enter the URI that corresponds to your region. The complete list of API URLs can be [found here](https://community.carbonblack.com/t5/Knowledge-Base/PSC-What-URLs-are-used-to-access-the-APIs/ta-p/67346). The `uri` value must follow the following schema: `https://<API URL>.conferdeploy.net` - There is no need to add a time suffix to the URI, the Function App will dynamically append the Time Value to the URI in the proper format. > - Set the `timeInterval` (in minutes) to the default value of `5` to correspond to the default Timer Trigger of every `5` minutes. If the time interval needs to be modified, it is recommended to change the Function App Timer Trigger accordingly to prevent overlapping data ingestion. > - Carbon Black requires a separate set of API ID/Keys to ingest Notification alerts. Enter the `SIEMapiId` and `SIEMapiKey` values, if needed, or omit, if not required.
-> - Note: If using Azure Key Vault, use the`@Microsoft.KeyVault(SecretUri={Security Identifier})`schema in place of the string values. Refer to [Key Vault references documentation](https://learn.microsoft.com/azure/app-service/app-service-key-vault-references) for further details.
+> - Note: If using Azure Key Vault, use the`@Microsoft.KeyVault(SecretUri={Security Identifier})`schema in place of the string values. Refer to [Key Vault references documentation](/azure/app-service/app-service-key-vault-references) for further details.
> - Use logAnalyticsUri to override the log analytics API endpoint for dedicated cloud. For example, for public cloud, leave the value empty; for Azure GovUS cloud environment, specify the value in the following format: `https://<CustomerId>.ods.opinsights.azure.us` 4. Once all application settings have been entered, click **Save**.
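Because the SIEM-level API ID/Key pair is optional, the Function App only ingests Notification alerts when both values are present. Below is a hedged sketch of that conditional; the setting names mirror the `SIEMapiId`/`SIEMapiKey` variables mentioned above, while the endpoint path and header format are assumptions drawn from Carbon Black's public notifications API and should be verified against the linked documentation.

```python
import os
import requests

def fetch_notifications(api_url: str) -> list:
    """Pull Carbon Black notification alerts only if the optional SIEM credentials are configured."""
    siem_api_id = os.environ.get("SIEMapiId", "")
    siem_api_key = os.environ.get("SIEMapiKey", "")
    if not (siem_api_id and siem_api_key):
        return []  # Notification ingestion skipped, as the deployment steps allow.

    response = requests.get(
        f"https://{api_url}.conferdeploy.net/integrationServices/v3/notification",
        headers={"X-Auth-Token": f"{siem_api_key}/{siem_api_id}"},
        timeout=30,
    )
    response.raise_for_status()
    return response.json().get("notifications", [])
```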
sentinel Vmware Vcenter https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/vmware-vcenter.md
Typically, you should install the agent on a different computer from the one on
2. Configure the logs to be collected
-Follow the configuration steps below to get vCenter server logs into Microsoft Sentinel. Refer to the [Azure Monitor Documentation](https://learn.microsoft.com/azure/azure-monitor/agents/data-sources-json) for more details on these steps.
+Follow the configuration steps below to get vCenter server logs into Microsoft Sentinel. Refer to the [Azure Monitor Documentation](/azure/azure-monitor/agents/data-sources-json) for more details on these steps.
For vCenter Server logs, the OMS agent has trouble parsing the data with its default settings, so we advise capturing the logs into the custom table **vCenter_CL** using the instructions below. 1. Log in to the server where you have installed the OMS agent.
sentinel Windows Firewall Events Via Ama https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/windows-firewall-events-via-ama.md
ASimNetworkSessionLogs
To integrate with Windows Firewall Events via AMA (Preview) make sure you have: -- ****: To collect data from non-Azure VMs, they must have Azure Arc installed and enabled. [Learn more](https://learn.microsoft.com/azure/azure-monitor/agents/azure-monitor-agent-install?tabs=ARMAgentPowerShell,PowerShellWindows,PowerShellWindowsArc,CLIWindows,CLIWindowsArc)
+- ****: To collect data from non-Azure VMs, they must have Azure Arc installed and enabled. [Learn more](/azure/azure-monitor/agents/azure-monitor-agent-install?tabs=ARMAgentPowerShell,PowerShellWindows,PowerShellWindowsArc,CLIWindows,CLIWindowsArc)
## Vendor installation instructions
sentinel Workplace From Facebook Using Azure Function https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/workplace-from-facebook-using-azure-function.md
Workplace_Facebook
To integrate with Workplace from Facebook (using Azure Functions) make sure you have: -- **Microsoft.Web/sites permissions**: Read and write permissions to Azure Functions to create a Function App is required. [See the documentation to learn more about Azure Functions](https://learn.microsoft.com/azure/azure-functions/).
+- **Microsoft.Web/sites permissions**: Read and write permissions to Azure Functions to create a Function App is required. [See the documentation to learn more about Azure Functions](/azure/azure-functions).
- **Webhooks Credentials/permissions**: WorkplaceAppSecret, WorkplaceVerifyToken, Callback URL are required for working Webhooks. See the documentation to learn more about [configuring Webhooks](https://developers.facebook.com/docs/workplace/reference/webhooks), [configuring permissions](https://developers.facebook.com/docs/workplace/reference/permissions).
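The **WorkplaceVerifyToken** is used during the webhook subscription handshake: Workplace sends a GET request with `hub.verify_token` and `hub.challenge`, and the endpoint must echo the challenge back. A minimal HTTP-trigger sketch follows; it is not the connector's actual code, and the function layout and setting name are illustrative.

```python
import os
import azure.functions as func

def main(req: func.HttpRequest) -> func.HttpResponse:
    """Answer the Workplace webhook verification handshake."""
    if (req.params.get("hub.mode") == "subscribe"
            and req.params.get("hub.verify_token") == os.environ.get("WorkplaceVerifyToken")):
        # Echo the challenge so Workplace marks the callback URL as verified.
        return func.HttpResponse(req.params.get("hub.challenge", ""), status_code=200)
    return func.HttpResponse("Verification failed", status_code=403)
```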
To integrate with Workplace from Facebook (using Azure Functions) make sure you
> This data connector uses an HTTP-triggered Azure Function that waits for POST requests containing logs and pulls them into Microsoft Sentinel. This might result in additional data ingestion costs. Check the [Azure Functions pricing page](https://azure.microsoft.com/pricing/details/functions/) for details.
->**(Optional Step)** Securely store workspace and API authorization key(s) or token(s) in Azure Key Vault. Azure Key Vault provides a secure mechanism to store and retrieve key values. [Follow these instructions](https://learn.microsoft.com/azure/app-service/app-service-key-vault-references) to use Azure Key Vault with an Azure Function App.
+>**(Optional Step)** Securely store workspace and API authorization key(s) or token(s) in Azure Key Vault. Azure Key Vault provides a secure mechanism to store and retrieve key values. [Follow these instructions](/azure/app-service/app-service-key-vault-references) to use Azure Key Vault with an Azure Function App.
> [!NOTE]
Use the following step-by-step instructions to deploy the Workplace data connect
**1. Deploy a Function App**
-> **NOTE:** You will need to [prepare VS code](https://learn.microsoft.com/azure/azure-functions/functions-create-first-function-python#prerequisites) for Azure function development.
+> **NOTE:** You will need to [prepare VS code](/azure/azure-functions/functions-create-first-function-python) for Azure function development.
1. Download the [Azure Function App](https://aka.ms/sentinel-WorkplaceFacebook-functionapp) file. Extract archive to your local development computer. 2. Start VS Code. Choose File in the main menu and select Open Folder.
sentinel Zero Networks Segment Audit Function Using Azure Function https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/zero-networks-segment-audit-function-using-azure-function.md
ZNSegmentAudit_CL
To integrate with Zero Networks Segment Audit (Function) (using Azure Functions) make sure you have: -- **Microsoft.Web/sites permissions**: Read and write permissions to Azure Functions to create a Function App is required. [See the documentation to learn more about Azure Functions](https://learn.microsoft.com/azure/azure-functions/).
+- **Microsoft.Web/sites permissions**: Read and write permissions to Azure Functions to create a Function App is required. [See the documentation to learn more about Azure Functions](/azure/azure-functions).
- **REST API Credentials**: **Zero Networks Segment** **API Token** is required for REST API. See the API Guide.
To integrate with Zero Networks Segment Audit (Function) (using Azure Functions)
> This connector uses Azure Functions to connect to the Zero Networks REST API to pull its logs into Microsoft Sentinel. This might result in additional data ingestion costs. Check the [Azure Functions pricing page](https://azure.microsoft.com/pricing/details/functions/) for details.
->**(Optional Step)** Securely store workspace and API authorization key(s) or token(s) in Azure Key Vault. Azure Key Vault provides a secure mechanism to store and retrieve key values. [Follow these instructions](https://learn.microsoft.com/azure/app-service/app-service-key-vault-references) to use Azure Key Vault with an Azure Function App.
+>**(Optional Step)** Securely store workspace and API authorization key(s) or token(s) in Azure Key Vault. Azure Key Vault provides a secure mechanism to store and retrieve key values. [Follow these instructions](/azure/app-service/app-service-key-vault-references) to use Azure Key Vault with an Azure Function App.
**STEP 1 - Configuration steps for the Zero Networks API**
Use the following step-by-step instructions to deploy the Zero Networks Segment
**1. Deploy a Function App**
-> **NOTE:** You will need to [prepare VS code](https://learn.microsoft.com/azure/azure-functions/functions-create-first-function-powershell#prerequisites) for Azure function development.
+> **NOTE:** You will need to [prepare VS code](/azure/azure-functions/functions-create-first-function-powershell#prerequisites) for Azure function development.
1. Download the [Azure Function App](https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/ZeroNetworks/SegmentFunctionConnector/AzureFunction_ZeroNetworks_Segment_Audit.zip) file. Extract archive to your local development computer. 2. Start VS Code. Choose File in the main menu and select Open Folder.
sentinel Zoom Reports Using Azure Function https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/zoom-reports-using-azure-function.md
Zoom
To integrate with Zoom Reports (using Azure Function) make sure you have: -- **Microsoft.Web/sites permissions**: Read and write permissions to Azure Functions to create a Function App is required. [See the documentation to learn more about Azure Functions](https://learn.microsoft.com/azure/azure-functions/).
+- **Microsoft.Web/sites permissions**: Read and write permissions to Azure Functions to create a Function App is required. [See the documentation to learn more about Azure Functions](/azure/azure-functions).
- **REST API Credentials/permissions**: **ZoomApiKey** and **ZoomApiSecret** are required for Zoom API. [See the documentation to learn more about API](https://marketplace.zoom.us/docs/guides/auth/jwt). Check all [requirements and follow the instructions](https://marketplace.zoom.us/docs/guides/auth/jwt) for obtaining credentials.
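The **ZoomApiKey**/**ZoomApiSecret** pair is used to mint the short-lived JWT that the Zoom API expects, per the JWT guide linked above. A hedged sketch using PyJWT; the claim names follow Zoom's JWT app model, while the five-minute expiry window and the example endpoint are assumptions for illustration.

```python
import time
import jwt       # PyJWT
import requests

def zoom_jwt(api_key: str, api_secret: str, ttl_seconds: int = 300) -> str:
    """Create a short-lived JWT for the Zoom REST API."""
    payload = {"iss": api_key, "exp": int(time.time()) + ttl_seconds}
    return jwt.encode(payload, api_secret, algorithm="HS256")

def list_users(api_key: str, api_secret: str) -> dict:
    """Example call only; the connector itself pulls report endpoints."""
    token = zoom_jwt(api_key, api_secret)
    response = requests.get(
        "https://api.zoom.us/v2/users",
        headers={"Authorization": f"Bearer {token}"},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()
```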
To integrate with Zoom Reports (using Azure Function) make sure you have:
> This connector uses Azure Functions to connect to the Zoom API to pull its logs into Microsoft Sentinel. This might result in additional data ingestion costs. Check the [Azure Functions pricing page](https://azure.microsoft.com/pricing/details/functions/) for details.
->**(Optional Step)** Securely store workspace and API authorization key(s) or token(s) in Azure Key Vault. Azure Key Vault provides a secure mechanism to store and retrieve key values. [Follow these instructions](https://learn.microsoft.com/azure/app-service/app-service-key-vault-references) to use Azure Key Vault with an Azure Function App.
+>**(Optional Step)** Securely store workspace and API authorization key(s) or token(s) in Azure Key Vault. Azure Key Vault provides a secure mechanism to store and retrieve key values. [Follow these instructions](/azure/app-service/app-service-key-vault-references) to use Azure Key Vault with an Azure Function App.
> [!NOTE]
Use the following step-by-step instructions to deploy the Zoom Reports data conn
**1. Deploy a Function App**
-> **NOTE:** You will need to [prepare VS code](https://learn.microsoft.com/azure/azure-functions/functions-create-first-function-python#prerequisites) for Azure function development.
+> **NOTE:** You will need to [prepare VS Code](/azure/azure-functions/functions-create-first-function-python#prerequisites) for Azure Functions development.
1. Download the [Azure Function App](https://aka.ms/sentinel-ZoomAPI-functionapp) file. Extract archive to your local development computer. 2. Start VS Code. Choose File in the main menu and select Open Folder.
sentinel Zscaler Private Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/zscaler-private-access.md
Install the agent on the Server where the Zscaler Private Access logs are forwar
2. Configure the logs to be collected
-Follow the configuration steps below to get Zscaler Private Access logs into Microsoft Sentinel. Refer to the [Azure Monitor Documentation](https://learn.microsoft.com/azure/azure-monitor/agents/data-sources-json) for more details on these steps.
+Follow the configuration steps below to get Zscaler Private Access logs into Microsoft Sentinel. Refer to the [Azure Monitor Documentation](/azure/azure-monitor/agents/data-sources-json) for more details on these steps.
Zscaler Private Access logs are delivered via Log Streaming Service (LSS). Refer to [LSS documentation](https://help.zscaler.com/zpa/about-log-streaming-service) for detailed information 1. Configure [Log Receivers](https://help.zscaler.com/zpa/configuring-log-receiver). While configuring a Log Receiver, choose **JSON** as **Log Template**. 2. Download config file [zpa.conf](https://aka.ms/sentinel-ZscalerPrivateAccess-conf)
sentinel Investigate Large Datasets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/investigate-large-datasets.md
Before you start a search job, be aware of the following limitations:
- Limited to 100 search results tables per workspace. - Limited to 100 search job executions per day per workspace.
+Search jobs aren't currently supported for the following workspaces:
+
+- Customer-managed key enabled workspaces
+- Workspaces in the China East 2 region
+ To learn more, see [Search job in Azure Monitor](../azure-monitor/logs/search-jobs.md) in the Azure Monitor documentation. ## Restore historical data from archived logs
sentinel Configure Audit https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sap/configure-audit.md
This article shows you how to enable and configure auditing for the Microsoft Se
> [!IMPORTANT] > We strongly recommend that any management of your SAP system is carried out by an experienced SAP system administrator. >
-> The steps in this article may vary, depending on your SAP sytem's version, and should be considered as a sample only.
+> The steps in this article may vary, depending on your SAP system's version, and should be considered as a sample only.
Some installations of SAP systems may not have audit log enabled by default. For best results in evaluating the performance and efficacy of the Microsoft Sentinel solution for SAP® applications, enable auditing of your SAP system and configure the audit parameters.
sentinel Search Jobs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/search-jobs.md
Use a search job when you start an investigation to find specific events in logs up to seven years ago. You can search events across all your logs, including events in Analytics, Basic, and Archived log plans. Filter and look for events that match your criteria.
-For more information on search job concepts, see [Start an investigation by searching large datasets](investigate-large-datasets.md) and [Search jobs in Azure Monitor](../azure-monitor/logs/search-jobs.md).
+For more information on search job concepts and limitations, see [Start an investigation by searching large datasets](investigate-large-datasets.md) and [Search jobs in Azure Monitor](../azure-monitor/logs/search-jobs.md).
## Start a search job
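As a sketch of a CLI alternative, assuming the Log Analytics search job commands are available in your Azure CLI version, you could create a search job like the following; the table name, query, and time range are placeholders, and the results table name must end with the `_SRCH` suffix:

```azurecli
# Create a search job that writes matching records to a new *_SRCH results table.
az monitor log-analytics workspace table search-job create \
    --resource-group <resource-group-name> \
    --workspace-name <workspace-name> \
    --name Heartbeat_SRCH \
    --search-query "Heartbeat | where Computer == 'contoso-vm'" \
    --start-search-time "2023-01-01T00:00:00Z" \
    --end-search-time "2023-01-08T00:00:00Z" \
    --limit 1000
```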
To learn more, see the following topics.
- [Hunt with bookmarks](bookmarks.md) - [Restore archived logs](restore.md)-- [Configure data retention and archive policies in Azure Monitor Logs (Preview)](../azure-monitor/logs/data-retention-archive.md)
+- [Configure data retention and archive policies in Azure Monitor Logs (Preview)](../azure-monitor/logs/data-retention-archive.md)
sentinel Threat Intelligence Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/threat-intelligence-integration.md
To connect to TAXII threat intelligence feeds, follow the instructions to [conne
- [Learn about Accenture CTI integration with Microsoft Sentinel](https://www.accenture.com/us-en/services/security/cyber-defense).
-### Anomali
--- [Learn how to import threat intelligence from Anomali ThreatStream into Microsoft Sentinel](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/import-anomali-threatstream-feed-into-microsoft-sentinel/ba-p/3561742#M3787)- ### Cybersixgill Darkfeed - [Learn about Cybersixgill integration with Microsoft Sentinel @Cybersixgill](https://www.cybersixgill.com/partners/azure-sentinel/)
service-health Impacted Resources Outage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-health/impacted-resources-outage.md
To export the list of impacted resources to an Excel file, select the **Export t
## Access impacted resources programmatically via an API
-You can get information about outage-impacted resources programmatically by using the Events API. For details on how to access this data, see the [API documentation](/rest/api/resourcehealth/2022-05-01/impacted-resources/list-by-subscription-id-and-event-id?tabs=HTTP).
+You can get information about outage-impacted resources programmatically by using the Events API. For details on how to access this data, see the [API documentation](/rest/api/resourcehealth/2022-10-01/impacted-resources).
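For example, the following is a sketch of calling that endpoint with `az rest`; the subscription ID and event tracking ID are placeholders, and you should confirm the exact path and `api-version` against the linked API documentation:

```azurecli
# List resources impacted by a specific service health event.
az rest --method get \
    --url "https://management.azure.com/subscriptions/<subscription-id>/providers/Microsoft.ResourceHealth/events/<event-tracking-id>/impactedResources?api-version=2022-10-01"
```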
## Next steps - [Introduction to the Azure Service Health dashboard](service-health-overview.md)
service-health Impacted Resources Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-health/impacted-resources-security.md
The following examples show a security incident with impacted resources from the
## Accessing Impacted Resources programmatically via an API
-Impacted resource information for security incidents can be retrieved programmatically using the Events API. To access the list of resources impacted by a security incident, users authorized with the above-mentioned roles can use the following endpoints.
+Impacted resource information for security incidents can be retrieved programmatically using the Events API. To access the list of resources impacted by a security incident, users authorized with the above-mentioned roles can use the endpoints below. For details on how to access this data, see the [API documentation](/rest/api/resourcehealth/2022-10-01/security-advisory-impacted-resources).
**Subscription**
site-recovery Azure To Azure How To Enable Replication Private Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/azure-to-azure-how-to-enable-replication-private-endpoints.md
following role permissions depending on the type of storage account:
The following steps describe how to add a role assignment to your storage accounts, one at a time. For detailed steps, see [Assign Azure roles using the Azure portal](../role-based-access-control/role-assignments-portal.md).
-1. In the Azure portal, navigate to your Azure SQL Server page.
+1. In the Azure portal, navigate to the cache storage account you created.
1. Select **Access control (IAM)**.
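If you prefer the CLI, the following sketch shows an equivalent role assignment; the principal object ID and the role name (which depends on your storage account type) are placeholders:

```azurecli
# Assign the required role on the cache storage account to the identity that needs access.
az role assignment create \
    --assignee-object-id <principal-object-id> \
    --assignee-principal-type ServicePrincipal \
    --role "<role-name-for-your-storage-account-type>" \
    --scope $(az storage account show \
        --resource-group <resource-group-name> \
        --name <cache-storage-account-name> \
        --query id --output tsv)
```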
site-recovery Azure To Azure Tutorial Failback https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/azure-to-azure-tutorial-failback.md
After VMs are reprotected, you can fail back to the primary region as needed.
3. On the overview page, select **Failover**. Since we're not doing a test failover this time, we're prompted to verify.
- [Page showing we agree to run failover without a test failover](./media/azure-to-azure-tutorial-failback/no-test.png)
+ ![Page showing we agree to run failover without a test failover](./media/azure-to-azure-tutorial-failback/no-test.png)
4. In **Failover**, note the direction from secondary to primary, and select a recovery point. The Azure VM in the target (primary region) is created using data from this point. - **Latest processed**: Uses the latest recovery point processed by Site Recovery. The time stamp is shown. No time is spent processing data, so it provides a low recovery time objective (RTO).
site-recovery How To Migrate Run As Accounts Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/how-to-migrate-run-as-accounts-managed-identity.md
Last updated 04/04/2023
# Migrate from a Run As account to Managed Identities > [!IMPORTANT]
-> - Azure Automation Run As Account will retire on September 30, 2023 and will be replaced with Managed Identities. Before that date, you'll need to start migrating your runbooks to use managed identities. For more information, see [migrating from an existing Run As accounts to managed identity](/articles/automation/automation-managed-identity-faq.md).
+> - Azure Automation Run As Account will retire on September 30, 2023 and will be replaced with Managed Identities. Before that date, you'll need to start migrating your runbooks to use managed identities. For more information, see [migrating from an existing Run As accounts to managed identity](/azure/automation/automation-managed-identity-faq).
> - Delaying the feature has a direct impact on our support burden, as it would cause upgrades of mobility agent to fail. This article shows you how to migrate your runbooks to use a Managed Identities for Azure Site Recovery. Azure Automation Accounts are used by Azure Site Recovery customers to auto-update the agents of their protected virtual machines. Site Recovery creates Azure Automation Run As Accounts when you enable replication via the IaaS VM Blade and Recovery Services Vault.
To link an existing managed identity Automation account to your Recovery Service
## Next steps Learn more about:-- [Managed identities](/articles/active-directory/managed-identities-azure-resources/overview.md).
+- [Managed identities](/azure/active-directory/managed-identities-azure-resources/overview).
- [Implementing managed identities for Microsoft Azure Resources](https://www.pluralsight.com/courses/microsoft-azure-resources-managed-identities-implementing).
site-recovery Vmware Azure Troubleshoot Replication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/vmware-azure-troubleshoot-replication.md
Title: Troubleshoot replication issues for disaster recovery of VMware VMs and physical servers to Azure by using Azure Site Recovery | Microsoft Docs
+ Title: Troubleshoot replication issues for disaster recovery of VMware VMs and physical servers to Azure by using Azure Site Recovery
description: This article provides troubleshooting information for common replication issues during disaster recovery of VMware VMs and physical servers to Azure by using Azure Site Recovery. - Previously updated : 05/02/2022+ Last updated : 04/20/2023
This article describes some common issues and specific errors you might encounte
Site Recovery uses the [process server](vmware-physical-azure-config-process-server-overview.md#process-server) to receive and optimize replicated data, and send it to Azure.
-We recommend that you monitor the health of process servers in portal, to ensure that they are connected and working properly, and that replication is progressing for the source machines associated with the process server.
+We recommend that you monitor the health of process servers in the portal to ensure that they're connected and working properly, and that replication is progressing for the source machines that are associated with the process server.
- [Learn about](vmware-physical-azure-monitor-process-server.md) monitoring process servers. - [Review best practices](vmware-physical-azure-troubleshoot-process-server.md#best-practices-for-process-server-deployment)
Initial and ongoing replication failures often are caused by connectivity issues
To solve these issues, [troubleshoot connectivity and replication](vmware-physical-azure-troubleshoot-process-server.md#check-connectivity-and-replication). --- ## Step 3: Troubleshoot source machines that aren't available for replication When you try to select the source machine to enable replication by using Site Recovery, the machine might not be available for one of the following reasons:
When you try to select the source machine to enable replication by using Site Re
* **Two virtual machines with same instance UUID**: If two virtual machines under the vCenter have the same instance UUID, the first virtual machine discovered by the configuration server is shown in the Azure portal. To resolve this issue, ensure that no two virtual machines have the same instance UUID. This scenario is commonly seen in instances where a backup VM becomes active and is logged into our discovery records. Refer to [Azure Site Recovery VMware-to-Azure: How to clean up duplicate or stale entries](https://social.technet.microsoft.com/wiki/contents/articles/32026.asr-vmware-to-azure-how-to-cleanup-duplicatestale-entries.aspx) to resolve. * **Incorrect vCenter user credentials**: Ensure that you added the correct vCenter credentials when you set up the configuration server by using the OVF template or unified setup. To verify the credentials that you added during setup, see [Modify credentials for automatic discovery](vmware-azure-manage-configuration-server.md#modify-credentials-for-automatic-discovery). * **vCenter insufficient privileges**: If the permissions provided to access vCenter don't have the required permissions, failure to discover virtual machines might occur. Ensure that the permissions described in [Prepare an account for automatic discovery](vmware-azure-tutorial-prepare-on-premises.md#prepare-an-account-for-automatic-discovery) are added to the vCenter user account.
-* **Azure Site Recovery management servers**: If the virtual machine is used as management server under one or more of the following roles - Configuration server /scale-out process server / Master target server, then you will not be able to choose the virtual machine from portal. Managements servers cannot be replicated.
+* **Azure Site Recovery management servers**: If the virtual machine is used as a management server under one or more of the following roles - Configuration server / scale-out process server / Master target server, then you won't be able to choose the virtual machine from the portal. Management servers can't be replicated.
* **Already protected/failed over through Azure Site Recovery services**: If the virtual machine is already protected or failed over through Site Recovery, the virtual machine isn't available to select for protection in the portal. Ensure that the virtual machine you're looking for in the portal isn't already protected by any other user or under a different subscription.
-* **vCenter not connected**: Check if vCenter is in connected state. To verify, go to Recovery Services vault > Site Recovery Infrastructure > Configuration Servers > Click on respective configuration server > a blade opens on your right with details of associated servers. Check if vCenter is connected. If it's in a "Not Connected" state, resolve the issue and then [refresh the configuration server](vmware-azure-manage-configuration-server.md#refresh-configuration-server) on the portal. After this, virtual machine will be listed on the portal.
-* **ESXi powered off**: If ESXi host under which the virtual machine resides is in powered off state, then virtual machine will not be listed or will not be selectable on the Azure portal. Power on the ESXi host, [refresh the configuration server](vmware-azure-manage-configuration-server.md#refresh-configuration-server) on the portal. After this, virtual machine will be listed on the portal.
-* **Pending reboot**: If there is a pending reboot on the virtual machine, then you will not be able to select the machine on Azure portal. Ensure to complete the pending reboot activities, [refresh the configuration server](vmware-azure-manage-configuration-server.md#refresh-configuration-server). After this, virtual machine will be listed on the portal.
-* **IP not found or Machine does not have IP address**: If the virtual machine doesn't have a valid IP address associated with it, then you will not be able to select the machine on Azure portal. Ensure to assign a valid IP address to the virtual machine, [refresh the configuration server](vmware-azure-manage-configuration-server.md#refresh-configuration-server). It could also be caused if the machine does not have a valid IP address associated with one of its NIC's. Either assign a valid IP address to all NIC's or remove the NIC that's missing the IP. After this, virtual machine will be listed on the portal.
-
+* **vCenter not connected**: Check if vCenter is in a connected state. To verify, go to Recovery Services vault > Site Recovery Infrastructure > Configuration Servers > select the respective configuration server. A blade opens on your right with details of the associated servers. Check if vCenter is connected. If it's in a "*Not Connected*" state, resolve the issue, and then [refresh the configuration server](vmware-azure-manage-configuration-server.md#refresh-configuration-server) on the portal. After this, the virtual machine is listed on the portal.
+* **ESXi powered off**: If the ESXi host under which the virtual machine resides is in a powered-off state, the virtual machine isn't listed or can't be selected on the Azure portal. Power on the ESXi host, then [refresh the configuration server](vmware-azure-manage-configuration-server.md#refresh-configuration-server) on the portal. After this, the virtual machine is listed on the portal.
+* **Pending reboot**: If there's a pending reboot on the virtual machine, you won't be able to select the machine on the Azure portal. Complete the pending reboot activities, then [refresh the configuration server](vmware-azure-manage-configuration-server.md#refresh-configuration-server). After this, the virtual machine is listed on the portal.
+* **IP not found or Machine does not have IP address**: If the virtual machine doesn't have a valid IP address associated with it, you won't be able to select the machine on the Azure portal. Assign a valid IP address to the virtual machine, then [refresh the configuration server](vmware-azure-manage-configuration-server.md#refresh-configuration-server). This issue can also occur if the machine doesn't have a valid IP address associated with one of its NICs. Either assign a valid IP address to all NICs or remove the NIC that's missing the IP. After this, the virtual machine is listed on the portal.
### Troubleshoot protected virtual machines greyed out in the portal Virtual machines that are replicated under Site Recovery aren't available in the Azure portal if there are duplicate entries in the system. [Learn more](https://social.technet.microsoft.com/wiki/contents/articles/32026.asr-vmware-to-azure-how-to-cleanup-duplicatestale-entries.aspx) about deleting stale entries and resolving the issue.
-Another reason could be that the machine was cloned. When machines move between hypervisor and if BIOS ID changes, then the mobility agent blocks replication. Replication of cloned machines is not supported by Site Recovery.
+Another reason could be that the machine was cloned. When machines move between hypervisors and the BIOS ID changes, the mobility agent blocks replication. Site Recovery doesn't support replication of cloned machines.
## No crash consistent recovery point available for the VM in the last 'XXX' minutes
-Some of the most common issues are listed below
+The following is a list of some of the common issues:
### Initial replication issues [error 78169]
Over and above ensuring that there are no connectivity, bandwidth or time sync re
### Source machines with high churn [error 78188]
-Possible Causes:
+**Possible causes**:
+ - The data change rate (write bytes/sec) on the listed disks of the virtual machine is more than the [Azure Site Recovery supported limits](site-recovery-vmware-deployment-planner-analyze-report.md#azure-site-recovery-limits) for the replication target storage account type.-- There is a sudden spike in the churn rate due to which high amount of data is pending for upload.
+- There's a sudden spike in the churn rate, which causes a high amount of data to be pending for upload.
+
+**To resolve the issue**:
-To resolve the issue:
- Ensure that the target storage account type (Standard or Premium) is provisioned as per the churn rate requirement at source.-- If you are already replicating to a Premium managed disk (asrseeddisk type), ensure that the size of the disk supports the observed churn rate as per Site Recovery limits. You can increase the size of the asrseeddisk if required. Follow the below steps:
+- If you're already replicating to a Premium managed disk (asrseeddisk type), ensure that the size of the disk supports the observed churn rate as per Site Recovery limits. You can increase the size of the asrseeddisk if necessary. Follow these steps:
+ - Navigate to the Disks blade of the impacted replicated machine and copy the replica disk name - Navigate to this replica managed disk
- - You may see a banner on the Overview blade saying that a SAS URL has been generated. Click on this banner and cancel the export. Ignore this step if you do not see the banner.
+ - You may see a banner on the Overview blade saying that a SAS URL has been generated. Click on this banner and cancel the export. Ignore this step if you don't see the banner.
- As soon as the SAS URL is revoked, go to Configuration blade of the Managed Disk and increase the size so that Azure Site Recovery supports the observed churn rate on source disk - If the observed churn is temporary, wait for a few hours for the pending data upload to catch up and to create recovery points.-- If the disk contains non-critical data like temporary logs, test data etc., consider moving this data elsewhere or completely exclude this disk from replication
+- If the disk contains noncritical data like temporary logs, test data etc., consider moving this data elsewhere or completely exclude this disk from replication
- If the problem continues to persist, use the Site Recovery [deployment planner](site-recovery-deployment-planner.md#overview) to help plan replication. ### Source machines with no heartbeat [error 78174]
-This happens when Azure Site Recovery Mobility agent on the Source Machine is not communicating with the Configuration Server (CS).
+This happens when the Azure Site Recovery Mobility agent on the Source Machine isn't communicating with the Configuration Server (CS).
To resolve the issue, use the following steps to verify the network connectivity from the source VM to the Config Server:
To resolve the issue, use the following steps to verify the network connectivity
*C:\Program Files (X86)\Microsoft Azure Site Recovery\agent\svagents\*.log* ### Process server with no heartbeat [error 806]
-In case there is no heartbeat from the Process Server (PS), check that:
+
+If there's no heartbeat from the Process Server (PS), check that:
+ 1. PS VM is up and running 2. Check following logs on the PS for error details:
In case there is no heartbeat from the Process Server (PS), check that:
### Master target server with no heartbeat [error 78022]
-This happens when Azure Site Recovery Mobility agent on the Master Target is not communicating with the Configuration Server.
+This happens when Azure Site Recovery Mobility agent on the Master Target isn't communicating with the Configuration Server.
To resolve the issue, use the following steps to verify the service status: 1. Verify that the Master Target VM is running. 2. Sign in to the Master Target VM using an account that has administrator privileges.
- - Verify that the svagents service is running. If it is running, restart the service
+ - Verify that the svagents service is running. If it's running, restart the service
- Check the logs at the location for error details: *C:\Program Files (X86)\Microsoft Azure Site Recovery\agent\svagents\*.log*
To resolve the issue, use the following steps to verify the service status:
exit ```
+### Protection couldn't be successfully enabled for the virtual machine [error 78253]
+
+This error may occur if a replication policy has not been associated with the configuration server properly. It could also occur if the policy associated with the configuration server isn't valid.
+
+To confirm the cause of this error, navigate to the Recovery Services vault, select **Site Recovery infrastructure** under **Manage**, and then view the replication policies for VMware and physical machines to check the status of the configured policies.
+
+To resolve the issue, you can associate the policy with the configuration server in use or create a new replication policy and associate it. If the policy is invalid, you can disassociate and delete it.
+ ## Error ID 78144 - No app-consistent recovery point available for the VM in the last 'XXX' minutes
-Enhancements have been made in mobility agent [9.23](vmware-physical-mobility-service-overview.md#mobility-service-agent-version-923-and-higher) & [9.27](site-recovery-whats-new.md#update-rollup-39) versions to handle VSS installation failure behaviors. Ensure that you are on the latest versions for best guidance on troubleshooting VSS failures.
+Enhancements have been made in mobility agent [9.23](vmware-physical-mobility-service-overview.md#mobility-service-agent-version-923-and-higher) & [9.27](site-recovery-whats-new.md#update-rollup-39) versions to handle VSS installation failure behaviors. Ensure that you're on the latest versions for best guidance on troubleshooting VSS failures.
-Some of the most common issues are listed below
+The following is a list of the most common issues:
#### Cause 1: Known issue in SQL server 2008/2008 R2
-**How to fix** : There is a known issue with SQL server 2008/2008 R2. Please refer this KB article [Azure Site Recovery Agent or other non-component VSS backup fails for a server hosting SQL Server 2008 R2](https://support.microsoft.com/help/4504103/non-component-vss-backup-fails-for-server-hosting-sql-server-2008-r2)
+
+**How to fix**: There's a known issue with SQL Server 2008/2008 R2. Refer to this KB article: [Azure Site Recovery Agent or other noncomponent VSS backup fails for a server hosting SQL Server 2008 R2](https://support.microsoft.com/help/4504103/non-component-vss-backup-fails-for-server-hosting-sql-server-2008-r2)
#### Cause 2: Azure Site Recovery jobs fail on servers hosting any version of SQL Server instances with AUTO_CLOSE DBs
-**How to fix** : Refer Kb [article](https://support.microsoft.com/help/4504104/non-component-vss-backups-such-as-azure-site-recovery-jobs-fail-on-ser)
+**How to fix**: Refer to the KB [article](https://support.microsoft.com/help/4504104/non-component-vss-backups-such-as-azure-site-recovery-jobs-fail-on-ser)
#### Cause 3: Known issue in SQL Server 2016 and 2017
-**How to fix** : Refer Kb [article](https://support.microsoft.com/help/4493364/fix-error-occurs-when-you-back-up-a-virtual-machine-with-non-component)
+
+**How to fix**: Refer to the KB [article](https://support.microsoft.com/help/4493364/fix-error-occurs-when-you-back-up-a-virtual-machine-with-non-component)
#### Cause 4: App-Consistency not enabled on Linux servers
-**How to fix** : Azure Site Recovery for Linux Operation System supports application custom scripts for app-consistency. The custom script with pre and post options will be used by the Azure Site Recovery Mobility Agent for app-consistency. [Here](./site-recovery-faq.yml) are the steps to enable it.
+
+**How to fix**: Azure Site Recovery for Linux operating systems supports application custom scripts for app-consistency. The custom script with pre and post options is used by the Azure Site Recovery Mobility Agent for app-consistency. [Here](./site-recovery-faq.yml) are the steps to enable it.
### More causes due to VSS related issues:
To troubleshoot further, Check the files on the source machine to get the exact
*C:\Program Files (x86)\Microsoft Azure Site Recovery\agent\Application Data\ApplicationPolicyLogs\vacp.log*
-How to locate the errors in the file?
+**How to locate the errors in the file?**
Search for the string "vacpError" by opening the vacp.log file in an editor `Ex: `**`vacpError`**`:220#Following disks are in FilteringStopped state [\\.\PHYSICALDRIVE1=5, ]#220|^|224#FAILED: CheckWriterStatus().#2147754994|^|226#FAILED to revoke tags.FAILED: CheckWriterStatus().#2147754994|^|`
-In the above example **2147754994** is the error code that tells you about the failure as shown below
+In this example, **2147754994** is the error code that indicates the failure, as follows:
#### VSS writer is not installed - Error 2147221164
-*How to fix*: To generate application consistency tag, Azure Site Recovery uses Microsoft Volume Shadow copy Service (VSS). It installs a VSS Provider for its operation to take app consistency snapshots. This VSS Provider is installed as a service. In case the VSS Provider service is not installed, the application consistency snapshot creation fails with the error ID 0x80040154 "Class not registered". </br>
+**How to fix**: To generate the application consistency tag, Azure Site Recovery uses the Microsoft Volume Shadow Copy Service (VSS). It installs a VSS Provider for its operation to take app consistency snapshots. This VSS Provider is installed as a service. If the VSS Provider service isn't installed, the application consistency snapshot creation fails with the error ID 0x80040154 "Class not registered". </br>
Refer [article for VSS writer installation troubleshooting](./vmware-azure-troubleshoot-push-install.md#vss-installation-failures) #### VSS writer is disabled - Error 2147943458
-**How to fix**: To generate application consistency tag, Azure Site Recovery uses Microsoft Volume Shadow copy Service (VSS). It installs a VSS Provider for its operation to take app consistency snapshots. This VSS Provider is installed as a service. In case the VSS Provider service is disabled, the application consistency snapshot creation fails with the error ID "The specified service is disabled and cannot be started(0x80070422)". </br>
+**How to fix**: To generate the application consistency tag, Azure Site Recovery uses the Microsoft Volume Shadow Copy Service (VSS). It installs a VSS Provider for its operation to take app consistency snapshots. This VSS Provider is installed as a service. If the VSS Provider service is disabled, the application consistency snapshot creation fails with the error ID "The specified service is disabled and can't be started (0x80070422)". </br>
- If VSS is disabled, - Verify that the startup type of the VSS Provider service is set to **Automatic**.
Verify that the startup type of the VSS Provider service is set to **Automatic**
This error occurs when trying to enable replication and the application folders don't have enough permissions.
-**How to fix**: To resolve this issue, make sure the IUSR user has owner role for all the below mentioned folders -
+**How to fix**: To resolve this issue, make sure the IUSR user has the owner role for all the following folders:
- *C:\ProgramData\Microsoft Azure Site Recovery\private* - The installation directory. For example, if the installation directory is the F drive, then provide the correct permissions to -
This error occurs when trying to enable replication and the application folders
- *C:\thirdparty\rrdtool-1.2.15-win32-perl58\rrdtool\Release\** ## Troubleshoot and handle time changes on replicated servers
-This error occurs when the source machine's time moves forward and then moves back in short time, to correct the change. You may not notice the change as the time is corrected very quickly.
+This error occurs when the source machine's time moves forward and then moves back within a short time to correct the change. You may not notice the change because the time is corrected quickly.
**How to fix**:
-To resolve this issue, wait till system time crosses the skewed future time. Another option is to disable and enable replication once again, which is only feasible for forward replication (data replicated from on-premises to Azure) and is not applicable for reverse replication (data replicated from Azure to on-premises).
+To resolve this issue, wait until the system time crosses the skewed future time. Another option is to disable and enable replication once again, which is only feasible for forward replication (data replicated from on-premises to Azure) and isn't applicable for reverse replication (data replicated from Azure to on-premises).
## Next steps
spring-apps How To Bind Cosmos https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-bind-cosmos.md
Title: Bind an Azure Cosmos DB to your application in Azure Spring Apps
-description: Learn how to bind Azure Cosmos DB to your application in Azure Spring Apps
+ Title: Connect an Azure Cosmos DB to your application in Azure Spring Apps
+description: Learn how to connect Azure Cosmos DB to your application in Azure Spring Apps
-# Bind an Azure Cosmos DB database to your application in Azure Spring Apps
+# Connect an Azure Cosmos DB database to your application in Azure Spring Apps
> [!NOTE] > Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams.
**This article applies to:** ✔️ Basic/Standard ✔️ Enterprise
-Instead of manually configuring your Spring Boot applications, you can automatically bind select Azure services to your applications by using Azure Spring Apps. This article demonstrates how to bind your application to an Azure Cosmos DB database.
+Instead of manually configuring your Spring Boot applications, you can automatically connect selected Azure services to your applications by using Azure Spring Apps. This article demonstrates how to connect your application to an Azure Cosmos DB database.
## Prerequisites
-* A deployed Azure Spring Apps instance.
-* An Azure Cosmos DB account and a database.
-* The Azure Spring Apps extension for the Azure CLI.
-
-If you don't have a deployed Azure Spring Apps instance, follow the steps in the [Quickstart: Deploy your first application to Azure Spring Apps](./quickstart.md).
+* An application deployed to Azure Spring Apps. For more information, see [Quickstart: Deploy your first application to Azure Spring Apps](./quickstart.md).
+* An Azure Cosmos DB database instance.
+* [Azure CLI](/cli/azure/install-azure-cli) version 2.45.0 or higher.
## Prepare your Java project
-1. Add one of the following dependencies to your application's pom.xml pom.xml file. Choose the dependency that is appropriate for your API type.
+1. Add one of the following dependencies to your application's *pom.xml* file. Choose the dependency that is appropriate for your API type.
* API type: NoSQL
If you don't have a deployed Azure Spring Apps instance, follow the steps in the
<dependency> <groupId>com.azure.spring</groupId> <artifactId>spring-cloud-azure-starter-data-cosmos</artifactId>
- <version>4.7.0</version>
</dependency> ```
If you don't have a deployed Azure Spring Apps instance, follow the steps in the
</dependency> ```
- * API type: Azure Table
-
- ```xml
- <dependency>
- <groupId>com.azure.spring</groupId>
- <artifactId>spring-cloud-azure-starter-storage-blob</artifactId>
- <version>4.7.0</version>
- </dependency>
- ```
- 1. Update the current app by running `az spring app deploy`, or create a new deployment for this change by running `az spring app deployment create`.
-## Bind your app to the Azure Cosmos DB
+## Connect your app to the Azure Cosmos DB
### [Service Connector](#tab/Service-Connector) #### Use the Azure CLI
-Use the following command to configure your Spring app to connect to a Cosmos SQL Database with a system-assigned managed identity:
+Use the Azure CLI to configure your Spring app to connect to a Cosmos SQL Database by using the `az spring connection create` command, as shown in the following example:
> [!NOTE] > Updating Azure Cosmos DB database settings can take a few minutes to complete.
az spring connection create cosmos-sql \
--target-resource-group $COSMOSDB_RESOURCE_GROUP \ --account $COSMOSDB_ACCOUNT_NAME \ --database $DATABASE_NAME \
- --system-assigned-identity
+ --secret
``` > [!NOTE]
Alternately, you can use the Azure portal to configure this connection by comple
1. Once the connection between your Spring app and your Cosmos DB database has been generated, you can see it on the Service Connector page and select the unfold button to view the configured connection variables.
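You can also inspect the generated connection from the CLI; the following is a sketch with placeholder names:

```azurecli
# List the Service Connector connections configured for the app.
az spring connection list \
    --resource-group <resource-group-name> \
    --service <Azure-Spring-Apps-instance-name> \
    --app <app-name> \
    --output table
```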
-### [Service Binding](#tab/Service-Binding)
-
-> [!NOTE]
-> We recommend using Service Connector instead of Service Binding to connect your app to your database. Service Binding is going to be deprecated in favor of Service Connector. For instructions, see the Service Connector tab.
-
-Azure Cosmos DB has five different API types that support binding. The following procedure shows how to use them:
-
-1. Create an Azure Cosmos DB database. Refer to the quickstart on [creating a database](../cosmos-db/create-cosmosdb-resources-portal.md) for help.
-
-1. Record the name of your database. For this procedure, the database name is **testdb**.
-
-1. Go to your Azure Spring Apps service page in the Azure portal. Go to **Application Dashboard** and select the application to bind to Azure Cosmos DB. This application is the same one you updated or deployed in the previous step.
-
-1. Select **Service binding**, and select **Create service binding**. To fill out the form, select:
-
- * The **Binding type** value **Azure Cosmos DB**.
- * The API type.
- * Your database name.
- * The Azure Cosmos DB account.
-
- > [!NOTE]
- > If you are using Cassandra, use a key space for the database name.
-
-1. Restart the application by selecting **Restart** on the application page.
-
-1. To ensure the service is bound correctly, select the binding name and verify its details. The `property` field should be similar to this example:
-
- ```properties
- spring.cloud.azure.cosmos.endpoint=https://<some account>.documents.azure.com:443
- spring.cloud.azure.cosmos.key=abc******
- spring.cloud.azure.cosmos.database=testdb
- ```
- ### [Terraform](#tab/Terraform) The following Terraform script shows how to set up an app deployed to Azure Spring Apps with an Azure Cosmos DB account.
provider "azurerm" {
variable "application_name" { type = string description = "The name of your application"
- default = "demo-abc"
+ default = "demo-cosmosdb"
} resource "azurerm_resource_group" "example" {
resource "azurerm_spring_cloud_java_deployment" "example" {
cpu = "2" memory = "4Gi" }
- instance_count = 2
+ instance_count = 1
jvm_options = "-XX:+PrintGC" runtime_version = "Java_11"
resource "azurerm_spring_cloud_active_deployment" "example" {
## Next steps
-In this article, you learned how to bind your application in Azure Spring Apps to an Azure Cosmos DB database. To learn more about binding services to your application, see [Bind to an Azure Cache for Redis cache](./how-to-bind-redis.md).
+In this article, you learned how to connect your application in Azure Spring Apps to an Azure Cosmos DB database. To learn more about connecting services to your application, see [Connect to an Azure Cache for Redis cache](./how-to-bind-redis.md).
spring-apps How To Bind Mysql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-bind-mysql.md
Title: How to bind an Azure Database for MySQL instance to your application in Azure Spring Apps
-description: Learn how to bind an Azure Database for MySQL instance to your application in Azure Spring Apps
+ Title: How to connect an Azure Database for MySQL instance to your application in Azure Spring Apps
+description: Learn how to connect an Azure Database for MySQL instance to your application in Azure Spring Apps
-# Bind an Azure Database for MySQL instance to your application in Azure Spring Apps
+# Connect an Azure Database for MySQL instance to your application in Azure Spring Apps
> [!NOTE] > Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams. **This article applies to:** ✔️ Java ❌ C#
-**This article applies to:** ✔️ Basic/Standard tier ✔️ Enterprise tier
+**This article applies to:** ✔️ Basic/Standard ✔️ Enterprise
-With Azure Spring Apps, you can bind select Azure services to your applications automatically, instead of having to configure your Spring Boot application manually. This article shows you how to bind your application to your Azure Database for MySQL instance.
+With Azure Spring Apps, you can connect selected Azure services to your applications automatically, instead of having to configure your Spring Boot application manually. This article shows you how to connect your application to your Azure Database for MySQL instance.
## Prerequisites
With Azure Spring Apps, you can bind select Azure services to your applications
1. Update the current app by running `az spring app deploy`, or create a new deployment for this change by running `az spring app deployment create`.
-## Bind your app to the Azure Database for MySQL instance
+## Connect your app to the Azure Database for MySQL instance
### [Service Connector](#tab/Service-Connector)
Follow these steps to configure your Spring app to connect to an Azure Database
```azurecli az extension add --name serviceconnector-passwordless --upgrade ```
-
+ 1. Run the `az spring connection create` command, as shown in the following example. ```azurecli
Follow these steps to configure your Spring app to connect to an Azure Database
--target-resource-group $MYSQL_RESOURCE_GROUP \ --server $MYSQL_SERVER_NAME \ --database $DATABASE_NAME \
- --system-assigned-identity
- ```
-
-### [Service Binding](#tab/Service-Binding)
-
-> [!NOTE]
-> We recommend using Service Connector instead of Service Binding to connect your app to your database. Service Binding is going to be deprecated in favor of Service Connector. For instructions, see the Service Connector tab.
-
-1. Note the admin username and password of your Azure Database for MySQL account.
-
-1. Connect to the server, create a database named **testdb** from a MySQL client, and then create a new non-admin account.
-
-1. In the Azure portal, on your **Azure Spring Apps** service page, look for the **Application Dashboard**, and then select the application to bind to your Azure Database for MySQL instance. This is the same application that you updated or deployed in the previous step.
-
-1. Select **Service binding**, and then select the **Create service binding** button.
-
-1. Fill out the form, selecting **Azure MySQL** as the **Binding type**, using the same database name you used earlier, and using the same username and password you noted in the first step.
-
-1. Restart the app, and this binding should now work.
-
-1. To ensure that the service binding is correct, select the binding name, and then verify its detail. The `property` field should look like this:
-
- ```properties
- spring.datasource.url=jdbc:mysql://some-server.mysql.database.azure.com:3306/testdb?useSSL=true&requireSSL=false&useLegacyDatetimeCode=false&serverTimezone=UTC
- spring.datasource.username=admin@some-server
- spring.datasource.password=abc******
- spring.jpa.properties.hibernate.dialect=org.hibernate.dialect.MySQL5InnoDBDialect
+ --system-identity mysql-identity-id=$AZ_IDENTITY_RESOURCE_ID
``` ### [Terraform](#tab/Terraform)
resource "azurerm_spring_cloud_active_deployment" "example" {
## Next steps
-In this article, you learned how to bind an application in Azure Spring Apps to an Azure Database for MySQL instance. To learn more about binding services to an application, see [Bind an Azure Cosmos DB database to an application in Azure Spring Apps](./how-to-bind-cosmos.md).
+In this article, you learned how to connect an application in Azure Spring Apps to an Azure Database for MySQL instance. To learn more about connecting services to an application, see [Connect an Azure Cosmos DB database to an application in Azure Spring Apps](./how-to-bind-cosmos.md).
spring-apps How To Bind Redis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-bind-redis.md
Title: Bind Azure Cache for Redis to your application in Azure Spring Apps
-description: Learn how to bind Azure Cache for Redis to your application in Azure Spring Apps
+ Title: Connect Azure Cache for Redis to your application in Azure Spring Apps
+description: Learn how to connect Azure Cache for Redis to your application in Azure Spring Apps
-# Bind Azure Cache for Redis to your application in Azure Spring Apps
+# Connect Azure Cache for Redis to your application in Azure Spring Apps
> [!NOTE] > Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams. **This article applies to:** ✔️ Java ❌ C#
-**This article applies to:** ✔️ Basic/Standard tier ✔️ Enterprise tier
+**This article applies to:** ✔️ Basic/Standard ✔️ Enterprise
-Instead of manually configuring your Spring Boot applications, you can automatically bind select Azure services to your applications by using Azure Spring Apps. This article shows how to bind your application to Azure Cache for Redis.
+Instead of manually configuring your Spring Boot applications, you can automatically connect selected Azure services to your applications by using Azure Spring Apps. This article shows how to connect your application to Azure Cache for Redis.
## Prerequisites
If you don't have a deployed Azure Spring Apps instance, follow the steps in the
1. Update the current deployment using `az spring app update` or create a new deployment using `az spring app deployment create`.
-## Bind your app to the Azure Cache for Redis
+## Connect your app to the Azure Cache for Redis
### [Service Connector](#tab/Service-Connector)
If you don't have a deployed Azure Spring Apps instance, follow the steps in the
1. On the **Basics** tab, for service type, select Cache for Redis. Choose a subscription and a Redis cache server. Fill in the Redis database name ("0" in this example) and under client type, select Java. Select **Next: Authentication**.
- 1. On the **Authentication** tab, choose **Connection string**. Service Connector will automatically retrieve the access key from your Redis database account. Select **Next: Networking**.
+ 1. On the **Authentication** tab, choose **Connection string**. Service Connector automatically retrieves the access key from your Redis database account. Select **Next: Networking**.
1. On the **Networking** tab, select **Configure firewall rules to enable access to target service**, then select **Review + Create**.
If you don't have a deployed Azure Spring Apps instance, follow the steps in the
1. Once the connection between your Spring app and your Redis database has been generated, you can see it on the Service Connector page and select the unfold button to view the configured connection variables.
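As a CLI alternative to the portal steps, the following is a sketch of creating the same connection-string based connection; the resource names are placeholders:

```azurecli
az spring connection create redis \
    --resource-group <resource-group-name> \
    --service <Azure-Spring-Apps-instance-name> \
    --app <app-name> \
    --target-resource-group <redis-resource-group-name> \
    --server <redis-cache-name> \
    --database 0 \
    --client-type java \
    --secret
```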
-### [Service Binding](#tab/Service-Binding)
-
-> [!NOTE]
-> We recommend using Service Connector instead of Service Binding to connect your app to your database. Service Binding is going to be deprecated in favor of Service Connector. For instructions, see the Service Connector tab.
-
-1. Go to your Azure Spring Apps service page in the Azure portal. Go to **Application Dashboard** and select the application to bind to Azure Cache for Redis. This application is the same one you updated or deployed in the previous step.
-
-1. Select **Service binding** and select **Create service binding**. Fill out the form, being sure to select the **Binding type** value **Azure Cache for Redis**, your Azure Cache for Redis server, and the **Primary** key option.
-
-1. Restart the app. The binding should now work.
-
-1. To ensure the service binding is correct, select the binding name and verify its details. The `property` field should look like this:
-
- ```properties
- spring.redis.host=some-redis.redis.cache.windows.net
- spring.redis.port=6380
- spring.redis.password=abc******
- spring.redis.ssl=true
- ```
- ### [Terraform](#tab/Terraform) The following Terraform script shows how to set up an Azure Spring Apps app with Azure Cache for Redis.
resource "azurerm_spring_cloud_active_deployment" "example" {
## Next steps
-In this article, you learned how to bind your application in Azure Spring Apps to Azure Cache for Redis. To learn more about binding services to your application, see [Bind to an Azure Database for MySQL instance](./how-to-bind-mysql.md).
+In this article, you learned how to connect your application in Azure Spring Apps to Azure Cache for Redis. To learn more about connecting services to your application, see [Connect to an Azure Database for MySQL instance](./how-to-bind-mysql.md).
spring-apps Tutorial Circuit Breaker https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/tutorial-circuit-breaker.md
az spring app create -n user-service --assign-endpoint
az spring app create -n recommendation-service az spring app create -n hystrix-turbine --assign-endpoint
-az spring app deploy -n user-service --jar-path user-service/target/user-service.jar
-az spring app deploy -n recommendation-service --jar-path recommendation-service/target/recommendation-service.jar
-az spring app deploy -n hystrix-turbine --jar-path hystrix-turbine/target/hystrix-turbine.jar
+az spring app deploy -n user-service --artifact-path user-service/target/user-service.jar
+az spring app deploy -n recommendation-service --artifact-path recommendation-service/target/recommendation-service.jar
+az spring app deploy -n hystrix-turbine --artifact-path hystrix-turbine/target/hystrix-turbine.jar
``` ## Verify your apps
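Before you open the app endpoints, you can confirm that all three apps deployed successfully; the following is a sketch with placeholder resource group and service names:

```azurecli
az spring app list \
    --resource-group <resource-group-name> \
    --service <Azure-Spring-Apps-instance-name> \
    --output table
```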
spring-apps Tutorial Managed Identities Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/tutorial-managed-identities-functions.md
Last updated 07/10/2020
This article shows you how to create a managed identity for an Azure Spring Apps app and use it to invoke HTTP triggered Functions.
-Both Azure Functions and App Services have built in support for Azure Active Directory (Azure AD) authentication. By using this built-in authentication capability along with Managed Identities for Azure Spring Apps, we can invoke RESTful services using modern OAuth semantics. This method doesn't require storing secrets in code and provides more granular controls for controlling access to external resources.
+Both Azure Functions and App Services have built-in support for Azure Active Directory (Azure AD) authentication. By using this built-in authentication capability along with Managed Identities for Azure Spring Apps, you can invoke RESTful services using modern OAuth semantics. This method doesn't require storing secrets in code and provides more granular control over access to external resources.
## Prerequisites
-* [Sign up for an Azure subscription](https://azure.microsoft.com/free/)
-* [Install the Azure CLI version 2.45.0 or higher](/cli/azure/install-azure-cli)
-* [Install Maven 3.0 or higher](https://maven.apache.org/download.cgi)
-* [Install the Azure Functions Core Tools version 3.0.2009 or higher](../azure-functions/functions-run-local.md#install-the-azure-functions-core-tools)
+- An Azure subscription. If you don't have a subscription, create a [free account](https://azure.microsoft.com/free/) before you begin.
+- [Azure CLI](/cli/azure/install-azure-cli) version 2.45.0 or higher.
+- [Apache Maven](https://maven.apache.org/download.cgi) version 3.0 or higher.
+- [Install the Azure Functions Core Tools](../azure-functions/functions-run-local.md#install-the-azure-functions-core-tools) version 3.0.2009 or higher.
## Create a resource group
-A resource group is a logical container into which Azure resources are deployed and managed. Create a resource group to contain both the Function app and Spring Cloud using the command [az group create](/cli/azure/group#az-group-create):
+A resource group is a logical container into which Azure resources are deployed and managed. Use the following command to create a resource group to contain a Function app. For more information, see the [az group create](/cli/azure/group#az-group-create) command.
```azurecli
-az group create --name myResourceGroup --location eastus
+az group create --name <resource-group-name> --location <location>
```
-## Create a Function App
+## Create a Function app
-To create a Function app you must first create a backing storage account, use the command [az storage account create](/cli/azure/storage/account#az-storage-account-create):
+To create a Function app, you must first create a backing storage account. You can use the [az storage account create](/cli/azure/storage/account#az-storage-account-create) command.
> [!IMPORTANT]
-> Each Function app and Storage Account must have a unique name. Replace *\<your-functionapp-name>* with the name of your Function app and *\<your-storageaccount-name>* with the name of your Storage Account in the following examples.
+> Each Function app and storage account must have a unique name.
+
+Use the following command to create the storage account. Replace *\<function-app-name>* with the name of your Function app and *\<storage-account-name>* with the name of your storage account.
```azurecli az storage account create \
- --resource-group myResourceGroup \
- --name <your-storageaccount-name> \
- --location eastus \
+ --resource-group <resource-group-name> \
+ --name <storage-account-name> \
+ --location <location> \
--sku Standard_LRS ```
-After the Storage Account is created, you can create the Function app.
+After the storage account is created, use the following command to create the Function app.
```azurecli az functionapp create \
- --resource-group myResourceGroup \
- --name <your-functionapp-name> \
- --consumption-plan-location eastus \
+ --resource-group <resource-group-name> \
+ --name <function-app-name> \
+ --consumption-plan-location <location> \
--os-type windows \ --runtime node \
- --storage-account <your-storageaccount-name> \
+ --storage-account <storage-account-name> \
--functions-version 3 ```
-Make a note of the returned `hostNames` value, which is in the format *https://\<your-functionapp-name>.azurewebsites.net*. You use this value in a following step.
+Make a note of the returned `hostNames` value, which is in the format *https://\<function-app-name>.azurewebsites.net*. You use this value as the Function app's root URL when you test the Function app.
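If you need to look up this value again later, the following query is one way to retrieve it; `defaultHostName` is the single host name behind the `hostNames` output:

```azurecli
az functionapp show \
    --resource-group <resource-group-name> \
    --name <function-app-name> \
    --query defaultHostName \
    --output tsv
```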
+
+## Enable Azure Active Directory authentication
-## Enable Azure Active Directory Authentication
+Use the following steps to enable Azure Active Directory authentication to access your Function app.
-Access the newly created Function app from the [Azure portal](https://portal.azure.com) and select **Authentication / Authorization** from the settings menu. Enable App Service Authentication and set the **Action to take when request is not authenticated** to **Log in with Azure Active Directory**. This setting ensures that all unauthenticated requests are denied (401 response).
+1. In the Azure portal, navigate to your resource group and then open the Function app you created.
+1. In the navigation pane, select **Authentication** and then select **Add identity provider** on the main pane.
+1. On the **Add an identity provider** page, select **Microsoft** from the **Identity provider** dropdown menu.
+ :::image type="content" source="media/spring-cloud-tutorial-managed-identities-functions/add-identity-provider.png" alt-text="Screenshot of the Azure portal showing the Add an identity provider page with Microsoft highlighted in the identity provider dropdown menu." lightbox="media/spring-cloud-tutorial-managed-identities-functions/add-identity-provider.png":::
-Under **Authentication Providers**, select **Azure Active Directory** to configure the application registration. Selecting **Express Management Mode** automatically creates an application registration in your Azure AD tenant with the correct configuration.
+1. Select **Add**.
+1. For the **Basics** settings on the **Add an identity provider** page, set **Supported account types** to **Any Azure AD directory - Multi-tenant**.
+1. Set **Unauthenticated requests** to **HTTP 401 Unauthorized: recommended for APIs**. This setting ensures that all unauthenticated requests are denied (401 response).
+ :::image type="content" source="media/spring-cloud-tutorial-managed-identities-functions/identity-provider-settings.png" alt-text="Screenshot of the Azure portal showing the settings page for adding an identity provider. This page highlights the 'supported account types' setting set to the 'Any Azure AD directory Multi tenant' option and also highlights the 'Unauthenticated requests' setting set to the 'HTTP 401 Unauthorized recommended for APIs' option." lightbox="media/spring-cloud-tutorial-managed-identities-functions/identity-provider-settings.png":::
-After you save the settings, the function app restarts and all subsequent requests are prompted to log in via Azure AD. You can test that unauthenticated requests are now being rejected by navigating to the function apps root URL (returned in the `hostNames` output in a previous step). You should be redirected to your organizations Azure AD login screen.
+1. Select **Add**.
-## Create an HTTP Triggered Function
+After you add the settings, the Function app restarts and all subsequent requests are prompted to sign in through Azure AD. You can test that unauthenticated requests are rejected by browsing to the Function app's root URL (returned in the `hostNames` output of the `az functionapp create` command). You should be redirected to your organization's Azure Active Directory sign-in screen.
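If you prefer to script that check, the following minimal Python sketch sends an unauthenticated request to a placeholder root URL and prints the result. Depending on the **Unauthenticated requests** setting, you should see a 401 response or a redirect to the Azure AD sign-in page.

```python
# Minimal sketch: verify that unauthenticated requests are rejected.
# The host name below is a placeholder; substitute your Function app's hostNames value.
import requests

response = requests.get(
    "https://<function-app-name>.azurewebsites.net",
    allow_redirects=False,  # don't follow a redirect to the sign-in page
)

# Expect 401 Unauthorized, or a 302 redirect whose Location points to login.microsoftonline.com.
print(response.status_code, response.headers.get("Location"))
```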
-In an empty local directory, create a new function app and add an HTTP triggered function.
+## Create an HTTP triggered function
+
+In an empty local directory, use the following commands to create a new function app and add an HTTP triggered function.
```console func init --worker-runtime node func new --template HttpTrigger --name HttpTrigger ```
-By default, Functions use key-based authentication to secure HTTP endpoints. Since we're enabling Azure AD authentication to secure access to the Functions, we want to [set the function auth level to anonymous](../azure-functions/functions-bindings-http-webhook-trigger.md#secure-an-http-endpoint-in-production) in the *function.json* file.
+By default, functions use key-based authentication to secure HTTP endpoints. Because Azure AD authentication is used to secure access to the functions, set the `authLevel` key to `anonymous` in the *function.json* file, as shown in the following example:
```json {
By default, Functions use key-based authentication to secure HTTP endpoints. Sin
} ```
-You can now publish the app to the [Function app](#create-a-function-app) instance created in the previous step.
+For more information, see the [Secure an HTTP endpoint in production](../azure-functions/functions-bindings-http-webhook-trigger.md#secure-an-http-endpoint-in-production) section of [Azure Functions HTTP trigger](../azure-functions/functions-bindings-http-webhook-trigger.md).
+
+Use the following command to publish the app to the instance created in the previous step:
```console
-func azure functionapp publish <your-functionapp-name>
+func azure functionapp publish <function-app-name>
```
-The output from the publish command should list the URL to your newly created function.
+The output from the publish command should list the URL to your newly created function, as shown in the following output:
```output Deployment completed successfully. Syncing triggers... Functions in <your-functionapp-name>: HttpTrigger - [httpTrigger]
- Invoke url: https://<your-functionapp-name>.azurewebsites.net/api/httptrigger
+ Invoke url: https://<function-app-name>.azurewebsites.net/api/httptrigger
```
-## Create Azure Spring Apps service and app
+## Create an Azure Spring Apps service instance and application
-After installing the spring extension, create an Azure Spring Apps instance with the Azure CLI command `az spring create`.
+Use the following commands to add the spring extension and to create a new instance of Azure Spring Apps.
```azurecli az extension add --upgrade --name spring az spring create \
- --resource-group myResourceGroup \
- --name mymsispringcloud \
- --location eastus
+ --resource-group <resource-group-name> \
+ --name <Azure-Spring-Apps-instance-name> \
+ --location <location>
```
-The following example creates an app named `msiapp` with a system-assigned managed identity, as requested by the `--assign-identity` parameter.
+Use the following command to create an application named `msiapp` with a system-assigned managed identity, as requested by the `--assign-identity` parameter.
```azurecli az spring app create \
- --resource-group "myResourceGroup" \
- --service "mymsispringcloud" \
+ --resource-group <resource-group-name> \
+ --service <Azure-Spring-Apps-instance-name> \
--name "msiapp" \ --assign-endpoint true \ --assign-identity
az spring app create \
## Build sample Spring Boot app to invoke the Function
-This sample invokes the HTTP triggered function by first requesting an access token from the [MSI endpoint](../active-directory/managed-identities-azure-resources/how-to-use-vm-token.md#get-a-token-using-http) and using that token to authenticate the Function http request.
+This sample invokes the HTTP triggered function by first requesting an access token from the MSI endpoint and using that token to authenticate the function HTTP request. For more information, see the [Get a token using HTTP](../active-directory/managed-identities-azure-resources/how-to-use-vm-token.md#get-a-token-using-http) section of [How to use managed identities for Azure resources on an Azure VM to acquire an access token](../active-directory/managed-identities-azure-resources/how-to-use-vm-token.md).
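The sample itself implements this flow in Java with Spring Boot. As a rough, hedged illustration only, the following Python sketch shows the token-acquisition pattern that the linked article describes for the IMDS-style managed identity endpoint. The resource value and function URL are placeholders, and the exact endpoint exposed to your app can differ by hosting platform.

```python
# Hedged sketch of the managed identity token flow (the real sample is Java/Spring).
import requests

# IMDS-style managed identity endpoint described in the linked article.
token_response = requests.get(
    "http://169.254.169.254/metadata/identity/oauth2/token",
    params={
        "api-version": "2018-02-01",
        # Placeholder: the application (client) ID configured for the Function app's identity provider.
        "resource": "<function-app-application-id>",
    },
    headers={"Metadata": "true"},
)
access_token = token_response.json()["access_token"]

# Call the HTTP-triggered function, passing the token as a bearer credential.
function_response = requests.get(
    "https://<function-app-name>.azurewebsites.net/api/httptrigger",
    params={"name": "springcloud"},
    headers={"Authorization": f"Bearer {access_token}"},
)
print(function_response.text)
```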
-1. Clone the sample project.
+1. Use the following command to clone the sample project.
```bash git clone https://github.com/Azure-Samples/Azure-Spring-Cloud-Samples.git ```
-1. Specify your function URI and the trigger name in your app properties.
+1. Use the following commands to open the app properties file, where you specify your function URI and the trigger name.
```bash cd Azure-Spring-Cloud-Samples/managed-identity-function vim src/main/resources/application.properties ```
- To use managed identity for Azure Spring Apps apps, add properties with the following content to *src/main/resources/application.properties*.
+1. To use managed identity for Azure Spring Apps apps, add the following properties with these values to *src/main/resources/application.properties*.
- ```properties
- azure.function.uri=https://<your-functionapp-name>.azurewebsites.net
+ ```text
+ azure.function.uri=https://<function-app-name>.azurewebsites.net
azure.function.triggerPath=httptrigger ```
-1. Package your sample app.
+1. Use the following command to package your sample app.
```bash mvn clean package ```
-1. Now deploy the app to Azure with the Azure CLI command `az spring app deploy`.
+1. Use the following command to deploy the app to Azure Spring Apps.
```azurecli az spring app deploy \
- --resource-group "myResourceGroup" \
- --service "mymsispringcloud" \
+ --resource-group <resource-group-name> \
+ --service <Azure-Spring-Apps-instance-name> \
--name "msiapp" \ --jar-path target/asc-managed-identity-function-sample-0.1.0.jar ```
-1. Access the public endpoint or test endpoint to test your app.
+1. Use the following command to access the public endpoint or test endpoint to test your app.
```bash
- curl https://mymsispringcloud-msiapp.azuremicroservices.io/func/springcloud
+ curl https://<Azure-Spring-Apps-instance-name>-msiapp.azuremicroservices.io/func/springcloud
```
- You see the following message returned in the response body.
+ The following message is returned in the response body.
```output Function Response: Hello, springcloud. This HTTP triggered function executed successfully.
This sample invokes the HTTP triggered function by first requesting an access to
## Next steps
-* [How to enable system-assigned managed identity for applications in Azure Spring Apps](./how-to-enable-system-assigned-managed-identity.md)
-* [Learn more about managed identities for Azure resources](../active-directory/managed-identities-azure-resources/overview.md)
-* [Configure client apps to access your App Service](../app-service/configure-authentication-provider-aad.md#configure-client-apps-to-access-your-app-service)
+- [How to enable system-assigned managed identity for applications in Azure Spring Apps](./how-to-enable-system-assigned-managed-identity.md)
+- [Learn more about managed identities for Azure resources](../active-directory/managed-identities-azure-resources/overview.md)
+- [Configure client apps to access your App Service](../app-service/configure-authentication-provider-aad.md#configure-client-apps-to-access-your-app-service)
storage Storage Blob Copy Async Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-copy-async-dotnet.md
The following methods wrap the [Copy Blob](/rest/api/storageservices/copy-blob)
The `StartCopyFromUri` and `StartCopyFromUriAsync` methods return a [CopyFromUriOperation](/dotnet/api/azure.storage.blobs.models.copyfromurioperation) object containing information about the copy operation. These methods are used when you want asynchronous scheduling for a copy operation.
-## Copy a blob within the same storage account
+## Copy a blob from a source within Azure
-If you're copying a blob within the same storage account, access to the source blob can be authorized via Azure Active Directory (Azure AD), a shared access signature (SAS), or an account key. The operation can complete synchronously if the copy occurs within the same storage account.
+If you're copying a blob within the same storage account, the operation can complete synchronously. Access to the source blob can be authorized via Azure Active Directory (Azure AD), a shared access signature (SAS), or an account key. For an alternative synchronous copy operation, see [Copy a blob from a source object URL with .NET](storage-blob-copy-url-dotnet.md).
-The following example shows a scenario for copying a source blob within the same storage account. This example also shows how to lease the source blob during the copy operation to prevent changes to the blob from a different client. The `Copy Blob` operation saves the `ETag` value of the source blob when the copy operation starts. If the `ETag` value is changed before the copy operation finishes, the operation fails.
+If the copy source is a blob in a different storage account, the operation can complete asynchronously. The source blob must either be public or authorized via SAS token. The SAS token needs to include the **Read ('r')** permission. To learn more about SAS tokens, see [Delegate access with shared access signatures](../common/storage-sas-overview.md).
-
-## Copy a blob from another storage account
-
-If the source is a blob in another storage account, the source blob must either be public or authorized via SAS token. The SAS token needs to include the **Read ('r')** permission. To learn more about SAS tokens, see [Delegate access with shared access signatures](../common/storage-sas-overview.md).
-
-The following example shows a scenario for copying a blob from another storage account. In this example, we create a source blob URI with an appended service SAS token by calling [GenerateSasUri](/dotnet/api/azure.storage.blobs.blobcontainerclient.generatesasuri) on the blob client. To use this method, the source blob client needs to be authorized via account key.
+The following example shows a scenario for copying a source blob from a different storage account with asynchronous scheduling. In this example, we create a source blob URL with an appended user delegation SAS token. The example shows how to generate the SAS token using the client library, but you can also provide your own. The example also shows how to lease the source blob during the copy operation to prevent changes to the blob from a different client. The `Copy Blob` operation saves the `ETag` value of the source blob when the copy operation starts. If the `ETag` value is changed before the copy operation finishes, the operation fails.
:::code language="csharp" source="~/azure-storage-snippets/blobs/howto/dotnet/BlobDevGuideBlobs/CopyBlob.cs" id="Snippet_CopyAcrossAccounts_CopyBlob":::
-If you already have a SAS token, you can construct the URI for the source blob as follows:
-
-```csharp
-// Append the SAS token to the URI - include ? before the SAS token
-var sourceBlobSASURI = new Uri(
- $"https://{srcAccountName}.blob.core.windows.net/{srcContainerName}/{srcBlobName}?{sasToken}");
-```
-
-You can also [create a user delegation SAS token with .NET](storage-blob-user-delegation-sas-create-dotnet.md). User delegation SAS tokens offer greater security, as they're signed with Azure AD credentials instead of an account key.
+> [!NOTE]
+> User delegation SAS tokens offer greater security, as they're signed with Azure AD credentials instead of an account key. To create a user delegation SAS token, the Azure AD security principal needs appropriate permissions. For authorization requirements, see [Get User Delegation Key](/rest/api/storageservices/get-user-delegation-key#authorization).
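As a hedged, non-.NET counterpart to the C# example above, the following Python sketch schedules an asynchronous copy from a SAS-authorized source blob in a different storage account. All account, container, and blob names are placeholders, and the lease step shown in the C# sample is omitted for brevity.

```python
# Hedged Python sketch: asynchronous cross-account copy using a SAS-appended source URL.
from azure.storage.blob import BlobServiceClient

# Placeholders: supply your own values; the SAS token must include Read ('r') permission.
source_blob_sas_url = (
    "https://<source-account>.blob.core.windows.net/<source-container>/<source-blob>?<sas-token>"
)

destination_service = BlobServiceClient(
    account_url="https://<destination-account>.blob.core.windows.net",
    credential="<destination-credential>",  # account key or a token credential
)
destination_blob = destination_service.get_blob_client("<destination-container>", "<destination-blob>")

# Schedules the copy; the service completes it asynchronously.
copy_properties = destination_blob.start_copy_from_url(source_blob_sas_url)
print(copy_properties["copy_status"])  # typically 'pending' until the service finishes the copy
```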
## Copy a blob from a source outside of Azure
storage Storage Blob Copy Async Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-copy-async-java.md
The following method wraps the [Copy Blob](/rest/api/storageservices/copy-blob)
The `beginCopy` method returns a [SyncPoller](/java/api/com.azure.core.util.polling.syncpoller) to poll the progress of the copy operation. The poll response type is [BlobCopyInfo](/java/api/com.azure.storage.blob.models.blobcopyinfo). The `beginCopy` method is used when you want asynchronous scheduling for a copy operation.
-## Copy a blob within the same storage account
+## Copy a blob from a source within Azure
-If you're copying a blob within the same storage account, access to the source blob can be authorized via Azure Active Directory (Azure AD), a shared access signature (SAS), or an account key. The operation can complete synchronously if the copy occurs within the same storage account.
+If you're copying a blob within the same storage account, the operation can complete synchronously. Access to the source blob can be authorized via Azure Active Directory (Azure AD), a shared access signature (SAS), or an account key. For an alternative synchronous copy operation, see [Copy a blob from a source object URL with Java](storage-blob-copy-url-java.md).
-The following example shows a scenario for copying a source blob within the same storage account. This example also shows how to lease the source blob during the copy operation to prevent changes to the blob from a different client. The `Copy Blob` operation saves the `ETag` value of the source blob when the copy operation starts. If the `ETag` value is changed before the copy operation finishes, the operation fails.
+If the copy source is a blob in a different storage account, the operation can complete asynchronously. The source blob must either be public or authorized via SAS token. The SAS token needs to include the **Read ('r')** permission. To learn more about SAS tokens, see [Delegate access with shared access signatures](../common/storage-sas-overview.md).
-
-## Copy a blob from another storage account
-
-If the source is a blob in another storage account, the source blob must either be public or authorized via SAS token. The SAS token needs to include the **Read ('r')** permission. To learn more about SAS tokens, see [Delegate access with shared access signatures](../common/storage-sas-overview.md).
-
-The following example shows a scenario for copying a blob from another storage account. In this example, we create a source blob URL with an appended user delegation SAS token. The example shows how to generate the SAS token using the client library, but you can also provide your own.
+The following example shows a scenario for copying a source blob from a different storage account with asynchronous scheduling. In this example, we create a source blob URL with an appended user delegation SAS token. The example shows how to generate the SAS token using the client library, but you can also provide your own. The example also shows how to lease the source blob during the copy operation to prevent changes to the blob from a different client. The `Copy Blob` operation saves the `ETag` value of the source blob when the copy operation starts. If the `ETag` value is changed before the copy operation finishes, the operation fails.
:::code language="java" source="~/azure-storage-snippets/blobs/howto/Java/blob-devguide/blob-devguide-blobs/src/main/java/com/blobs/devguide/blobs/BlobCopy.java" id="Snippet_CopyAcrossStorageAccounts_CopyBlob":::
+> [!NOTE]
+> User delegation SAS tokens offer greater security, as they're signed with Azure AD credentials instead of an account key. To create a user delegation SAS token, the Azure AD security principal needs appropriate permissions. For authorization requirements, see [Get User Delegation Key](/rest/api/storageservices/get-user-delegation-key#authorization).
+ ## Copy a blob from a source outside of Azure You can perform a copy operation on any source object that can be retrieved via HTTP GET request on a given URL, including accessible objects outside of Azure. The following example shows a scenario for copying a blob from an accessible source object URL.
storage Storage Blob Copy Url Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-copy-url-dotnet.md
For large objects, you may choose to work with individual blocks. The following
- [StageBlockFromUri](/dotnet/api/azure.storage.blobs.specialized.blockblobclient.stageblockfromuri) - [StageBlockFromUriAsync](/dotnet/api/azure.storage.blobs.specialized.blockblobclient.stageblockfromuriasync)
-## Copy a blob within the same storage account
+## Copy a blob from a source within Azure
-If you're copying a blob within the same storage account, access to the source blob can be authorized via Azure Active Directory (Azure AD), a shared access signature (SAS), or an account key.
+If you're copying a blob from a source within Azure, access to the source blob can be authorized via Azure Active Directory (Azure AD), a shared access signature (SAS), or an account key.
-The following example shows a scenario for copying a source blob within the same storage account. The [SyncUploadFromUriAsync](/dotnet/api/azure.storage.blobs.specialized.blockblobclient.syncuploadfromuriasync) method can optionally accept a Boolean parameter to indicate whether an existing blob should be overwritten, as shown in the example. The `overwrite` parameter defaults to false.
+The following example shows a scenario for copying from a source blob within Azure. The [SyncUploadFromUriAsync](/dotnet/api/azure.storage.blobs.specialized.blockblobclient.syncuploadfromuriasync) method can optionally accept a Boolean parameter to indicate whether an existing blob should be overwritten, as shown in the example. The `overwrite` parameter defaults to false.
:::code language="csharp" source="~/azure-storage-snippets/blobs/howto/dotnet/BlobDevGuideBlobs/PutBlobFromURL.cs" id="Snippet_CopyWithinAccount_PutBlobFromURL"::: The [SyncUploadFromUriAsync](/dotnet/api/azure.storage.blobs.specialized.blockblobclient.syncuploadfromuriasync) method can also accept a [BlobSyncUploadFromUriOptions](/dotnet/api/azure.storage.blobs.models.blobsyncuploadfromurioptions) parameter to specify further options for the operation.
-## Copy a blob from another storage account
-
-If the source is a blob in another storage account, the source blob must either be public, or authorized via Azure AD or SAS token. The SAS token needs to include the **Read ('r')** permission. To learn more about SAS tokens, see [Delegate access with shared access signatures](../common/storage-sas-overview.md).
-
-The following example shows a scenario for copying a blob from another storage account. In this example, we create a source blob URI with an appended *service SAS token* by calling [GenerateSasUri](/dotnet/api/azure.storage.blobs.blobcontainerclient.generatesasuri) on the blob client. To use this method, the source blob client needs to be authorized via account key.
--
-If you already have a SAS token, you can construct the URI for the source blob as follows:
-
-```csharp
-// Append the SAS token to the URI - include ? before the SAS token
-var sourceBlobSASURI = new Uri(
- $"https://{srcAccountName}.blob.core.windows.net/{srcContainerName}/{srcBlobName}?{sasToken}");
-```
-
-You can also [create a user delegation SAS token with .NET](storage-blob-user-delegation-sas-create-dotnet.md). User delegation SAS tokens offer greater security, as they're signed with Azure AD credentials instead of an account key.
- ## Copy a blob from a source outside of Azure You can perform a copy operation on any source object that can be retrieved via HTTP GET request on a given URL, including accessible objects outside of Azure. The following example shows a scenario for copying a blob from an accessible source object URL.
storage Storage Blob Copy Url Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-copy-url-java.md
For large objects, you can work with individual blocks. The following method wra
If you're copying a blob from a source within Azure, access to the source blob can be authorized via Azure Active Directory (Azure AD), a shared access signature (SAS), or an account key.
-The following example shows a scenario for copying a source blob within Azure. The [uploadFromUrl](/java/api/com.azure.storage.blob.specialized.blockblobclient#method-details) method can optionally accept a Boolean parameter to indicate whether an existing blob should be overwritten, as shown in the example.
+The following example shows a scenario for copying from a source blob within Azure. The [uploadFromUrl](/java/api/com.azure.storage.blob.specialized.blockblobclient#method-details) method can optionally accept a Boolean parameter to indicate whether an existing blob should be overwritten, as shown in the example.
:::code language="java" source="~/azure-storage-snippets/blobs/howto/Java/blob-devguide/blob-devguide-blobs/src/main/java/com/blobs/devguide/blobs/BlobCopy.java" id="Snippet_CopyFromAzure_PutBlobFromURL":::
storage Storage Quickstart Blobs Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-quickstart-blobs-cli.md
az storage blob upload \
This operation creates the blob if it doesn't already exist, and overwrites it if it does. Upload as many files as you like before continuing.
+When you upload a blob by using the Azure CLI, the CLI issues the corresponding [REST API calls](/rest/api/storageservices/blob-service-rest-api) over the HTTP and HTTPS protocols.
+ To upload multiple files at the same time, you can use the [az storage blob upload-batch](/cli/azure/storage/blob) command. ## List the blobs in a container
storage Videos Azure Files And File Sync https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/videos-azure-files-and-file-sync.md
+
+ Title: Azure Files and File Sync videos
+description: Browse videos about Azure Files and Azure File Sync, including setup, domain join, and integration scenarios.
+++ Last updated : 04/19/2023++++++
+# Azure Files and Azure File Sync videos
+If you're new to Azure Files and Azure File Sync, or you're looking to deepen your understanding, this article provides a comprehensive list of video content released over time. Some videos might not reflect the latest updates.
+
+## Video list
+- [Domain join Azure File Share with On-Premise Active Directory & replace your file server with Azure File Share.](https://www.youtube.com/watch?v=jd49W33DxkQ)
+- [How to mount Azure File Share in Windows?](https://www.youtube.com/watch?v=bmRZi9iGsK0)
+- [NFS 4.1 for Azure File Shares](https://www.youtube.com/watch?v=44qVRZg-bMA&list=PLEq-KSMM-P-0jRrVF5peNCA0GbBZrOhE1&index=10)
+- [How to setup Azure File Sync?](https://www.youtube.com/watch?v=V43p6qIhFkc&list=PLEq-KSMM-P-0jRrVF5peNCA0GbBZrOhE1&index=13)
+- [Integrating HPC Pack with Azure Files](https://www.youtube.com/watch?v=uStaB09y6TE)
storage File Sync Cloud Tiering Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/file-sync/file-sync-cloud-tiering-overview.md
It's also possible for a file to be partially tiered or partially recalled. In a
> Size represents the logical size of the file. Size on disk represents the physical size of the file stream that's stored on the disk. ## Low disk space mode
-Disks with server endpoints can run out of space for several reasons, even with cloud tiering enabled. This could result in Azure File Sync not working as expected and even becoming unusable. While it isn't possible for Azure File Sync to prevent these occurrences completely, low disk space mode (new for Azure File Sync agent version 15.1) is designed to avoid a server endpoint reaching this situation.
+Disks that have server endpoints can run out of space for various reasons, even when cloud tiering is enabled. These reasons include:
+
+- Data being manually copied to the disk outside of the server endpoint path
+- Slow or delayed sync causing files not to be tiered
+- Excessive recalls of tiered files
+
+When disk space runs out, Azure File Sync might not function correctly and can even become unusable. While it's not possible for Azure File Sync to completely prevent these occurrences, low disk space mode (available in Azure File Sync agent version 15.1 and later) is designed to prevent a server endpoint from reaching this situation.
For server endpoints with cloud tiering enabled and volume free space policy set, if the free space on the volume drops below the calculated threshold, then the volume is in low disk space mode.
If a volume has two server endpoints, one with tiering enabled and one without t
### How is the threshold for low disk space mode calculated? Calculate the threshold by taking the minimum of the following three numbers:-- 10% of volume free space in GB -- Volume Free Space Policy in GB-- 20 GB of volume free space
+- 10% of volume free space in GiB
+- Volume Free Space Policy in GiB
+- 20 GiB
The following table includes some examples of how the threshold is calculated and when the volume will be in low disk space mode.
-| Volume Size | Volume Free Space Policy | Current Volume Free Space | Threshold \= Min (10%, Volume Free Space Policy, 20 GB) | Is Low Disk Space Mode? | Reason |
-| -- | | - | -- | -- | |
-| 100 GB | 7% (7 GB) | 9% (9 GB) | 7GB = Min (10 GB, 7 GB, 20 GB) | No | Current Volume Free Space > Threshold |
-| 100 GB | 7% (7 GB) | 5% (5 GB) | 7GB = Min (10 GB, 7 GB, 20 GB) | Yes | Current Volume Free Space < Threshold |
-| 300 GB | 8% (24 GB) | 7% (21 GB) | 20GB = Min (30 GB, 24 GB, 20 GB) | No | Current Volume Free Space > Threshold |
-| 300 GB | 8% (24 GB) | 6% (18 GB) | 20GB = Min (30 GB, 24 GB, 20 GB) | Yes | Current Volume Free Space < Threshold |
+| Volume Size | 10% of Volume Size | Volume Free Space Policy | Threshold = Min(10% of Volume Size, Volume Free Space Policy, 20GB) | Current Volume Free Space | Is Low Disk Space Mode? | Reason |
+|--|--|--|--|--|--|--|
+| 100 GiB | 10 GiB | 7% (7 GiB) | 7 GiB = Min (10 GiB, 7 GiB, 20 GiB) | 9% (9 GiB) | No | Current Volume Free Space > Threshold |
+| 100 GiB | 10 GiB | 7% (7 GiB) | 7 GiB = Min (10 GiB, 7 GiB, 20 GiB) | 5% (5 GiB) | Yes | Current Volume Free Space < Threshold |
+| 300 GiB | 30 GiB | 8% (24 GiB) | 20 GiB = Min (30 GiB, 24 GiB, 20 GiB) | 7% (21 GiB) | No | Current Volume Free Space > Threshold |
+| 300 GiB | 30 GiB | 8% (24 GiB) | 20 GiB = Min (30 GiB, 24 GiB, 20 GiB) | 6% (18 GiB) | Yes | Current Volume Free Space < Threshold |
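The threshold calculation above can also be expressed as a short sketch (values in GiB), which reproduces the rows in the table:

```python
# Sketch of the low disk space mode threshold calculation (values in GiB).
def low_disk_threshold_gib(volume_size_gib: float, free_space_policy_percent: float) -> float:
    """Threshold = min(10% of volume size, volume free space policy, 20 GiB)."""
    return min(
        0.10 * volume_size_gib,
        (free_space_policy_percent / 100.0) * volume_size_gib,
        20.0,
    )

def is_low_disk_space_mode(volume_size_gib, free_space_policy_percent, current_free_gib):
    return current_free_gib < low_disk_threshold_gib(volume_size_gib, free_space_policy_percent)

# Matches the last table row: 300 GiB volume, 8% policy, 18 GiB free -> low disk space mode.
print(is_low_disk_space_mode(300, 8, 18))  # True
```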
### How does low disk space mode work with volume free space policy? Low disk space mode always respects the volume free space policy. The threshold calculation is designed to make sure volume free space policy set by the user is respected.
+### What is the most common cause for a server endpoint being in low disk space mode?
+The primary cause of low disk space mode is copying or moving large amounts of data to the disk where a tiering-enabled server endpoint is located.
+ ### How to get out of low disk space mode?
-Low disk space mode is designed to revert to normal behavior when volume free space is above the threshold. You can help speed up the process by looking for any recently created files outside the server endpoint location and moving them to a different disk.
+Here are two ways to exit low disk space mode on the server endpoint:
+
+1. Low disk space mode exits automatically, without any intervention: while in this mode, the server doesn't persist recalls and tiers files more frequently, so normal behavior resumes once volume free space rises above the threshold.
+2. You can manually speed up the process by increasing the volume size or freeing up space outside the server endpoint.
### How to check if a server is in Low Disk Space mode? Event ID 19000 is logged to the Telemetry event log every minute for each server endpoint. Use this event to determine if the server endpoint is in low disk mode (IsLowDiskMode = true). The Telemetry event log is located in Event Viewer under Applications and Services\Microsoft\FileSync\Agent.
storage File Sync Troubleshoot Cloud Tiering https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/file-sync/file-sync-troubleshoot-cloud-tiering.md
description: Troubleshoot common issues with cloud tiering in an Azure File Sync
Previously updated : 4/12/2023 Last updated : 4/21/2023
If files fail to tier to Azure Files:
| 0x800705aa | -2147023446 | ERROR_NO_SYSTEM_RESOURCES | The file failed to tier due to insufficient system resources. | If the error persists, investigate which application or kernel-mode driver is exhausting system resources. | | 0x8e5e03fe | -1906441218 | JET_errDiskIO | The file failed to tier due to an I/O error when writing to the cloud tiering database. | If the error persists, run chkdsk on the volume and check the storage hardware. | | 0x8e5e0442 | -1906441150 | JET_errInstanceUnavailable | The file failed to tier because the cloud tiering database isn't running. | To resolve this issue, restart the FileSyncSvc service or server. If the error persists, run chkdsk on the volume and check the storage hardware. |
-| 0x80C80285 | -2134375803 | ECS_E_GHOSTING_SKIPPED_BY_<br>CUSTOM_EXCLUSION_LIST | The file can't be tiered because the file type is excluded from tiering. | To tier files with this file type, modify the GhostingExclusionList registry setting which is located under HKEY_LOCAL_MACHINE<br>\SOFTWARE\Microsoft\Azure\StorageSync |
+| 0x80C80285 | -2134375803 | ECS_E_GHOSTING_SKIPPED_BY_<br>CUSTOM_EXCLUSION_LIST | The file can't be tiered because the file type is excluded from tiering. | To tier files with this file type, modify the GhostingExclusionList registry setting which is located under HKEY_LOCAL_MACHINE\SOFTWARE<br>\Microsoft\Azure\StorageSync |
| 0x80C86050 | -2134351792 | ECS_E_REPLICA_NOT_READY_FOR_<br>TIERING | The file failed to tier because the current sync mode is initial upload or reconciliation. | No action required. The file will be tiered once sync completes initial upload or reconciliation. |
+| 0x80c8304e | -2134364082 | ECS_E_WORK_FRAMEWORK_ACTION_<br>RETRY_NOT_SUPPORTED | An unexpected error occurred. | No action required. This error should automatically resolve. If the error persists for several days, create a support request. |
+| 0x80c8309c | -2134364004 | ECS_E_CREATE_SV_BATCHED_CHANGE_<br>DETECTION_FAILED | An unexpected error occurred. | No action required. This error should automatically resolve. If the error persists for several days, create a support request. |
+| 0x8000ffff | -2147418113 | E_UNEXPECTED | An unexpected error occurred. | No action required. This error should automatically resolve. If the error persists for several days, create a support request. |
+| 0x80c80220 | -2134375904 | ECS_E_SYNC_METADATA_IO_ERROR | The sync database has encountered an IO error. | No action required. This error should automatically resolve. If the error persists for several days, create a support request. |
+| 0x80c830a7 | -2134363993 | ECS_E_AZURE_FILE_SNAPSHOT_LIMIT_<br>REACHED | The Azure file snapshot limit has been reached. | Upgrade the Azure File Sync agent to the latest version. After upgrading the agent, run the `DeepScrubbingScheduledTask` located under \Microsoft\StorageSync. |
+| 0x80c80367 | -2134375577 | ECS_E_FILE_SNAPSHOT_OPERATION_<br>EXECUTION_MAX_ATTEMPTS_REACHED | An unexpected error occurred. | No action required. This error should automatically resolve. If the error persists for several days, create a support request. |
+| 0x80c8306f | -2134364049 | ECS_E_ETAG_MISMATCH | An unexpected error occurred. | No action required. This error should automatically resolve. If the error persists for several days, create a support request. |
+| 0x80c8304c | -2134364084 | ECS_E_ASYNC_POLLING_TIMEOUT | Timeout error occurred. | No action required. This error should automatically resolve. If the error persists for several days, create a support request. |
+| 0x299 | N/A | ERROR_FILE_SYSTEM_LIMITATION | An unexpected error occurred. | No action required. This error should automatically resolve. If the error persists for several days, create a support request. |
+| 0x80c83054 | -2134364076 | ECS_E_CREATE_SV_UNKNOWN_<br>GLOBAL_ID | An unexpected error occurred. | No action required. This error should automatically resolve. If the error persists for several days, create a support request. |
+| 0x80c8309b | -2134364005 | ECS_E_CREATE_SV_PER_ITEM_CHANGE_<br>DETECTION_FAILED | An unexpected error occurred. | No action required. This error should automatically resolve. If the error persists for several days, create a support request. |
+| 0x80c83034 | -2134364108 | ECS_E_FORBIDDEN | Access is denied. | Please check the access policies on the storage account, and also check your proxy settings. [Learn more](file-sync-firewall-and-proxy.md#test-network-connectivity-to-service-endpoints). |
+| 0x34 | 52 | ERROR_DUP_NAME | An unexpected error occurred. | No action required. This error should automatically resolve. If the error persists for several days, create a support request. |
+| 0x1128 | 4392 | ERROR_INVALID_REPARSE_DATA | The data is corrupted and unreadable. | Run chkdsk on the volume. [Learn more](/windows-server/administration/windows-commands/chkdsk?tabs=event-viewer). |
+| 0x8e5e0450 | -1906441136 | JET_errInvalidSesid | An unexpected error occurred. | No action required. This error should automatically resolve. If the error persists for several days, create a support request. |
+| 0x80092004 | -2146885628 | CRYPT_E_NOT_FOUND | Certificate required for Azure File Sync authentication is missing. | Run this PowerShell command on the server to reset the certificate `Reset-AzStorageSyncServerCertificate -ResourceGroupName <string> -StorageSyncServiceName <string>` |
+| 0x80c80020 | -2134376416 | ECS_E_CLUSTER_NOT_RUNNING | The Failover Cluster service is not running. | Verify the cluster service (clussvc) is running. [Learn more](/troubleshoot/windows-server/high-availability/troubleshoot-cluster-service-fails-to-start). |
+| 0x80c83036 | -2134364106 | ECS_E_NOT_FOUND | An unexpected error occurred. | No action required. This error should automatically resolve. If the error persists for several days, create a support request. |
+| 0x801f0005 | -2145452027 | ERROR_FLT_INVALID_NAME_REQUEST | An unexpected error occurred. | No action required. This error should automatically resolve. If the error persists for several days, create a support request. |
+| 0x1126 | 4390 | ERROR_NOT_A_REPARSE_POINT | An internal error occurred. | No action required. This error should automatically resolve. If the error persists for several days, create a support request. |
+| 0x718 | N/A | ERROR_NOT_ENOUGH_QUOTA | Not enough server memory resources available to process this command. | Monitor memory usage on your server. [Learn more](file-sync-planning.md#recommended-system-resources). |
+| 0x46a | N/A | ERROR_NOT_ENOUGH_SERVER_MEMORY | Not enough server memory resources available to process this command. | Monitor memory usage on your server. [Learn more](file-sync-planning.md#recommended-system-resources). |
+| 0x80070026 | -2147024858 | COR_E_ENDOFSTREAM | An external error occurred. | No action required. This error should automatically resolve. If the error persists for several days, create a support request. |
+| 0x80131501 | -2146233087 | COR_E_SYSTEM | An external error occurred. | No action required. This error should automatically resolve. If the error persists for several days, create a support request. |
+| 0x80c86040 | -2134351808 | ECS_E_AZURE_FILE_SHARE_INVALID_<br>HEADER | An unexpected error occurred. | If the error persists for more than a day, create a support request. |
+| 0x80c80339 | -2134375623 | ECS_E_CERT_DATE_INVALID | The server's SSL certificate is expired. | Check with your organization's tech support to get help. If you need further investigation, create a support request. |
+| 0x80c80337 | -2134375625 | ECS_E_INVALID_CA | The server's SSL certificate was issued by a certificate authority that isn't trusted by this PC. | Check with your organization's tech support to get help. If you need further investigation, create a support request. |
+| 0x80c80001 | -2134376447 | ECS_E_SYNC_INVALID_PROTOCOL_FORMAT | A connection with the service could not be established. | Please check and configure the proxy setting correctly or remove the proxy setting. [Learn more](file-sync-firewall-and-proxy.md#test-network-connectivity-to-service-endpoints). |
+| 0x6d9 | N/A | EPT_S_NOT_REGISTERED | An external error occurred. | No action required. This error should automatically resolve. If the error persists for several days, create a support request. |
+| 0x35 | 53 | ERROR_BAD_NETPATH | An external error occurred. | No action required. This error should automatically resolve. If the error persists for several days, create a support request. |
+| 0x571 | N/A | ERROR_DISK_CORRUPT | The disk structure is corrupted and unreadable. | Run chkdsk on the volume. [Learn more](/windows-server/administration/windows-commands/chkdsk?tabs=event-viewer). |
+| 0x52e | N/A | ERROR_LOGON_FAILURE | Operation failed due to an authentication failure. | If the error persists for more than a day, create a support request. |
+| 0x8002802b | -2147319765 | TYPE_E_ELEMENTNOTFOUND | An unexpected error occurred. | No action required. This error should automatically resolve. If the error persists for several days, create a support request. |
+| 0x80072f00 | -2147012864 | WININET_E_FORCE_RETRY | A connection with the service could not be established. | No action required. This error should automatically resolve. If the error persists for several days, create a support request. |
## How to troubleshoot files that fail to be recalled If files fail to be recalled:
If files fail to be recalled:
| 0x80072f8f | -2147012721 | WININET_E_DECODING_FAILED | The file failed to recall because the server was unable to decode the response from the Azure File Sync service. | This error typically occurs if a network proxy is modifying the response from the Azure File Sync service. Please check your proxy configuration. | | 0x80090352 | -2146892974 | SEC_E_ISSUING_CA_UNTRUSTED | The file failed to recall because your organization is using a TLS terminating proxy or a malicious entity is intercepting the traffic between your server and the Azure File Sync service. | If you're certain this is expected (because your organization is using a TLS terminating proxy), follow the steps documented for error [CERT_E_UNTRUSTEDROOT](file-sync-troubleshoot-sync-errors.md#-2146762487) to resolve this issue. | | 0x80c86047 | -2134351801 | ECS_E_AZURE_SHARE_SNAPSHOT_NOT_FOUND | The file failed to recall because it's referencing a version of the file which no longer exists in the Azure file share. | This issue can occur if the tiered file was restored from a backup of the Windows Server. To resolve this issue, restore the file from a snapshot in the Azure file share. |
+| 0x32 | 50 | ERROR_NOT_SUPPORTED | An internal error occurred. | Please upgrade to the latest Azure File Sync agent version. If the error persists after upgrading the agent, create a support request. |
+| 0x6 | N/A | ERROR_INVALID_HANDLE | An internal error occurred. | If the error persists for more than a day, create a support request. |
+| 0x80c80310 | -2134375664 | ECS_E_INVALID_DOWNLOAD_RESPONSE | Azure File sync error. | If the error persists for more than a day, create a support request. |
+| 0x45d | N/A | ERROR_IO_DEVICE | An internal error occurred. | If the error persists for more than a day, create a support request. |
+| 0x80c8604b | -2134351797 | ECS_E_AZURE_FILE_SHARE_FILE_NOT_FOUND | File not found in the file share. | You have likely performed an unsupported operation. [Learn more](file-sync-disaster-recovery-best-practices.md). Please find the original copy of the file and overwrite the tiered file in the server endpoint. |
+| 0x21 | 33 | ERROR_LOCK_VIOLATION | The process cannot access the file because another process has locked a portion of the file. | No action required. Once the application closes the handle to the file, recall should succeed. |
+| 0x80c8604c | -2134351796 | ECS_E_AZURE_FILE_SNAPSHOT_NOT_FOUND_<br>SYNC_PENDING | An internal error occurred. | No action required. If the error persists for more than a day, create a support request. Recall should succeed after the sync session completes. |
+| 0x80c80312 | -2134375662 | ECS_E_DOWNLOAD_SESSION_STREAM_INTERRUPTED | Couldn't finish downloading files. Sync will try again later. | If the error persists, use the `Test-StorageSyncNetworkConnectivity` cmdlet to check network connectivity to the service endpoints. [Learn more](file-sync-firewall-and-proxy.md#test-network-connectivity-to-service-endpoints). |
+| 0x80c8600c | -2134351860 | ECS_E_AZURE_INTERNAL_ERROR | The server encountered an internal error. | No action required. If the error persists for more than a day, create a support request. |
+| 0x80c8600b | -2134351861 | ECS_E_AZURE_INVALID_RANGE | The server encountered an internal error. | No action required. If the error persists for more than a day, create a support request. |
+| 0x45b | N/A | ERROR_SHUTDOWN_IN_PROGRESS | A system shutdown is in progress. | No action required. If the error persists for more than a day, create a support request. |
+| 0x80072efd | -2147012867 | WININET_E_CANNOT_CONNECT | A connection with the service could not be established. | Use the `Test-StorageSyncNetworkConnectivity` cmdlet to check network connectivity to the service endpoints. [Learn more](file-sync-firewall-and-proxy.md#test-network-connectivity-to-service-endpoints). |
+| 0x800703ee | -2147023890 | ERROR_FILE_INVALID | The volume for a file has been externally altered so that the opened file is no longer valid. | If the error persists for more than a day, create a support request. |
+| 0x80c86048 | -2134351800 | ECS_E_AZURE_FILE_SNAPSHOT_NOT_FOUND | An internal error occurred. | You have likely performed an unsupported operation. [Learn more](file-sync-disaster-recovery-best-practices.md). Please find the original copy of the file and overwrite the tiered file in the server endpoint. |
+| 0x80072f78 | -2147012744 | WININET_E_INVALID_SERVER_RESPONSE | A connection with the service could not be established. | Use the `Test-StorageSyncNetworkConnectivity` cmdlet to check network connectivity to the service endpoints. [Learn more](file-sync-firewall-and-proxy.md#test-network-connectivity-to-service-endpoints). |
+| 0x8007139f | -2147019873 | ERROR_INVALID_STATE | An internal error occurred. | No action required. If the error persists for more than a day, create a support request. |
+| 0x570 | N/A | ERROR_FILE_CORRUPT | The file or directory is corrupted and unreadable. | Run chkdsk on the volume. [Learn more](/windows-server/administration/windows-commands/chkdsk?tabs=event-viewer). |
+| 0x5ad | N/A | ERROR_WORKING_SET_QUOTA | Insufficient quota to complete the requested service. | Monitor memory usage on your server. If the error persists for more than a day, create a support request. |
+| 0x8 | N/A | ERROR_NOT_ENOUGH_MEMORY | Not enough memory resources are available to process this command. | Monitor memory usage on your server. If the error persists for more than a day, create a support request. |
+| 0x80c80072 | -2134376334 | ECS_E_BAD_GATEWAY | A connection with the service could not be established. | Use the `Test-StorageSyncNetworkConnectivity` cmdlet to check network connectivity to the service endpoints. [Learn more](file-sync-firewall-and-proxy.md#test-network-connectivity-to-service-endpoints). |
+| 0x80190193 | -2145844845 | HTTP_E_STATUS_FORBIDDEN | Forbidden (403) error occurred. | Update Azure file share access policy. [Learn more](../../role-based-access-control/built-in-roles.md). |
+| 0x80c8604e | -2134351794 | ECS_E_AZURE_FILE_SNAPSHOT_NOT_FOUND_ON_<br>CONFLICT_FILE | Unable to recall sync conflict loser file from Azure file share. | If this error is happening for a tiered file that is a sync conflict file, this file might not be needed by end users anymore. If the original file is available and valid, you may remove this file from the server endpoint. |
+| 0x80c80075 | -2134376331 | ECS_E_ACCESS_TOKEN_CATASTROPHIC_FAILURE | An internal error occurred. | No action required. If the error persists for more than a day, create a support request. |
+| 0x80c8005b | -2134376357 | ECS_E_AZURE_FILE_SERVICE_UNAVAILABLE | The Azure File Service is currently unavailable. | If the error persists for more than a day, create a support request. |
+| 0x80c83099 | -2134364007 | ECS_E_PRIVATE_ENDPOINT_ACCESS_BLOCKED | Private endpoint configuration access blocked. | Check the private endpoint configuration and allow access to the Azure File Sync service. [Learn more](file-sync-firewall-and-proxy.md#test-network-connectivity-to-service-endpoints). |
+| 0x80c86000 | -2134351872 | ECS_E_AZURE_AUTHENTICATION_FAILED | Server failed to authenticate the request. | Check the network configuration and make sure the storage account accepts the server IP address. You can do this by adding the server IP, adding the server's IP subnet, or adding the server vnet to the authorized access control list to access the storage account. [Learn more](file-sync-deployment-guide.md#optional-configure-firewall-and-virtual-network-settings). |
+| 0x2ef1 | 12017 | ERROR_WINHTTP_OPERATION_CANCELLED | A connection with the service could not be established. | If the error persists, use the `Test-StorageSyncNetworkConnectivity` cmdlet to check network connectivity to the service endpoints. [Learn more](file-sync-firewall-and-proxy.md#test-network-connectivity-to-service-endpoints). |
+| 0x80c80338 | -2134375624 | ECS_E_CERT_CN_INVALID | The server's SSL certificate contains incorrect hostnames. The certificate can't be used to establish the SSL connection. | Check with your organization's tech support to get help. If you need further investigation, create a support request. |
+| 0x80c8000c | -2134376436 | ECS_E_SYNC_UNKNOWN_URI | An internal error occurred. | No action required. If the error persists for more than a day, create a support request. |
+| 0x80c8033a | -2134375622 | ECS_E_SECURITY_CHANNEL_ERROR | There was a problem validating the server's SSL certificate. | Check with your organization's tech support to get help. If you need further investigation, create a support request. |
+| 0x80131509 | -2146233079 | COR_E_INVALIDOPERATION | An unexpected error occurred. | If the error persists for more than a day, create a support request. |
+| 0x80c8603d | -2134351811 | ECS_E_AZURE_UNKNOWN_FAILURE | An unexpected error occurred. | No action required. If the error persists for more than a day, create a support request. |
+| 0x80c8033f | -2134375617 | ECS_E_TOKEN_LIFETIME_IS_TOO_LONG | An internal error occurred. | No action required. If the error persists for more than a day, create a support request. |
+| 0x80190190 | -2145844848 | HTTP_E_STATUS_BAD_REQUEST | A connection with the service could not be established. | No action required. If the error persists for more than a day, create a support request. |
+| 0x80c86036 | -2134351818 | ECS_E_AZURE_FILE_PARENT_NOT_FOUND | The specified parent path for the file does not exist | You have likely performed an unsupported operation. [Learn more](file-sync-disaster-recovery-best-practices.md). Please find the original copy of the file and overwrite the tiered file in the server endpoint. |
+| 0x80c86049 | -2134351799 | ECS_E_AZURE_SHARE_SNAPSHOT_FILE_NOT_FOUND | File not found in the share snapshot. | You have likely performed an unsupported operation. [Learn more](file-sync-disaster-recovery-best-practices.md). Please find the original copy of the file and overwrite the tiered file in the server endpoint. |
+| 0x80c80311 | -2134375663 | ECS_E_DOWNLOAD_SESSION_HASH_CONFLICT | An internal error occurred. | If the error persists for more than a day, create a support request. |
+| 0x800700a4 | -2147024732 | ERROR_MAX_THRDS_REACHED | An internal error occurred. | No action required. If the error persists for more than a day, create a support request. |
+| 0x80070147 | -2147024569 | ERROR_OFFSET_ALIGNMENT_VIOLATION | An internal error occurred. | If the error persists for more than a day, create a support request. |
+| 0x80090321 | -2146893023 | SEC_E_BUFFER_TOO_SMALL | An internal error occurred. | If the error persists for more than a day, create a support request. |
+| 0x801901a0 | -2145844832 | HTTP_E_STATUS_RANGE_NOT_SATISFIABLE | An internal error occurred. | If the error persists for more than a day, create a support request. |
+| 0x80c80066 | -2134376346 | ECS_E_CLUSTER_ID_MISMATCH | There is a mismatch between the cluster ID returned from cluster API and the cluster ID saved during the registration. | Please create a support request for further investigation of the issue. |
+| 0x80c8032d | -2134375635 | ECS_E_PROXY_AUTH_REQUIRED | The proxy server used to access the internet needs your current credentials. | If your proxy requires authentication, update the proxy credentials. [Learn more](file-sync-firewall-and-proxy.md#proxy). |
+| 0x7a | 122 | ERROR_INSUFFICIENT_BUFFER | An internal error occurred. | No action required. If the error persists for more than a day, create a support request. |
+| 0x8019012e | -2145844946 | HTTP_E_STATUS_REDIRECT | Azure File Sync does not support HTTP redirection. | Disable HTTP redirect on your proxy server or network device. |
+| 0x6be | N/A | RPC_S_CALL_FAILED | An unknown error occurred. | If the error persists, use the `Test-StorageSyncNetworkConnectivity` cmdlet to check network connectivity to the service endpoints. [Learn more](file-sync-firewall-and-proxy.md#test-network-connectivity-to-service-endpoints). |
+| 0x2747 | 10055 | WSAENOBUFS | An internal error occurred. | If the error persists, use the `Test-StorageSyncNetworkConnectivity` cmdlet to check network connectivity to the service endpoints. [Learn more](file-sync-firewall-and-proxy.md#test-network-connectivity-to-service-endpoints). |
## Tiered files are not accessible on the server after deleting a server endpoint Tiered files on a server will become inaccessible if the files aren't recalled prior to deleting a server endpoint.
storage Storage Files Identity Auth Azure Active Directory Enable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-identity-auth-azure-active-directory-enable.md
description: Learn how to enable identity-based Kerberos authentication for hybr
Previously updated : 03/28/2023 Last updated : 04/18/2023
After enabling Azure AD Kerberos authentication, you'll need to explicitly grant
4. Select the application with the name matching **[Storage Account] `<your-storage-account-name>`.file.core.windows.net**. 5. Select **API permissions** in the left pane.
-6. Select **Grant admin consent**.
+6. Select **Grant admin consent for [Directory Name]** to grant consent for the three requested API permissions (openid, profile, and User.Read) for all accounts in the directory.
7. Select **Yes** to confirm. > [!IMPORTANT]
storage Isv File Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/solution-integration/validated-partners/primary-secondary-storage/isv-file-services.md
This article compares several ISV solutions that provide files services in Azure
- Byte Level Locking (multiple simultaneous R/W opens) **Qumulo**-- [Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/qumulo1584033880660.qumulo-saas?tab=Overview)
+- [Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/qumulo1584033880660.qumulo-saas-mpp)
- Support for REST, and FTP **Tiger Technology**
synapse-analytics Apache Spark Sql Connector https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/data-sources/apache-spark-sql-connector.md
jdbc_df = spark.read \
.load() ```
+> [!IMPORTANT]
+> - A required dependency must be installed in order to authenticate using Active Directory.
+> - The format of `user` when using ActiveDirectoryPassword should be the UPN format, for example `username@domainname.com`.
+> - For **Scala**, the `com.microsoft.aad.adal4j` artifact will need to be installed.
+> - For **Python**, the `adal` library will need to be installed. This is available via pip.
+> - Check the [sample notebooks](https://github.com/microsoft/sql-spark-connector/tree/master/samples) for examples. For the latest drivers and versions, visit [Apache Spark connector: SQL Server & Azure SQL](/sql/connect/spark/connector). A PySpark sketch of the `ActiveDirectoryPassword` option is shown after this note.
+
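As a hedged illustration of the Python path called out in the note above, the following PySpark sketch reads from Azure SQL with `ActiveDirectoryPassword`. It assumes an existing Spark session with the connector and the `adal` library installed, and the server, database, table, and credentials are placeholders.

```python
# Hedged PySpark sketch: read from Azure SQL using Active Directory password authentication.
jdbc_df = (
    spark.read
    .format("com.microsoft.sqlserver.jdbc.spark")
    .option("url", "jdbc:sqlserver://<server-name>.database.windows.net:1433;databaseName=<database-name>")
    .option("dbtable", "<schema>.<table>")
    .option("authentication", "ActiveDirectoryPassword")
    .option("user", "username@domainname.com")  # UPN format, as noted above
    .option("password", "<password>")
    .load()
)
jdbc_df.show(5)
```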
+## Support
+
+The Apache Spark Connector for Azure SQL and SQL Server is an open-source project. This connector doesn't come with Microsoft support. For issues with or questions about the connector, create an issue in the project's GitHub repository. The connector community is active and monitors submissions.
+ ## Next steps - [Learn more about the SQL Server and Azure SQL connector](/sql/connect/spark/connector)
+- Visit the [SQL Spark connector GitHub repository](https://github.com/microsoft/sql-spark-connector).
- [View Azure Data SQL Samples](https://github.com/microsoft/sql-server-samples)
synapse-analytics Low Shuffle Merge For Apache Spark https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/low-shuffle-merge-for-apache-spark.md
+
+ Title: Low Shuffle Merge optimization on Delta tables
+description: Low Shuffle Merge optimization on Delta tables for Apache Spark
++++ Last updated : 04/11/2023++++
+# Low Shuffle Merge optimization on Delta tables
+
+The Delta Lake [MERGE command](https://docs.delta.io/latest/delta-update.html#upsert-into-a-table-using-merge) allows users to update a Delta table with advanced conditions, merging data from a source table, view, or DataFrame into a target table. However, the current algorithm isn't fully optimized for handling *unmodified* rows. With Low Shuffle Merge optimization, unmodified rows are excluded from the expensive shuffling operation that's needed to update matched rows.
+
+## Why we need Low Shuffle Merge
+
+Currently, the MERGE operation is performed with two join executions. The first join uses the whole target table and the source data to find the list of *touched* files in the target table, that is, the files that contain any matched rows. The second join then reads only those *touched* files together with the source data to perform the actual table update. Although the first join reduces the amount of data for the second join, the *touched* files can still contain a huge number of *unmodified* rows. The first join query is lighter because it only reads the columns in the given matching condition. The second join, which performs the table update, needs to load all columns, which incurs an expensive shuffle.
+
+With Low Shuffle Merge optimization, Delta temporarily keeps the matched-row result from the first join and reuses it for the second join. Based on that result, it excludes *unmodified* rows from the heavy shuffle. There are two separate write jobs for *matched* rows and *unmodified* rows, so the operation can produce up to twice as many output files compared to the previous behavior. However, the expected performance gain outweighs the possible small-files problem.
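For context, here's a minimal PySpark sketch of the kind of MERGE (upsert) this optimization targets; the table path, source data, and join column are placeholders.

```python
# Minimal PySpark sketch of a Delta MERGE (upsert); paths and column names are placeholders.
from delta.tables import DeltaTable

target = DeltaTable.forPath(spark, "<path-to-delta-table>")
source = spark.read.format("delta").load("<path-to-source-data>")  # or any DataFrame of changes

(
    target.alias("t")
    .merge(source.alias("s"), "t.id = s.id")
    .whenMatchedUpdateAll()      # matched rows go through the update path
    .whenNotMatchedInsertAll()   # unmatched source rows are inserted
    .execute()
)
```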
+
+## Availability
+
+> [!NOTE]
+> - Low Shuffle Merge is available as a Preview feature.
+
+It's available on Synapse Pools for Apache Spark versions 3.2 and 3.3.
+
+|Version| Availability | Default |
+|--|--|--|
+| Delta 0.6 / Spark 2.4 | No | - |
+| Delta 1.2 / Spark 3.2 | Yes | false |
+| Delta 2.2 / Spark 3.3 | Yes | true |
++
+## Benefits of Low Shuffle Merge
+
+* Unmodified rows in *touched* files are handled separately and don't go through the actual MERGE operation. This saves overall MERGE execution time and compute resources. The gain is larger when many rows are copied and only a few rows are updated.
+* Row orderings are preserved for unmodified rows. Therefore, the output files of unmodified rows can still be efficient for data skipping if the file was sorted or Z-ORDERED.
+* Even in the worst case, when the MERGE condition matches all rows in the touched files, the overhead is tiny.
++
+## How to enable and disable Low Shuffle Merge
+
+Once the configuration is set for the pool or session, all Spark write patterns will use the functionality.
+
+To use Low Shuffle Merge optimization, enable it using the following configuration:
+
+1. Scala and PySpark
+
+```scala
+spark.conf.set("spark.microsoft.delta.merge.lowShuffle.enabled", "true")
+```
+
+2. Spark SQL
+
+```SQL
+SET `spark.microsoft.delta.merge.lowShuffle.enabled` = true
+```
+
+To check the current configuration value, use the following command:
+
+1. Scala and PySpark
+
+```scala
+spark.conf.get("spark.microsoft.delta.merge.lowShuffle.enabled")
+```
+
+2. Spark SQL
+
+```SQL
+SET `spark.microsoft.delta.merge.lowShuffle.enabled`
+```
+
+To disable the feature, change the configuration as follows:
+
+1. Scala and PySpark
+
+```scala
+spark.conf.set("spark.microsoft.delta.merge.lowShuffle.enabled", "false")
+```
+
+2. Spark SQL
+
+```SQL
+SET `spark.microsoft.delta.merge.lowShuffle.enabled` = false
+```
synapse-analytics Optimize Write For Apache Spark https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/optimize-write-for-apache-spark.md
Optimize Write is a Delta Lake on Synapse feature that reduces the number of fil
This feature achieves the file size by using an extra data shuffle phase over partitions, causing an extra processing cost while writing the data. The small write penalty should be outweighed by read efficiency on the tables. > [!NOTE]
-> - Optimize write is available as a Preview feature.
-> - It is available on Synapse Pools for Apache Spark versions 3.1 and 3.2.
+> - It is available on Synapse Pools for Apache Spark versions above 3.1.
## Benefits of Optimize Writes
This feature achieves the file size by using an extra data shuffle phase over pa
## How to enable and disable the optimize write feature
-The optimize write feature is disabled by default.
+The optimize write feature is disabled by default. In Spark 3.3 pools, it's enabled by default for partitioned tables.
Once the configuration is set for the pool or session, all Spark write patterns will use the functionality.
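For example, here's a minimal sketch of enabling the feature for the current session. The setting name below follows the same `spark.microsoft.delta.optimizeWrite` pattern as the bin size setting shown next; verify it against the documentation for your pool version.

```SQL
-- Sketch: enable optimize write for the current session (verify the setting name for your pool version).
SET `spark.microsoft.delta.optimizeWrite.enabled` = true
```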
SET `spark.microsoft.delta.optimizeWrite.binSize` = 134217728
- [Use serverless Apache Spark pool in Synapse Studio](../quickstart-create-apache-spark-pool-studio.md). - [Run a Spark application in notebook](./apache-spark-development-using-notebooks.md). - [Create Apache Spark job definition in Azure Studio](./apache-spark-job-definitions.md).
-
+
virtual-desktop App Attach Msixmgr https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/app-attach-msixmgr.md
msixmgr.exe -Unpack -packagePath "C:\Users\ssa\Desktop\packageName_3.51.1.0_x64_
## Next steps
-Learn more about MSIX app attach at [What is MSIX app attach?](what-is-app-attach.md)
+To learn about the MSIXMGR tool, check out these articles:
-To learn how to set up app attach, check out these articles:
+- [MSIXMGR tool parameters](msixmgr-tool-syntax-description.md)
+- [What's new in the MSIXMGR tool](whats-new-msixmgr.md)
+To learn about MSIX app attach, check out these articles:
+
+- [What is MSIX app attach?](what-is-app-attach.md)
- [Set up MSIX app attach with the Azure portal](app-attach-azure-portal.md) - [Set up MSIX app attach using PowerShell](app-attach-powershell.md) - [Create PowerShell scripts for MSIX app attach](app-attach.md)
virtual-desktop Msixmgr Tool Syntax Description https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/msixmgr-tool-syntax-description.md
Before you can follow the instructions in this article, you'll need to do the fo
- [Download the MSIXMGR tool](https://aka.ms/msixmgr) - Get an MSIX-packaged application (.MSIX file) - Get administrative permissions on the machine where you'll create the MSIX image -- [Set up MSIXMGR tool](/azure/virtual-desktop/app-attach-msixmgr)
+- [Set up MSIXMGR tool](app-attach-msixmgr.md)
## Parameters
msixmgr.exe -?
To learn more about MSIX app attach, check out these articles: -- [What is MSIX app attach?](/azure/virtual-desktop/what-is-app-attach)-- [Set up MSIX app attach with the Azure portal](/azure/virtual-desktop/app-attach-azure-portal)-- [Set up MSIX app attach using PowerShell](/azure/virtual-desktop/app-attach-powershell)-- [Create PowerShell scripts for MSIX app attach](/azure/virtual-desktop/app-attach)-- [Prepare an MSIX image for Azure Virtual Desktop](/azure/virtual-desktop/app-attach-image-prep)-- [Set up a file share for MSIX app attach](/azure/virtual-desktop/app-attach-file-share)
+- [Using the MSIXMGR tool](app-attach-msixmgr.md)
+- [What's new in the MSIXMGR tool](whats-new-msixmgr.md)
+- [What is MSIX app attach?](what-is-app-attach.md)
+- [Set up MSIX app attach with the Azure portal](app-attach-azure-portal.md)
+- [Set up MSIX app attach using PowerShell](app-attach-powershell.md)
+- [Create PowerShell scripts for MSIX app attach](app-attach.md)
+- [Prepare an MSIX image for Azure Virtual Desktop](app-attach-image-prep.md)
+- [Set up a file share for MSIX app attach](app-attach-file-share.md)
-If you have questions about MSIX app attach, see our [App attach FAQ](/azure/virtual-desktop/app-attach-faq) and [App attach glossary](/azure/virtual-desktop/app-attach-glossary).
+If you have questions about MSIX app attach, see our [App attach FAQ](app-attach-faq.yml) and [App attach glossary](app-attach-glossary.md).
virtual-desktop Troubleshoot Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/troubleshoot-agent.md
Title: Troubleshoot Azure Virtual Desktop Agent Issues - Azure
description: How to resolve common Azure Virtual Desktop Agent and connectivity issues. Previously updated : 04/19/2023 Last updated : 04/21/2023
The Azure Virtual Desktop Agent can cause connection issues because of multiple
- Problems with updates. - Issues with installing during the agent installation, which disrupts connection to the session host.
-This article will guide you through solutions to these common scenarios and how to address connection issues.
+This article guides you through solutions to these common scenarios and how to address connection issues.
> [!NOTE] > For troubleshooting issues related to session connectivity and the Azure Virtual Desktop agent, we recommend you review the event logs on your session host virtual machines (VMs) by going to **Event Viewer** > **Windows Logs** > **Application**. Look for events that have one of the following sources to identify your issue:
This article will guide you through solutions to these common scenarios and how
## Error: The RDAgentBootLoader and/or Remote Desktop Agent Loader has stopped running
-If you're seeing any of the following issues, this means that the boot loader, which loads the agent, was unable to install the agent properly and the agent service isn't running on your session host VM:
+If you're seeing any of the following issues, it means that the boot loader, which loads the agent, was unable to install the agent properly and the agent service isn't running on your session host VM:
- **RDAgentBootLoader** is either stopped or not running. - There's no status for **Remote Desktop Agent Loader**.
If you're seeing any of the following issues, this means that the boot loader, w
To resolve this issue, start the RDAgent boot loader: 1. In the Services window, right-click **Remote Desktop Agent Loader**.
-1. Select **Start**. If this option is greyed out for you, you don't have administrator permissions and will need to get them to start the service.
+
+1. Select **Start**. If this option is greyed out for you, you don't have administrator permissions. You need to get those permissions in order to start the service.
+ 1. Wait 10 seconds, then right-click **Remote Desktop Agent Loader**.+ 1. Select **Refresh**.+ 1. If the service stops after you started and refreshed it, you may have a registration failure. For more information, see [INVALID_REGISTRATION_TOKEN](#error-invalid_registration_token). ## Error: INVALID_REGISTRATION_TOKEN
On your session host VM, go to **Event Viewer** > **Windows Logs** > **Applicati
To resolve this issue, create a valid registration token: 1. To create a new registration token, follow the steps in the [Generate a new registration key for the VM](#step-3-generate-a-new-registration-key-for-the-vm) section.+ 1. Open Registry Editor. + 1. Go to **HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\RDInfraAgent**.+ 1. Select **IsRegistered**. + 1. In the **Value data:** entry box, type **0** and select **Ok**. + 1. Select **RegistrationToken**. + 1. In the **Value data:** entry box, paste the registration token from step 1. > [!div class="mx-imgBorder"]
To resolve this issue, create a valid registration token:
``` 1. Go back to Registry Editor.+ 1. Go to **HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\RDInfraAgent**.
-1. Verify that **IsRegistered** is set to 1 and there is nothing in the data column for **RegistrationToken**.
+
+1. Verify that **IsRegistered** is set to 1 and there's nothing in the data column for **RegistrationToken**.
> [!div class="mx-imgBorder"] > ![Screenshot of IsRegistered 1](media/isregistered-registry.png) ## Error: Agent cannot connect to broker with INVALID_FORM
-On your session host VM, go to **Event Viewer** > **Windows Logs** > **Application**. If you see an event with ID 3277 with **INVALID_FORM** in the description, the agent can't connect to the broker or reach a particular endpoint. This may be because of certain firewall or DNS settings.
+On your session host VM, go to **Event Viewer** > **Windows Logs** > **Application**. If you see an event with ID 3277 with **INVALID_FORM** in the description, the agent can't connect to the broker or reach a particular endpoint. This issue may be because of certain firewall or DNS settings.
To resolve this issue, check that you can reach the two endpoints referred to as *BrokerURI* and *BrokerURIGlobal*:
-1. Open Registry Editor.
+1. Open Registry Editor.
+ 1. Go to **HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\RDInfraAgent**. + 1. Make note of the values for **BrokerURI** and **BrokerURIGlobal**. > [!div class="mx-imgBorder"] > ![Screenshot of broker uri and broker uri global](media/broker-uri.png) 1. Open a web browser and enter your value for *BrokerURI* in the address bar and add */api/health* to the end, for example `https://rdbroker-g-us-r0.wvd.microsoft.com/api/health`.+ 1. Open another tab in the browser and enter your value for *BrokerURIGlobal* in the address bar and add */api/health* to the end, for example `https://rdbroker.wvd.microsoft.com/api/health`.
-1. If your network isn't blocking the connection to the broker, both pages will load successfully and will show a message stating **RD Broker is Healthy**, as shown in the following screenshots:
+
+1. If your network isn't blocking the connection to the broker, both pages should load successfully and show a message stating **RD Broker is Healthy**, as shown in the following screenshots:
> [!div class="mx-imgBorder"] > ![Screenshot of successfully loaded broker uri access](media/broker-uri-web.png)
To resolve this issue, check that you can reach the two endpoints referred to as
> [!div class="mx-imgBorder"] > ![Screenshot of successfully loaded broker global uri access](media/broker-global.png)
-1. If the network is blocking broker connection, the pages will not load, as shown in the following screenshot.
+1. If the network is blocking broker connection, the pages won't load, as shown in the following screenshot.
> [!div class="mx-imgBorder"] > ![Screenshot of unsuccessful loaded broker access](media/unsuccessful-broker-uri.png)
To resolve this issue, check that you can reach the two endpoints referred to as
> [!div class="mx-imgBorder"] > ![Screenshot of unsuccessful loaded broker global access](media/unsuccessful-broker-global.png)
- You will need to unblock the required endpoints and then repeat steps 4 to 7. For more information, see [Required URL List](safe-url-list.md).
+ You must unblock the required endpoints and then repeat steps 4 to 7. For more information, see [Required URL List](safe-url-list.md).
-1. If this does not resolve your issue, make sure that you do not have any group policies with ciphers that block the agent to broker connection. Azure Virtual Desktop uses the same TLS 1.2 ciphers as [Azure Front Door](../frontdoor/concept-end-to-end-tls.md#supported-cipher-suites). For more information, see [Connection Security](network-connectivity.md#connection-security).
+1. If following the previous steps doesn't resolve your issue, make sure that you don't have any group policies with ciphers that block the agent-to-broker connection. Azure Virtual Desktop uses the same TLS 1.2 ciphers as [Azure Front Door](../frontdoor/concept-end-to-end-tls.md#supported-cipher-suites). For more information, see [Connection Security](network-connectivity.md#connection-security).
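If you prefer to test from the session host without a browser, the following PowerShell sketch performs the same check; replace the URLs with the *BrokerURI* and *BrokerURIGlobal* values you noted from the registry (the ones shown are only examples).

```powershell
# Sketch: request the broker health endpoints noted earlier. A 200 status code indicates the broker is reachable.
$brokerUri = "https://rdbroker-g-us-r0.wvd.microsoft.com/api/health"
$brokerUriGlobal = "https://rdbroker.wvd.microsoft.com/api/health"
(Invoke-WebRequest -Uri $brokerUri -UseBasicParsing).StatusCode
(Invoke-WebRequest -Uri $brokerUriGlobal -UseBasicParsing).StatusCode
```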
## Error: 3703 On your session host VM, go to **Event Viewer** > **Windows Logs** > **Application**. If you see an event with ID 3703 with **RD Gateway Url: is not accessible** in the description, the agent is unable to reach the gateway URLs. To successfully connect to your session host, you must allow network traffic to the URLs from the [Required URL List](safe-url-list.md). Also, make sure your firewall or proxy settings don't block these URLs. Unblocking these URLs is required to use Azure Virtual Desktop.
-To resolve this issue, verify access these to the required URLs by running the [Required URL Check tool](required-url-check-tool.md). If you're using Azure Firewall, see [Use Azure Firewall to protect Azure Virtual Desktop deployments.](../firewall/protect-azure-virtual-desktop.md) and [Azure Firewall DNS settings](../firewall/dns-settings.md) for more information on how to configure it for Azure Virtual Desktop.
+To resolve this issue, verify whether you can access the required URLs by running the [Required URL Check tool](required-url-check-tool.md). If you're using Azure Firewall, see [Use Azure Firewall to protect Azure Virtual Desktop deployments](../firewall/protect-azure-virtual-desktop.md) and [Azure Firewall DNS settings](../firewall/dns-settings.md) for more information on how to configure it for Azure Virtual Desktop.
## Error: 3019
-On your session host VM, go to **Event Viewer** > **Windows Logs** > **Application**. If you see an event with ID 3019, this means the agent can't reach the web socket transport URLs. To successfully connect to your session host and allow network traffic to bypass these restrictions, you must unblock the URLs listed in the [Required URL list](safe-url-list.md). Work with your networking team to make sure your firewall, proxy, and DNS settings aren't blocking these URLs. You can also check your network trace logs to identify where the Azure Virtual Desktop service is being blocked. If you open a Microsoft Support case for this particular issue, make sure to attach your network trace logs to the request.
+On your session host VM, go to **Event Viewer** > **Windows Logs** > **Application**. If you see an event with ID 3019, then the agent can't reach the web socket transport URLs. To successfully connect to your session host and allow network traffic to bypass these restrictions, you must unblock the URLs listed in the [Required URL list](safe-url-list.md). Work with your networking team to make sure your firewall, proxy, and DNS settings aren't blocking these URLs. You can also check your network trace logs to identify where the Azure Virtual Desktop service is being blocked. If you open a Microsoft Support case for this particular issue, make sure to attach your network trace logs to the request.
## Error: InstallationHealthCheckFailedException
-On your session host VM, go to **Event Viewer** > **Windows Logs** > **Application**. If you see an event with ID 3277 with **InstallationHealthCheckFailedException** in the description, this means the stack listener isn't working because the terminal server has toggled the registry key for the stack listener.
+On your session host VM, go to **Event Viewer** > **Windows Logs** > **Application**. If you see an event with ID 3277 with **InstallationHealthCheckFailedException** in the description, then the stack listener isn't working because the terminal server has toggled the registry key for the stack listener.
To resolve this issue:+ 1. Check to see if [the stack listener is working](#error-stack-listener-isnt-working-on-a-windows-10-2004-session-host-vm)+ 1. If the stack listener isn't working, [manually uninstall and reinstall the stack component](#error-session-host-vms-are-stuck-in-upgrading-state). ## Error: ENDPOINT_NOT_FOUND
-On your session host VM, go to **Event Viewer** > **Windows Logs** > **Application**. If you see an event with ID 3277 with **ENDPOINT_NOT_FOUND** in the description, this means the broker couldn't find an endpoint to establish a connection with. This connection issue can happen for one of the following reasons:
+On your session host VM, go to **Event Viewer** > **Windows Logs** > **Application**. If you see an event with ID 3277 with **ENDPOINT_NOT_FOUND** in the description, then the broker couldn't find an endpoint to establish a connection with. This connection issue can happen for one of the following reasons:
- There aren't any session host VMs in your host pool. - The session host VMs in your host pool aren't active.
On your session host VM, go to **Event Viewer** > **Windows Logs** > **Applicati
To resolve this issue: 1. Make sure the VM is powered on and hasn't been removed from the host pool.+ 1. Make sure that the VM hasn't exceeded the max session limit.+ 1. Make sure the [agent service is running](#error-the-rdagentbootloader-andor-remote-desktop-agent-loader-has-stopped-running) and the [stack listener is working](#error-stack-listener-isnt-working-on-a-windows-10-2004-session-host-vm).+ 1. Make sure [the agent can connect to the broker](#error-agent-cannot-connect-to-broker-with-invalid_form).+ 1. Make sure [your VM has a valid registration token](#error-invalid_registration_token).+ 1. Make sure [the VM registration token hasn't expired](./faq.yml). ## Error: InstallMsiException
On your session host VM, go to **Event Viewer** > **Windows Logs** > **Applicati
To check whether group policy is blocking `msiexec.exe` from running: 1. Open Resultant Set of Policy by running **rsop.msc** from an elevated command prompt.
-1. In the **Resultant Set of Policy** window that pops up, go to **Computer Configuration > Administrative Templates > Windows Components > Windows Installer > Turn off Windows Installer**. If the state is **Enabled**, work with your Active Directory team to allow `msiexec.exe` to run.
+
+1. In the **Resultant Set of Policy** window that pops up, go to **Computer Configuration** > **Administrative Templates** > **Windows Components** > **Windows Installer** > **Turn off Windows Installer**. If the state is **Enabled**, work with your Active Directory team to allow `msiexec.exe` to run.
> [!div class="mx-imgBorder"] > ![Screenshot of Windows Installer policy in Resultant Set of Policy](media/gpo-policy.png) > [!NOTE]
- > This isn't a comprehensive list of policies, just the one we're currently aware of.
+ > This list isn't comprehensive; it only includes the policies we're currently aware of.
## Error: Win32Exception On your session host VM, go to **Event Viewer** > **Windows Logs** > **Application**. If you see an event with ID 3277 with **InstallMsiException** in the description, a policy is blocking `cmd.exe` from launching. Blocking this program prevents you from running the console window, which is what you need to use to restart the service whenever the agent updates. 1. Open Resultant Set of Policy by running **rsop.msc** from an elevated command prompt.+ 1. In the **Resultant Set of Policy** window that pops up, go to **User Configuration > Administrative Templates > System > Prevent access to the command prompt**. If the state is **Enabled**, work with your Active Directory team to allow `cmd.exe` to run. ## Error: Stack listener isn't working on a Windows 10 2004 session host VM
-On your session host VM, from a command prompt run `qwinsta.exe` and make note of the version number that appears next to **rdp-sxs** in the *SESSIONNAME* column. If the *STATE* column for **rdp-tcp** and **rdp-sxs** entries isn't **Listen**, or if **rdp-tcp** and **rdp-sxs** entries aren't listed at all, it means that there's a stack issue. Stack updates get installed along with agent updates, but if this hasn't been successful, the Azure Virtual Desktop Listener won't work.
+On your session host VM, from a command prompt run `qwinsta.exe` and make note of the version number that appears next to **rdp-sxs** in the *SESSIONNAME* column. If the *STATE* column for **rdp-tcp** and **rdp-sxs** entries isn't **Listen**, or if **rdp-tcp** and **rdp-sxs** entries aren't listed at all, it means that there's a stack issue. Stack updates get installed along with agent updates, but if the update was unsuccessful, the Azure Virtual Desktop Listener won't work.
To resolve this issue: 1. Open the Registry Editor.+ 1. Go to **HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Terminal Server\WinStations**.+ 1. Under **WinStations** you may see several folders for different stack versions, select a folder that matches the version information you saw when running `qwinsta.exe` in a command prompt.
- 1. Find **fReverseConnectMode** and make sure its data value is **1**. Also make sure that **fEnableWinStation** is set to **1**.
+ - Find **fReverseConnectMode** and make sure its data value is **1**. Also make sure that **fEnableWinStation** is set to **1**.
> [!div class="mx-imgBorder"] > ![Screenshot of fReverseConnectMode](media/fenable-2.png)
- 1. If **fReverseConnectMode** isn't set to **1**, select **fReverseConnectMode** and enter **1** in its value field.
- 1. If **fEnableWinStation** isn't set to **1**, select **fEnableWinStation** and enter **1** into its value field.
+ - If **fReverseConnectMode** isn't set to **1**, select **fReverseConnectMode** and enter **1** in its value field.
+ - If **fEnableWinStation** isn't set to **1**, select **fEnableWinStation** and enter **1** into its value field.
+ 1. Repeat the previous steps for each folder that matches the version information you saw when running `qwinsta.exe` in a command prompt. > [!TIP]
To resolve this issue:
> - Create a group policy object (GPO) that sets the registry key value for the machines that need the change. 1. Restart your session host VM.+ 1. Open the Registry Editor.+ 1. Go to **HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Terminal Server\ClusterSettings**.+ 1. Under **ClusterSettings**, find **SessionDirectoryListener** and make sure its data value is `rdp-sxs<version number`, where `<version number` matches the version information you saw when running `qwinsta.exe` in a command prompt .
-1. If **SessionDirectoryListener** isn't set to `rdp-sxs<version number`, you'll need to follow the steps in the section [Your issue isn't listed here or wasn't resolved](#your-issue-isnt-listed-here-or-wasnt-resolved) below.
+
+2. If **SessionDirectoryListener** isn't set to `rdp-sxs<version number>`, you'll need to follow the steps in the section [Your issue isn't listed here or wasn't resolved](#your-issue-isnt-listed-here-or-wasnt-resolved).
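As an optional shortcut, the following PowerShell sketch reads the same values described in the steps above. The `rdp-sxs<version number>` folder name is a placeholder; use the version you noted from `qwinsta.exe`.

```powershell
# Sketch: inspect the stack listener registry values described above (run as administrator).
# Replace rdp-sxs<version number> with the folder that matches your qwinsta.exe output.
$winStation = 'HKLM:\SYSTEM\CurrentControlSet\Control\Terminal Server\WinStations\rdp-sxs<version number>'
Get-ItemProperty -Path $winStation -Name fReverseConnectMode, fEnableWinStation

$clusterSettings = 'HKLM:\SYSTEM\CurrentControlSet\Control\Terminal Server\ClusterSettings'
Get-ItemProperty -Path $clusterSettings -Name SessionDirectoryListener
```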
## Error: DownloadMsiException On your session host VM, go to **Event Viewer** > **Windows Logs** > **Application**. If you see an event with ID 3277 with **DownloadMsiException** in the description, there isn't enough space on the disk for the RDAgent. To resolve this issue, make space on your disk by:
- - Deleting files that are no longer in user.
+
+ - Deleting files that are no longer in use.
- Increasing the storage capacity of your session host VM. ## Error: Agent fails to update with MissingMethodException
-On your session host VM, go to **Event Viewer** > **Windows Logs** > **Application**. If you see an event with ID 3389 with **MissingMethodException: Method not found** in the description, this means the Azure Virtual Desktop agent didn't update successfully and reverted to an earlier version. This may be because the version number of the .NET framework currently installed on your VMs is lower than 4.7.2. To resolve this issue, you need to upgrade the .NET to version 4.7.2 or later by following the installation instructions in the [.NET Framework documentation](https://support.microsoft.com/topic/microsoft-net-framework-4-7-2-offline-installer-for-windows-05a72734-2127-a15d-50cf-daf56d5faec2).
+On your session host VM, go to **Event Viewer** > **Windows Logs** > **Application**. If you see an event with ID 3389 with **MissingMethodException: Method not found** in the description, then the Azure Virtual Desktop agent didn't update successfully and reverted to an earlier version. This issue may happen because the version of the .NET Framework currently installed on your VMs is lower than 4.7.2. To resolve this issue, upgrade the .NET Framework to version 4.7.2 or later by following the installation instructions in the [.NET Framework documentation](https://support.microsoft.com/topic/microsoft-net-framework-4-7-2-offline-installer-for-windows-05a72734-2127-a15d-50cf-daf56d5faec2).
## Error: Session host VMs are stuck in Upgrading state
If the status listed for session hosts in your host pool always says **Unavailab
To resolve this issue, first reinstall the side-by-side stack: 1. Sign in to your session host VM as an administrator.+ 1. From an elevated PowerShell prompt run `qwinsta.exe` and make note of the version number that appears next to **rdp-sxs** in the *SESSIONNAME* column. If the *STATE* column for **rdp-tcp** and **rdp-sxs** entries isn't **Listen**, or if **rdp-tcp** and **rdp-sxs** entries aren't listed at all, it means that there's a stack issue. 1. Run the following command to stop the RDAgentBootLoader service:
To resolve this issue, first reinstall the side-by-side stack:
``` 1. Go to **Control Panel** > **Programs** > **Programs and Features**, or on Windows 11 go to the **Settings App > Apps**.+ 1. Uninstall the latest version of the **Remote Desktop Services SxS Network Stack** or the version listed in Registry Editor in **HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Terminal Server\WinStations** under the value for **ReverseConnectionListener**.+ 1. Back at the PowerShell prompt, run the following commands to add the file path of the latest installer available on your session host VM for the side-by-side stack to a variable and list its name: ```powershell
To resolve this issue, first reinstall the side-by-side stack:
``` 1. Restart your session host VM.
-1. From a command prompt run `qwinsta.exe` again and verify the *STATE* column for **rdp-tcp** and **rdp-sxs** entries is **Listen**. If not, you will need to [re-register your VM and reinstall the agent](#your-issue-isnt-listed-here-or-wasnt-resolved) component.
+
+1. From a command prompt run `qwinsta.exe` again and verify the *STATE* column for **rdp-tcp** and **rdp-sxs** entries is **Listen**. If not, you must [re-register your VM and reinstall the agent](#your-issue-isnt-listed-here-or-wasnt-resolved) component.
## Error: Session host VMs are stuck in Unavailable state
netsh winhttp set proxy proxy-server="http=<customerwebproxyhere>" bypass-list="
Your session host VMs may be at their connection limit and can't accept new connections. To resolve this issue, either:-- Decrease the max session limit. This ensures that resources are more evenly distributed across session hosts and will prevent resource depletion.+
+- Decrease the max session limit. This change ensures that resources are more evenly distributed across session hosts and prevents resource depletion.
- Increase the resource capacity of the session host VMs. ## Error: Operating a Pro VM or other unsupported OS
-The side-by-side stack is only supported by Windows Enterprise or Windows Server SKUs, which means that operating systems like Pro VM aren't. If you don't have an Enterprise or Server SKU, the stack will be installed on your VM but won't be activated, so you won't see it show up when you run **qwinsta** in your command line.
+The side-by-side stack is only supported on Windows Enterprise and Windows Server SKUs, which means that operating systems such as Windows Pro aren't supported. If you don't have an Enterprise or Server SKU, the stack installs on your VM but isn't activated, so it won't appear when you run **qwinsta** in your command line.
To resolve this issue, [create session host VMs](expand-existing-host-pool.md) using a [supported operating system](prerequisites.md#operating-systems-and-licenses).
To resolve this issue, [create session host VMs](expand-existing-host-pool.md) u
The name of your session host VM has already been registered and is probably a duplicate. To resolve this issue:+ 1. Follow the steps in the [Remove the session host from the host pool](#step-2-remove-the-session-host-from-the-host-pool) section.+ 1. [Create another VM](expand-existing-host-pool.md#add-virtual-machines-with-the-azure-portal). Make sure to choose a unique name for this VM.+ 1. Go to the [Azure portal](https://portal.azure.com) and open the **Overview** page for the host pool your VM was in. + 1. Open the **Session Hosts** tab and check to make sure all session hosts are in that host pool.+ 1. Wait for 5-10 minutes for the session host status to say **Available**. > [!div class="mx-imgBorder"]
To resolve this issue:
## Your issue isn't listed here or wasn't resolved
-If you can't find your issue in this article or the instructions didn't help you, we recommend you uninstall, reinstall, and re-register the Azure Virtual Desktop Agent. The instructions in this section will show you how to reregister your session host VM to the Azure Virtual Desktop service by:
-1. Uninstalling all agent, boot loader, and stack components
-1. Removing the session host from the host pool
-1. Generating a new registration key for the VM
-1. Reinstalling the Azure Virtual Desktop Agent and boot loader.
+If you can't find your issue in this article or the instructions didn't help you, we recommend you uninstall, reinstall, and re-register the Azure Virtual Desktop Agent. The instructions in this section show you how to reregister your session host VM to the Azure Virtual Desktop service by:
+
+1. Uninstalling all agent, boot loader, and stack components.
+
+2. Removing the session host from the host pool.
+
+3. Generating a new registration key for the VM.
+
+4. Reinstalling the Azure Virtual Desktop Agent and boot loader.
Follow these instructions in this section if one or more of the following scenarios apply to you:
Follow these instructions in this section if one or more of the following scenar
### Step 1: Uninstall all agent, boot loader, and stack component programs Before reinstalling the agent, boot loader, and stack, you must uninstall any existing components from your VM. To uninstall all agent, boot loader, and stack component programs:+ 1. Sign in to your session host VM as an administrator.
-2. Go to **Control Panel** > **Programs** > **Programs and Features**, or on Windows 11 go to the **Settings App > Apps**.
-3. Uninstall the following programs, then restart your session host VM:
+
+1. Go to **Control Panel** > **Programs** > **Programs and Features**, or on Windows 11 go to the **Settings App > Apps**.
+
+1. Uninstall the following programs, then restart your session host VM:
> [!CAUTION]
- > When uninstalling **Remote Desktop Services SxS Network Stack**, you'll be prompted that *Remote Desktop Services* and *Remote Desktop Services UserMode Port Redirector* should be closed. If you're connected to the session host VM using RDP, select **Do not close applications** then select **OK**, otherwise your RDP connection will be closed.
+ > When uninstalling **Remote Desktop Services SxS Network Stack**, you'll be prompted that *Remote Desktop Services* and *Remote Desktop Services UserMode Port Redirector* should be closed. If you're connected to the session host VM using RDP, select **Do not close applications**, then select **OK**; otherwise, your RDP connection won't work.
> > [!div class="mx-imgBorder"] > ![Screenshot showing prompt that Remote Desktop Services and Remote Desktop Services UserMode Port Redirector should be closed](media/uninstall-remote-desktop-services-sxs-network-stack.png)
Before reinstalling the agent, boot loader, and stack, you must uninstall any ex
### Step 2: Remove the session host from the host pool
-When you remove the session host from the host pool, the session host is no longer registered to that host pool. This acts as a reset for the session host registration. To remove the session host from the host pool:
+When you remove the session host from the host pool, the session host is no longer registered to that host pool. This change acts as a reset for the session host registration. To remove the session host from the host pool:
1. Sign in to the [Azure portal](https://portal.azure.com).+ 1. In the search bar, type *Azure Virtual Desktop* and select the matching service entry.+ 1. Select **Host pools** and select the name of the host pool that your session host VM is in.+ 1. Select **Session Hosts** to see the list of all session hosts in that host pool.+ 1. Look at the list of session hosts and tick the box next to the session host that you want to remove.+ 1. Select **Remove**. > [!div class="mx-imgBorder"]
When you remove the session host from the host pool, the session host is no long
### Step 3: Generate a new registration key for the VM You must generate a new registration key that is used to re-register your session VM to the host pool and to the service. To generate a new registration key for the VM:+ 1. Sign in to the [Azure portal](https://portal.azure.com).+ 1. In the search bar, type *Azure Virtual Desktop* and select the matching service entry.+ 1. Select **Host pools** and select the name of the host pool that your session host VM is in.+ 1. On the **Overview** blade, select **Registration key**. > [!div class="mx-imgBorder"] > ![Screenshot of registration key in portal](media/reg-key.png) 1. Open the **Registration key** tab and select **Generate new key**.+ 1. Enter the expiration date and then select **Ok**. > [!NOTE]
You must generate a new registration key that is used to re-register your sessio
### Step 4: Reinstall the agent and boot loader
-By reinstalling the most updated version of the agent and boot loader, the side-by-side stack and Geneva monitoring agent automatically get installed as well. To reinstall the agent and boot loader:
+Reinstalling the latest version of the agent and boot loader also automatically installs the side-by-side stack and Geneva monitoring agent. To reinstall the agent and boot loader:
1. Sign in to your session host VM as an administrator and run the agent installer and bootloader for your session host VM:
By reinstalling the most updated version of the agent and boot loader, the side-
> ![Screenshot of pasted registration token](media/pasted-agent-token.png) 1. Run the boot loader installer.+ 1. Restart your session VM. + 1. Sign in to the [Azure portal](https://portal.azure.com).+ 1. In the search bar, enter **Azure Virtual Desktop** and select the matching service entry.+ 1. Select **Host pools** and select the name of the host pool that your session host VM is in.+ 1. Select **Session Hosts** to see the list of all session hosts in that host pool.+ 1. You should now see the session host registered in the host pool with the status **Available**. > [!div class="mx-imgBorder"]
This registry key prevents the agent from installing the side-by-side stack, whi
To resolve this issue, you'll need to remove the key: 1. Remove the DisableRegistryTools key from the three previously listed locations.+ 1. Uninstall and remove the affected side-by-side stack installation from the **Apps & Features** folder.+ 1. Remove the affected side-by-side stack's registry keys.+ 1. Restart your VM.+ 1. Start the agent and let it auto-install the side-by-side stack. ## Next steps
To resolve this issue, you'll need to remove the key:
If the issue continues, create a support case and include detailed information about the problem you're having and any actions you've taken to try to resolve it. The following list includes other resources you can use to troubleshoot issues in your Azure Virtual Desktop deployment. - For an overview on troubleshooting Azure Virtual Desktop and the escalation tracks, see [Troubleshooting overview, feedback, and support](troubleshoot-set-up-overview.md).-- To troubleshoot issues while creating a host pool in a Azure Virtual Desktop environment, see [Environment and host pool creation](troubleshoot-set-up-issues.md).
+- To troubleshoot issues while creating a host pool in an Azure Virtual Desktop environment, see [Environment and host pool creation](troubleshoot-set-up-issues.md).
- To troubleshoot issues while configuring a virtual machine (VM) in Azure Virtual Desktop, see [Session host virtual machine configuration](troubleshoot-vm-configuration.md). - To troubleshoot issues with Azure Virtual Desktop client connections, see [Azure Virtual Desktop service connections](troubleshoot-service-connection.md). - To troubleshoot issues when using PowerShell with Azure Virtual Desktop, see [Azure Virtual Desktop PowerShell](troubleshoot-powershell.md).
virtual-desktop Troubleshoot Statuses Checks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/troubleshoot-statuses-checks.md
Title: Azure Virtual Desktop session host statuses and health checks
description: How to troubleshoot the failed session host statuses and failed health checks Previously updated : 04/19/2023 Last updated : 04/21/2023 # Azure Virtual Desktop session host statuses and health checks
-The Azure Virtual Desktop Agent regularly runs health checks on the session host. The agent assigns these health checks various statuses that include descriptions of how to fix common issues. This article will tell you what each status means and how to act on them during a health check.
+The Azure Virtual Desktop Agent regularly runs health checks on the session host. The agent assigns these health checks various statuses that include descriptions of how to fix common issues. This article tells you what each status means and how to act on them during a health check.
## Session host statuses The following table lists all statuses for session hosts in the Azure portal and describes each potential status. *Available* is considered the ideal default status. Any other statuses represent potential issues that you need to take care of to ensure the service works properly. >[!NOTE]
->If an issue is listed as "non-fatal," the service can still run with the issue active. However, we recommend you resolve the issue as soon as possible to prevent future issues. If an issue is listed as "fatal," then it will prevent the service from running. You must resolve all fatal issues to make sure your users can access the session host.
+>If an issue is listed as "non-fatal," the service can still run with the issue active. However, we recommend you resolve the issue as soon as possible to prevent future issues. If an issue is listed as "fatal," then it prevents the service from running. You must resolve all fatal issues to make sure your users can access the session host.
| Session host status | Description | How to resolve related issues | ||||
-|Available| This status means that the session host passed all health checks and is available to accept user connections. If a session host has reached its maximum session limit but has passed health checks, it will still be listed as ΓÇ£Available." |N/A|
+|Available| This status means that the session host passed all health checks and is available to accept user connections. If a session host has reached its maximum session limit but has passed health checks, it's still listed as "Available." |N/A|
|Needs Assistance|The session host didn't pass one or more of the following non-fatal health checks: the Geneva Monitoring Agent health check, the Azure Instance Metadata Service (IMDS) health check, or the URL health check. You can find which health checks have failed in the session hosts detailed view in the Azure portal. |Follow the directions in [Error: VMs are stuck in "Needs Assistance" state](troubleshoot-agent.md#error-vms-are-stuck-in-the-needs-assistance-state) to resolve the issue.|
-|Shutdown| The session host has been shut down. If the agent enters a shutdown state before connecting to the broker, its status will change to *Unavailable*. If you've shut down your session host and see an *Unavailable* status, that means the session host shut down before it could update the status, and doesn't indicate an issue. You should use this status with the [VM instance view API](/rest/api/compute/virtual-machines/instance-view?tabs=HTTP#virtualmachineinstanceview) to determine the power state of the VM. |Turn on the session host. |
+|Shutdown| The session host has been shut down. If the agent enters a shutdown state before connecting to the broker, its status changes to *Unavailable*. If you've shut down your session host and see an *Unavailable* status, that means the session host shut down before it could update the status, and doesn't indicate an issue. You should use this status with the [VM instance view API](/rest/api/compute/virtual-machines/instance-view?tabs=HTTP#virtualmachineinstanceview) to determine the power state of the VM. |Turn on the session host. |
|Unavailable| The session host is either turned off or hasn't passed fatal health checks, which prevents user sessions from connecting to this session host. |If the session host is off, turn it back on. If the session host didn't pass the domain join check or side-by-side stack listener health checks, refer to the table in [Health check](#health-check) for ways to resolve the issue. If the status is still "Unavailable" after following those directions, open a support case.|
-|Upgrade Failed| This status means that the Azure Virtual Desktop Agent couldn't update or upgrade. This doesn't affect new nor existing user sessions. |Follow the instructions in the [Azure Virtual Desktop Agent troubleshooting article](troubleshoot-agent.md).|
-|Upgrading| This status means that the agent upgrade is in progress. This status will be updated to ΓÇ£AvailableΓÇ¥ once the upgrade is done and the session host can accept connections again.|If your session host has been stuck in the "Upgrading" state, then [reinstall the agent](troubleshoot-agent.md#error-session-host-vms-are-stuck-in-upgrading-state).|
+|Upgrade Failed| This status means that the Azure Virtual Desktop Agent couldn't update or upgrade. This status doesn't affect new nor existing user sessions. |Follow the instructions in the [Azure Virtual Desktop Agent troubleshooting article](troubleshoot-agent.md).|
+|Upgrading| This status means that the agent upgrade is in progress. This status updates to "Available" once the upgrade is done and the session host can accept connections again.|If your session host is stuck in the "Upgrading" state, then [reinstall the agent](troubleshoot-agent.md#error-session-host-vms-are-stuck-in-upgrading-state).|
## Health check
The health check is a test run by the agent on the session host. The following t
| Health check name | Description | What happens if the session host doesn't pass the check | |||| | Domain joined | Verifies that the session host is joined to a domain controller. | If this check fails, users won't be able to connect to the session host. To solve this issue, join your session host to a domain. |
-| Geneva Monitoring Agent | Verifies that the session host has a healthy monitoring agent by checking if the monitoring agent is installed and running in the expected registry location. | If this check fails, it's semi-fatal. There may be successful connections, but they'll contain no logging information. To resolve this, make sure a monitoring agent is installed. If it's already installed, contact Microsoft support. |
+| Geneva Monitoring Agent | Verifies that the session host has a healthy monitoring agent by checking if the monitoring agent is installed and running in the expected registry location. | If this check fails, it's semi-fatal. There may be successful connections, but they'll contain no logging information. To resolve this issue, make sure a monitoring agent is installed. If it's already installed, contact Microsoft support. |
| Azure Instance Metadata Service (IMDS) reachable | Verifies that the service can access the IMDS endpoint. | If this check fails, it's semi-fatal. There may be successful connections, but they won't contain logging information. To resolve this issue, you'll need to reconfigure your networking, firewall, or proxy settings. |
-| Side-by-side (SxS) Stack Listener | Verifies that the side-by-side stack is up and running, listening, and ready to receive connections. | If this check fails, it's fatal, and users won't be able to connect to the session host. Try restarting your virtual machine (VM). If this doesn't work, contact Microsoft support. |
-| UrlsAccessibleCheck | Verifies that the required Azure Virtual Desktop service and Geneva URLs are reachable from the session host, including the RdTokenUri, RdBrokerURI, RdDiagnosticsUri, and storage blob URLs for Geneva agent monitoring. | If this check fails, it isn't always fatal. Connections may succeed, but if certain URLs are inaccessible, the agent can't apply updates or log diagnostic information. To resolve this, follow the directions in [Error: VMs are stuck in the Needs Assistance state](troubleshoot-agent.md#error-vms-are-stuck-in-the-needs-assistance-state). |
-| TURN (Traversal Using Relay NAT) Relay Access Health Check | When using [RDP Shortpath for public networks](rdp-shortpath.md?tabs=public-networks#how-rdp-shortpath-works) with an indirect connection, TURN uses User Datagram Protocol (UDP) to relay traffic between the client and session host through an intermediate server when direct connection isn't possible. | If this check fails, it's not fatal. Connections will revert to the websocket TCP and the session host will enter the "Needs assistance" state. To resolve the issue, follow the instructions in [Disable RDP shortpath on managed and unmanaged windows clients using group policy](configure-rdp-shortpath.md?tabs=public-networks#disable-rdp-shortpath-on-managed-and-unmanaged-windows-clients-using-group-policy). |
-| App attach health check | Verifies that the [MSIX app attach](what-is-app-attach.md) service is working as intended during package staging or destaging. | If this check fails, it isn't fatal. However, certain apps will stop working for end-users. |
+| Side-by-side (SxS) Stack Listener | Verifies that the side-by-side stack is up and running, listening, and ready to receive connections. | If this check fails, it's fatal, and users won't be able to connect to the session host. Try restarting your virtual machine (VM). If restarting doesn't work, contact Microsoft support. |
+| UrlsAccessibleCheck | Verifies that the required Azure Virtual Desktop service and Geneva URLs are reachable from the session host, including the RdTokenUri, RdBrokerURI, RdDiagnosticsUri, and storage blob URLs for Geneva agent monitoring. | If this check fails, it isn't always fatal. Connections may succeed, but if certain URLs are inaccessible, the agent can't apply updates or log diagnostic information. To resolve this issue, follow the directions in [Error: VMs are stuck in the Needs Assistance state](troubleshoot-agent.md#error-vms-are-stuck-in-the-needs-assistance-state). |
+| TURN (Traversal Using Relay NAT) Relay Access Health Check | When using [RDP Shortpath for public networks](rdp-shortpath.md?tabs=public-networks#how-rdp-shortpath-works) with an indirect connection, TURN uses User Datagram Protocol (UDP) to relay traffic between the client and session host through an intermediate server when direct connection isn't possible. | If this check fails, it's not fatal. Connections revert to the websocket TCP and the session host enters the "Needs assistance" state. To resolve the issue, follow the instructions in [Disable RDP Shortpath on managed and unmanaged windows clients using group policy](configure-rdp-shortpath.md?tabs=public-networks#disable-rdp-shortpath-on-managed-and-unmanaged-windows-clients-using-group-policy). |
+| App attach health check | Verifies that the [MSIX app attach](what-is-app-attach.md) service is working as intended during package staging or destaging. | If this check fails, it isn't fatal. However, certain apps stop working for end-users. |
| Domain reachable | Verifies the domain the session host is joined to is still reachable. | If this check fails, it's fatal. The service won't be able to connect if it can't reach the domain. | | Domain trust check | Verifies the session host isn't experiencing domain trust issues that could prevent authentication when a user connects to a session. | If this check fails, it's fatal. The service won't be able to connect if it can't reach the authentication domain for the session host. | | FSLogix health check | Verifies the FSLogix service is up and running to make sure user profiles are loading properly in the session. | If this check fails, it's fatal. Even if the connection succeeds, the profile won't load, forcing the user to use a temporary profile instead. | | Metadata service check | Verifies the metadata service is accessible and returns compute properties. | If this check fails, it isn't fatal. |
-| Monitoring agent check | Verifies that the required monitoring agent is running. | If this check fails, it isn't fatal. Connections will still work, but the monitoring agent will either be missing or running an earlier version. |
-| Supported encryption check | Checks the value of the SecurityLayer registration key. | If the key's value is 0, the check will fail and is fatal. If the value is 1, the check will fail but be non-fatal. |
+| Monitoring agent check | Verifies that the required monitoring agent is running. | If this check fails, it isn't fatal. Connections still work, but the monitoring agent is either missing or running an earlier version. |
+| Supported encryption check | Checks the value of the SecurityLayer registration key. | If the key's value is 0, the check fails and is fatal. If the value is 1, the check fails but is non-fatal. |
| Agent provisioning service health check | Verifies the provisioning status of the Azure Virtual Desktop agent installation. | If this check fails, it's fatal. | | Stack provisioning service health check | Verifies the provisioning status of the Azure Virtual Desktop Stack installation. | If this check fails, it's fatal. | | Monitoring agent provisioning service health check | Verifies the provisioning status of the Monitoring agent installation | If this check fails, it's fatal. |
virtual-desktop Whats New Msixmgr https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/whats-new-msixmgr.md
In this release, we've made the following changes:
## Next steps -- Learn how to [use the MSIXMGR tool](app-attach-msixmgr.md).
+To learn more about the MSIXMGR tool, check out these articles:
+
+- [Using the MSIXMGR tool](app-attach-msixmgr.md)
+- [MSIXMGR tool parameters](msixmgr-tool-syntax-description.md)
virtual-machine-scale-sets Tutorial Modify Scale Sets Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/tutorial-modify-scale-sets-cli.md
The exact presentation of the output depends on the options you provide to the c
}, "storageProfile": { "imageReference": {
- "offer": "UbuntuServer",
+ "offer": "0001-com-ubuntu-server-jammy",
"publisher": "Canonical",
- "sku": "18.04-LTS",
+ "sku": "22_04-lts",
"version": "latest" }, "osDisk": {
az vmss create \
--resource-group myResourceGroup \ --name myScaleSet \ --orchestration-mode flexible \
- --image UbuntuLTS \
+ --image RHEL \
--admin-username azureuser \ --generate-ssh-keys \ --upgrade-policy Rolling \
The exact presentation of the output depends on the options you provide to the c
"storageProfile": { "dataDisks": [], "imageReference": {
- "exactVersion": "18.04.202210180",
- "offer": "UbuntuServer",
+ "exactVersion": "22.04.202204200",
+ "offer": "0001-com-ubuntu-server-jammy",
"publisher": "Canonical",
- "sku": "18.04-LTS",
+ "sku": "22_04-lts",
"version": "latest" }, "osDisk": {
Running [az vm show](/cli/azure/vm#az-vm-show) again, we now will see that the V
There are times when you might want to add a new VM to your scale set but want different configuration options than those listed in the scale set model. VMs can be added to a scale set during creation by using the [az vm create](/cli/azure/vm#az-vm-create) command and specifying the scale set name you want the instance added to. ```azurecli-interactive
-az vm create --name myNewInstance --resource-group myResourceGroup --vmss myScaleSet --image UbuntuLTS
+az vm create --name myNewInstance --resource-group myResourceGroup --vmss myScaleSet --image RHEL
``` ```output
az vmss reimage --resource-group myResourceGroup --name myScaleSet --instance-id
``` ## Update the OS image for your scale set
-You may have a scale set that runs an old version of Ubuntu LTS 18.04. You want to update to a newer version of Ubuntu LTS 16.04, such as version *18.04.202210180*. The image reference version property isn't part of a list, so you can directly modify these properties using [az vmss update](/cli/azure/vmss#az-vmss-update).
+You may have a scale set that runs an older version of Ubuntu and want to update to a newer version, such as *22.04.202204200*. The image reference version property isn't part of a list, so you can directly modify it using [az vmss update](/cli/azure/vmss#az-vmss-update).
```azurecli
-az vmss update --resource-group myResourceGroup --name myScaleSet --set virtualMachineProfile.storageProfile.imageReference.version=18.04.202210180
+az vmss update --resource-group myResourceGroup --name myScaleSet --set virtualMachineProfile.storageProfile.imageReference.version=22.04.202204200
Alternatively, you may want to change the image your scale set uses, for example, to update or change a custom image. You can change the image by updating the image reference ID property. This property isn't part of a list, so you can directly modify it using [az vmss update](/cli/azure/vmss#az-vmss-update).
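For example, here's a sketch of pointing the scale set at a different image by its resource ID; the gallery image ID shown is a placeholder for illustration only.

```azurecli
# Sketch: point the scale set at a different image by resource ID (placeholder ID).
az vmss update \
  --resource-group myResourceGroup \
  --name myScaleSet \
  --set virtualMachineProfile.storageProfile.imageReference.id="/subscriptions/<subscription-id>/resourceGroups/myGalleryRG/providers/Microsoft.Compute/galleries/myGallery/images/myImageDefinition"
```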
virtual-machines Azure Cli Change Subscription Marketplace https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/azure-cli-change-subscription-marketplace.md
Title: Azure CLI sample for moving a Marketplace Azure VM to another subscriptio
description: Azure CLI sample for moving an Azure Marketplace Virtual Machine to a different subscription. - Previously updated : 01/29/2021+ Last updated : 04/20/2023 ms.devlang: azurecli
virtual-machines Custom Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/custom-data.md
Azure currently supports two provisioning agents:
### Can I update custom data after the VM has been created? For single VMs, you can't update custom data in the VM model. But for Virtual Machine Scale Sets, you can update custom data. For more information, see [Modify a Scale Set](../virtual-machine-scale-sets/virtual-machine-scale-sets-upgrade-scale-set.md#how-to-update-global-scale-set-properties). When you update custom data in the model for a Virtual Machine Scale Set:
-* Existing instances in the scale set don't get the updated custom data until they're reimaged.
-* Existing instances in the scale set that is upgraded don't get the updated custom data.
+* Existing instances in the scale set don't get the updated custom data until they're updated to the latest model and reimaged.
* New instances receive the new custom data. ### Can I place sensitive values in custom data?
virtual-machines Disk Encryption Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/disk-encryption-overview.md
Title: Overview of managed disk encryption options description: Overview of managed disk encryption options Previously updated : 03/28/2023 Last updated : 04/05/2023
There are several types of encryption available for your managed disks, includin
- **Encryption at host** is a Virtual Machine option that enhances Azure Disk Storage Server-Side Encryption to ensure that all temp disks and disk caches are encrypted at rest and flow encrypted to the Storage clusters. For full details, see [Encryption at host - End-to-end encryption for your VM data](./disk-encryption.md#encryption-at-hostend-to-end-encryption-for-your-vm-data). -- **Azure Disk Encryption** helps protect and safeguard your data to meet your organizational security and compliance commitments. ADE encrypts the OS and data disks of Azure virtual machines (VMs) inside your VMs by using the [DM-Crypt](https://wikipedia.org/wiki/Dm-crypt) feature of Linux or the [BitLocker](https://wikipedia.org/wiki/BitLocker) feature of Windows. ADE is integrated with Azure Key Vault to help you control and manage the disk encryption keys and secrets. For full details, see [Azure Disk Encryption for Linux VMs](./linux/disk-encryption-overview.md) or [Azure Disk Encryption for Windows VMs](./windows/disk-encryption-overview.md).
+- **Azure Disk Encryption** helps protect and safeguard your data to meet your organizational security and compliance commitments. ADE encrypts the OS and data disks of Azure virtual machines (VMs) inside your VMs by using the [DM-Crypt](https://wikipedia.org/wiki/Dm-crypt) feature of Linux or the [BitLocker](https://wikipedia.org/wiki/BitLocker) feature of Windows. ADE is integrated with Azure Key Vault to help you control and manage the disk encryption keys and secrets, with the option to encrypt with a key encryption key (KEK). For full details, see [Azure Disk Encryption for Linux VMs](./linux/disk-encryption-overview.md) or [Azure Disk Encryption for Windows VMs](./windows/disk-encryption-overview.md).
- **Confidential disk encryption** binds disk encryption keys to the virtual machine's TPM and makes the protected disk content accessible only to the VM. The TPM and VM guest state is always encrypted in attested code using keys released by a secure protocol that bypasses the hypervisor and host operating system. Currently only available for the OS disk. Encryption at host may be used for other disks on a Confidential VM in addition to Confidential Disk Encryption. For full details, see [DCasv5 and ECasv5 series confidential VMs](../confidential-computing/confidential-vm-overview.md#confidential-os-disk-encryption).
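As an illustrative sketch only (the resource group, VM, key vault, and key names here are placeholders, not values from this article), enabling ADE with a key encryption key from the Azure CLI might look like this:

```azurecli
# Enable Azure Disk Encryption on a VM, wrapping the data encryption key with a KEK.
# All names below are placeholders - use your own resource group, VM, key vault, and key.
az vm encryption enable \
    --resource-group myResourceGroup \
    --name myVM \
    --disk-encryption-keyvault myKeyVault \
    --key-encryption-key myKEK \
    --volume-type ALL
```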
Here's a comparison of Disk Storage SSE, ADE, encryption at host, and Confidenti
| Temp disk encryption | &#10060; | &#x2705; | &#x2705; | &#10060; | | Encryption of caches | &#10060; | &#x2705; | &#x2705; | &#x2705; | | Data flows encrypted between Compute and Storage | &#10060; | &#x2705; | &#x2705; | &#x2705; |
-| Customer control of keys | &#x2705; When configured with DES | &#x2705; When configured with DES | &#x2705; | &#x2705; |
+| Customer control of keys | &#x2705; When configured with DES | &#x2705; When configured with DES | &#x2705; When configured with KEK | &#x2705; When configured with DES |
| Does not use your VM's CPU | &#x2705; | &#x2705; | &#10060; | &#10060; | | Works for custom images | &#x2705; | &#x2705; | &#10060; Does not work for custom Linux images | &#x2705; | | Enhanced Key Protection | &#10060; | &#10060; | &#10060; | &#x2705; |
virtual-machines Disks Types https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/disks-types.md
The following table provides a comparison of the five disk types to help you dec
| - | - | -- | | | | | **Disk type** | SSD | SSD |SSD | SSD | HDD | | **Scenario** | IO-intensive workloads such as [SAP HANA](workloads/sap/hana-vm-operations-storage.md), top tier databases (for example, SQL, Oracle), and other transaction-heavy workloads. | Production and performance-sensitive workloads that consistently require low latency and high IOPS and throughput | Production and performance sensitive workloads | Web servers, lightly used enterprise applications and dev/test | Backup, non-critical, infrequent access |
-| **Max disk size** | 65,536 gigabytes (GiB) | 65,536 GiB |32,767 GiB | 32,767 GiB | 32,767 GiB |
+| **Max disk size** | 65,536 GiB | 65,536 GiB |32,767 GiB | 32,767 GiB | 32,767 GiB |
| **Max throughput** | 4,000 MB/s | 1,200 MB/s | 900 MB/s | 750 MB/s | 500 MB/s | | **Max IOPS** | 160,000 | 80,000 | 20,000 | 6,000 | 2,000, 3,000* | | **Usable as OS Disk?** | No | No | Yes | Yes | Yes |
It's possible for a performance resize operation to fail because of a lack of pe
[!INCLUDE [managed-disks-ultra-disks-GA-scope-and-limitations](../../includes/managed-disks-ultra-disks-GA-scope-and-limitations.md)]
-If you would like to start using ultra disks, see the article on [using Azure ultra disks](disks-enable-ultra-ssd.md).
+If you would like to start using ultra disks, see the article on [using Azure Ultra Disks](disks-enable-ultra-ssd.md).
## Premium SSD v2
-Azure Premium SSD v2 is designed for IO-intense enterprise workloads that require consistent sub-millisecond disk latencies and high IOPS and throughput at a low cost. The performance (capacity, throughput, and IOPS) of Premium SSD v2 disks can be independently configured at any time, making it easier for more scenarios to be cost efficient while meeting performance needs. For example, a transaction-intensive database workload may need a large amount of IOPS at a small size, or a gaming application may need a large amount of IOPS during peak hours. Premium SSD v2 is suited for a broad range of workloads such as SQL server, Oracle, MariaDB, SAP, Cassandra, Mongo DB, big data/analytics, and gaming, on virtual machines or stateful containers.
+Premium SSD v2 offers higher performance than Premium SSDs while generally being less costly than Ultra Disks. You can individually tweak the performance (capacity, throughput, and IOPS) of Premium SSD v2 disks at any time, allowing workloads to be cost efficient while meeting shifting performance needs. For example, a transaction-intensive database may need a large amount of IOPS at a small size, or a gaming application may need a large amount of IOPS but only during peak hours. Because of this, for most general purpose workloads, Premium SSD v2 can provide the best price performance.
+
+Premium SSD v2 is suited for a broad range of workloads such as SQL server, Oracle, MariaDB, SAP, Cassandra, Mongo DB, big data/analytics, and gaming, on virtual machines or stateful containers.
Premium SSD v2 supports a 4k physical sector size by default, but can be configured to use a 512E sector size as well. While most applications are compatible with 4k sector sizes, some require 512-byte sector sizes. Oracle Database, for example, requires release 12.2 or later in order to support 4k native disks. For older versions of Oracle DB, a 512-byte sector size is required.
Unlike Premium SSDs, Premium SSD v2 doesn't have dedicated sizes. You can set a
### Premium SSD v2 performance
-With Premium SSD v2 disks, you can individually set the capacity, throughput, and IOPS of a disk based on your workload needs, providing you more flexibility and reduced costs. Each of these values determine the cost of your disk.
+With Premium SSD v2 disks, you can individually set the capacity, throughput, and IOPS of a disk based on your workload needs, providing you with more flexibility and reduced costs. Each of these values determines the cost of your disk.
#### Premium SSD v2 capacities
All Premium SSD v2 disks have a baseline IOPS of 3000 that is free of charge. Af
#### Premium SSD v2 throughput
-All Premium SSD v2 disks have a baseline throughput of 125 MB/s, that is free of charge. After 6 GiB, the maximum throughput that can be set increases by 0.25 MB/s per set IOPS. If a disk has 3,000 IOPS, the max throughput it can set is 750 MB/s. To raise the throughput for this disk beyond 750 MB/s, its IOPS must be increased. For example, if you increased the IOPS to 4,000, then the max throughput that can be set is 1,000. 1,200 MB/s is the maximum throughput supported for disks that have 5,000 IOPS or more. Increasing your throughput beyond 125 increases the price of your disk.
+All Premium SSD v2 disks have a baseline throughput of 125 MB/s that is free of charge. After 6 GiB, the maximum throughput that can be set increases by 0.25 MB/s per set IOPS. If a disk has 3,000 IOPS, the max throughput it can set is 750 MB/s. To raise the throughput for this disk beyond 750 MB/s, its IOPS must be increased. For example, if you increased the IOPS to 4,000, then the max throughput that can be set is 1,000. 1,200 MB/s is the maximum throughput supported for disks that have 5,000 IOPS or more. Increasing your throughput beyond 125 increases the price of your disk.
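As a minimal sketch of setting these values (the resource group, disk name, and zone below are placeholders), a Premium SSD v2 disk with the 4,000 IOPS / 1,000 MB/s combination from the example above could be created like this:

```azurecli
# Create a Premium SSD v2 disk with independently configured IOPS and throughput.
# myResourceGroup, myPremiumV2Disk, and the zone are placeholders.
az disk create \
    --resource-group myResourceGroup \
    --name myPremiumV2Disk \
    --size-gb 100 \
    --sku PremiumV2_LRS \
    --zone 1 \
    --disk-iops-read-write 4000 \
    --disk-mbps-read-write 1000
```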
#### Premium SSD v2 Sector Sizes Premium SSD v2 supports a 4k physical sector size by default. A 512E sector size is also supported. While most applications are compatible with 4k sector sizes, some require 512-byte sector sizes. Oracle Database, for example, requires release 12.2 or later in order to support 4k native disks. For older versions of Oracle DB, 512-byte sector size is required.
Refer to the [Azure Disks pricing page](https://azure.microsoft.com/pricing/deta
### Azure disk reservation
-Disk reservation provides you a discount on the advance purchase of one year's of disk storage, reducing your total cost. When you purchase a disk reservation, you select a specific disk SKU in a target region. For example, you may choose five P30 (1 TiB) Premium SSDs in the Central US region for a one year term. The disk reservation experience is similar to Azure reserved VM instances. You can bundle VM and Disk reservations to maximize your savings. For now, Azure Disks Reservation offers one year commitment plan for Premium SSD SKUs from P30 (1 TiB) to P80 (32 TiB) in all production regions. For more information about reserved disks pricing, see [Azure Disks pricing page](https://azure.microsoft.com/pricing/details/managed-disks/).
+Disk reservation provides you with a discount on the advance purchase of one year of disk storage, reducing your total cost. When you purchase a disk reservation, you select a specific disk SKU in a target region. For example, you may choose five P30 (1 TiB) Premium SSDs in the Central US region for a one-year term. The disk reservation experience is similar to Azure reserved VM instances. You can bundle VM and Disk reservations to maximize your savings. For now, Azure Disks Reservation offers a one-year commitment plan for Premium SSD SKUs from P30 (1 TiB) to P80 (32 TiB) in all production regions. For more information about reserved disks pricing, see the [Azure Disks pricing page](https://azure.microsoft.com/pricing/details/managed-disks/).
## Next steps
virtual-machines Diagnostics Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/extensions/diagnostics-template.md
+ Previously updated : 05/31/2017 Last updated : 04/21/2023 # Use monitoring and diagnostics with a Windows VM and Azure Resource Manager templates The Azure Diagnostics Extension provides the monitoring and diagnostics capabilities on a Windows-based Azure virtual machine. You can enable these capabilities on the virtual machine by including the extension as part of the Azure Resource Manager template. See [Authoring Azure Resource Manager Templates with VM Extensions](../windows/template-description.md#extensions) for more information on including any extension as part of a virtual machine template. This article describes how you can add the Azure Diagnostics extension to a Windows virtual machine template.
The Azure Diagnostics Extension provides the monitoring and diagnostics capabili
## Add the Azure Diagnostics extension to the VM resource definition To enable the diagnostics extension on a Windows Virtual Machine, you need to add the extension as a VM resource in the Resource Manager template.
-For a simple Resource Manager based Virtual Machine add the extension configuration to the *resources* array for the Virtual Machine:
+For a simple Resource Manager based Virtual Machine, add the extension configuration to the *resources* array for the Virtual Machine:
```json "resources": [
The *publisher* property with the value of **Microsoft.Azure.Diagnostics** and t
The value of the *name* property can be used to refer to the extension in the resource group. Setting it specifically to **Microsoft.Insights.VMDiagnosticsSettings** enables it to be easily identified by the Azure portal ensuring that the monitoring charts show up correctly in the Azure portal.
-The *typeHandlerVersion* specifies the version of the extension you would like to use. Setting *autoUpgradeMinorVersion* minor version to **true** ensures that you get the latest Minor version of the extension that is available. It is highly recommended that you always set *autoUpgradeMinorVersion* to always be **true** so that you always get to use the latest available diagnostics extension with all the new features and bug fixes.
+The *typeHandlerVersion* specifies the version of the extension you would like to use. Setting *autoUpgradeMinorVersion* to **true** ensures that you get the latest minor version of the extension that is available. It's highly recommended that you always set *autoUpgradeMinorVersion* to **true** so that you always get the latest available diagnostics extension with all the new features and bug fixes.
-The *settings* element contains configurations properties for the extension that can be set and read back from the extension (sometimes referred to as public configuration). The *xmlcfg* property contains xml based configuration for the diagnostics logs, performance counters etc that are collected by the diagnostics agent. See [Diagnostics Configuration Schema](../../azure-monitor/agents/diagnostics-extension-schema-windows.md) for more information about the xml schema itself. A common practice is to store the actual xml configuration as a variable in the Azure Resource Manager template and then concatenate and base64 encode them to set the value for *xmlcfg*. See the section on [diagnostics configuration variables](#diagnostics-configuration-variables) to understand more about how to store the xml in variables. The *storageAccount* property specifies the name of the storage account to which diagnostics data is transferred.
+The *settings* element contains configuration properties for the extension that can be set and read back from the extension (sometimes referred to as public configuration). The *xmlcfg* property contains XML-based configuration for the diagnostics logs, performance counters, etc. that are collected by the diagnostics agent. See [Diagnostics Configuration Schema](../../azure-monitor/agents/diagnostics-extension-schema-windows.md) for more information about the XML schema itself. A common practice is to store the actual XML configuration as a variable in the Azure Resource Manager template and then concatenate and base64 encode it to set the value for *xmlcfg*. See the section on [diagnostics configuration variables](#diagnostics-configuration-variables) to understand more about how to store the XML in variables. The *storageAccount* property specifies the name of the storage account to which diagnostics data is transferred.
-The properties in *protectedSettings* (sometimes referred to as private configuration) can be set but cannot be read back after being set. The write-only nature of *protectedSettings* makes it useful for storing secrets like the storage account key where the diagnostics data is written.
+The properties in *protectedSettings* (sometimes referred to as private configuration) can be set but can't be read back after being set. The write-only nature of *protectedSettings* makes it useful for storing secrets like the storage account key where the diagnostics data is written.
## Specifying diagnostics storage account as parameters The diagnostics extension json snippet above assumes two parameters *existingdiagnosticsStorageAccountName* and
The diagnostics extension json snippet above assumes two parameters *existingdia
} ```
-It is best practice to specify a diagnostics storage account in a different resource group than the resource group for the virtual machine. A resource group can be considered to be a deployment unit with its own lifetime, a virtual machine can be deployed and redeployed as new configurations updates are made it to it but you may want to continue storing the diagnostics data in the same storage account across those virtual machine deployments. Having the storage account in a different resource enables the storage account to accept data from various virtual machine deployments making it easy to troubleshoot issues across the various versions.
+It's best practice to specify a diagnostics storage account in a different resource group than the resource group for the virtual machine. A resource group can be considered a deployment unit with its own lifetime; a virtual machine can be deployed and redeployed as new configuration updates are made to it, but you may want to continue storing the diagnostics data in the same storage account across those virtual machine deployments. Having the storage account in a different resource group enables the storage account to accept data from various virtual machine deployments, making it easy to troubleshoot issues across the various versions.
> [!NOTE] > If you create a Windows virtual machine template from Visual Studio, the default storage account might be set to use the same storage account where the virtual machine VHD is uploaded. This is to simplify initial setup of the VM. Refactor the template to use a different storage account that can be passed in as a parameter.
The following example shows the xml for metrics definitions:
</Metrics> ```
-The *resourceID* attribute uniquely identifies the virtual machine in your subscription. Make sure to use the subscription() and resourceGroup() functions so that the template automatically updates those values based on the subscription and resource group you are deploying to.
+The *resourceID* attribute uniquely identifies the virtual machine in your subscription. Make sure to use the subscription() and resourceGroup() functions so that the template automatically updates those values based on the subscription and resource group you're deploying to.
-If you are creating multiple Virtual Machines in a loop, you have to populate the *resourceID* value with an copyIndex() function to correctly differentiate each individual VM. The *xmlCfg* value can be updated to support this as follows:
+If you're creating multiple Virtual Machines in a loop, you have to populate the *resourceID* value with a copyIndex() function to correctly differentiate each individual VM. The *xmlCfg* value can be updated to support this as follows:
```json "xmlCfg": "[base64(concat(variables('wadcfgxstart'), variables('wadmetricsresourceid'), concat(parameters('vmNamePrefix'), copyindex()), variables('wadcfgxend')))]",
The Metrics configuration above generates tables in your diagnostics storage acc
* **WADMetrics**: Standard prefix for all WADMetrics tables * **PT1H** or **PT1M**: Signifies that the table contains aggregate data over 1 hour or 1 minute
-* **P10D**: Signifies the table will contain data for 10 days from when the table started collecting data
+* **P10D**: Signifies the table contains data for 10 days from when the table started collecting data
* **V2S**: String constant * **yyyymmdd**: The date at which the table started collecting data
Example: *WADMetricsPT1HP10DV2S20151108* contains metrics data aggregated over a
Each WADMetrics table contains the following columns: * **PartitionKey**: The partition key is constructed based on the *resourceID* value to uniquely identify the VM resource. For example: `002Fsubscriptions:<subscriptionID>:002FresourceGroups:002F<ResourceGroupName>:002Fproviders:002FMicrosoft:002ECompute:002FvirtualMachines:002F<vmName>`
-* **RowKey**: Follows the format `<Descending time tick>:<Performance Counter Name>`. The descending time tick calculation is max time ticks minus the time of the beginning of the aggregation period. For example if the sample period started on 10-Nov-2015 and 00:00Hrs UTC then the calculation would be: `DateTime.MaxValue.Ticks - (new DateTime(2015,11,10,0,0,0,DateTimeKind.Utc).Ticks)`. For the memory available bytes performance counter the row key will look like: `2519551871999999999__:005CMemory:005CAvailable:0020Bytes`
+* **RowKey**: Follows the format `<Descending time tick>:<Performance Counter Name>`. The descending time tick calculation is max time ticks minus the time of the beginning of the aggregation period. For example, if the sample period started on 10-Nov-2015 at 00:00 hrs UTC, then the calculation would be: `DateTime.MaxValue.Ticks - (new DateTime(2015,11,10,0,0,0,DateTimeKind.Utc).Ticks)`. For the memory available bytes performance counter, the row key looks like: `2519551871999999999__:005CMemory:005CAvailable:0020Bytes`
* **CounterName**: Is the name of the performance counter. This matches the *counterSpecifier* defined in the xml config. * **Maximum**: The maximum value of the performance counter over the aggregation period. * **Minimum**: The minimum value of the performance counter over the aggregation period.
virtual-machines Hpc Compute Infiniband Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/extensions/hpc-compute-infiniband-linux.md
vm-linux Previously updated : 03/13/2023 Last updated : 04/21/2023
An extension is also available to install InfiniBand drivers for [Windows VMs](h
### Operating system
-This extension supports the following OS distros, depending on driver support for specific OS version.
+This extension supports the following OS distros, depending on driver support for specific OS versions. For the latest list of supported OS and driver versions, refer to [resources.json](https://github.com/Azure/azhpc-extensions/blob/master/InfiniBand/resources.json).
| Distribution | Version | InfiniBand NIC drivers | ||||
This extension supports the following OS distros, depending on driver support fo
| CentOS | 7.4, 7.5, 7.6, 7.7, 7.8, 7.9, 8.1, 8.2 | CX3-Pro, CX5, CX6 | | Red Hat Enterprise Linux | 7.4, 7.5, 7.6, 7.7, 7.8, 7.9, 8.1, 8.2 | CX3-Pro, CX5, CX6 |
-For latest list of supported OS and driver versions, refer to [resources.json](https://github.com/Azure/azhpc-extensions/blob/master/InfiniBand/resources.json)
+> [!IMPORTANT]
+> This document references a release version of Linux that is nearing or at End of Life (EOL). Please consider updating to a more current version.
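If it helps to see the extension applied, a minimal sketch using the Azure CLI could look like the following (the resource group and VM names are placeholders):

```azurecli
# Install the InfiniBand driver extension on an existing RDMA-capable Linux VM.
# myResourceGroup and myHpcVM are placeholders - substitute your own names.
az vm extension set \
    --resource-group myResourceGroup \
    --vm-name myHpcVM \
    --publisher Microsoft.HpcCompute \
    --name InfiniBandDriverLinux
```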
### Internet connectivity
virtual-machines Network Watcher Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/extensions/network-watcher-linux.md
Previously updated : 03/27/2023 Last updated : 04/20/2023
The Network Watcher Agent extension can be configured for the following Linux di
| OpenSUSE Leap | 42.3+ | | CentOS | 6.10 and 7 |
-> [!IMPORTANT]
-> Keep in consideration Red Hat Enterprise Linux 6.X and Oracle Linux 6.x is already EOL.
-> RHEL 6.10 has available [ELS support](https://www.redhat.com/en/resources/els-datasheet), which [will end on 06/2024]( https://access.redhat.com/product-life-cycles/?product=Red%20Hat%20Enterprise%20Linux,OpenShift%20Container%20Platform%204).
-> Oracle Linux version 6.10 has available [ELS support](https://www.oracle.com/a/ocom/docs/linux/oracle-linux-extended-support-ds.pdf), which [will end on 07/2024](https://www.oracle.com/a/ocom/docs/elsp-lifetime-069338.pdf).
+> [!NOTE]
+> Red Hat Enterprise Linux 6.X and Oracle Linux 6.x have reached their end-of-life (EOL).
+> RHEL 6.10 has available [ELS support](https://www.redhat.com/en/resources/els-datasheet) through [June 30, 2024]( https://access.redhat.com/product-life-cycles/?product=Red%20Hat%20Enterprise%20Linux,OpenShift%20Container%20Platform%204).
+> Oracle Linux version 6.10 has available [ELS support](https://www.oracle.com/a/ocom/docs/linux/oracle-linux-extended-support-ds.pdf) through [July 1, 2024](https://www.oracle.com/a/ocom/docs/elsp-lifetime-069338.pdf).
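As a sketch only (the resource group and VM names below are placeholders), the agent can be installed on a supported Linux VM with the Azure CLI:

```azurecli
# Install the Network Watcher Agent extension on a Linux VM.
# myResourceGroup and myLinuxVM are placeholders - substitute your own names.
az vm extension set \
    --resource-group myResourceGroup \
    --vm-name myLinuxVM \
    --publisher Microsoft.Azure.NetworkWatcher \
    --name NetworkWatcherAgentLinux
```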
### Internet connectivity
virtual-machines Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/extensions/troubleshoot.md
+ Previously updated : 03/29/2016 Last updated : 04/21/2023 # Troubleshooting Azure Windows VM extension failures [!INCLUDE [virtual-machines-common-extensions-troubleshoot](../../../includes/virtual-machines-common-extensions-troubleshoot.md)]
virtual-machines Hb Series Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/hb-series-overview.md
description: Learn about the preview support for the HB-series VM size in Azure.
Previously updated : 03/04/2023 Last updated : 04/20/2023
The following diagram shows the segregation of cores reserved for Azure Hypervis
| MPI Support | HPC-X, Intel MPI, OpenMPI, MVAPICH2, MPICH, Platform MPI | | Additional Frameworks | UCX, libfabric, PGAS | | Azure Storage Support | Standard and Premium Disks (maximum 4 disks) |
-| OS Support for SRIOV RDMA | CentOS/RHEL 7.6+, Ubuntu 16.04+, SLES 12 SP4+, WinServer 2016+ |
+| OS Support for SRIOV RDMA | CentOS/RHEL 7.6+, Ubuntu 18.04+, SLES 15.4, WinServer 2016+ |
| Orchestrator Support | CycleCloud, Batch, AKS; [cluster configuration options](sizes-hpc.md#cluster-configuration-options) |
+> [!IMPORTANT]
+> This document references a release version of Linux that is nearing or at End of Life (EOL). Please consider updating to a more current version.
+ ## Next steps - Learn more about [AMD EPYC architecture](https://bit.ly/2Epv3kC) and [multi-chip architectures](https://bit.ly/2GpQIMb). For more detailed information, see the [HPC Tuning Guide for AMD EPYC Processors](https://bit.ly/2T3AWZ9).
virtual-machines Isolation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/isolation.md
Previously updated : 11/05/2020 Last updated : 04/20/2023 -+ # Virtual machine isolation in Azure
**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Windows VMs :heavy_check_mark: Flexible scale sets :heavy_check_mark: Uniform scale sets [!INCLUDE [virtual-machines-common-isolation](../../includes/virtual-machines-common-isolation.md)]-
virtual-machines Linux Vm Connect https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux-vm-connect.md
Previously updated : 04/25/2022 Last updated : 04/06/2023 # Connect to a Linux VM
-In Azure there are multiple ways to connect to a Linux virtual machine. The most common practice for connecting to a Linux VM is using the Secure Shell Protocol (SSH). This is done via any standard SSH client commonly found in Linux and Windows. You can also use [Azure Cloud Shell](../cloud-shell/overview.md) from any browser.
+When hosting a Linux virtual machine on Azure, the most common method for accessing that VM is through the Secure Shell Protocol (SSH). Any standard SSH client commonly found in Linux and Windows allows you to connect. You can also use [Azure Cloud Shell](../cloud-shell/overview.md) from any browser.
This document describes how to connect, via SSH, to a VM that has a public IP. If you need to connect to a VM without a public IP, see [Azure Bastion Service](../bastion/bastion-overview.md). ## Prerequisites -- You need an SSH key pair. If you don't already have one, Azure will create a key pair during the deployment process. If you need help with creating one manually, see [Create and use an SSH public-private key pair for Linux VMs in Azure](./linux/mac-create-ssh-keys.md).-- You need an existing Network Security Group (NSG). Most VMs will have an NSG by default, but if you don't already have one you can create one and attach it manually. For more information, see [Create, change, or delete a network security group](../virtual-network/manage-network-security-group.md).-- To connect to a Linux VM, you need the appropriate port open. Typically this will be port 22. The following instructions assume port 22 but the process is the same for other port numbers. You can validate an appropriate port is open for SSH using the troubleshooter or by checking manually in your VM settings. To check if port 22 is open:
+- You need an SSH key pair. If you don't already have one, Azure creates a key pair during the deployment process. If you need help with creating one manually, see [Create and use an SSH public-private key pair for Linux VMs in Azure](./linux/mac-create-ssh-keys.md).
+- You need an existing Network Security Group (NSG). Most VMs have an NSG by default, but if you don't already have one you can create one and attach it manually. For more information, see [Create, change, or delete a network security group](../virtual-network/manage-network-security-group.md).
+- To connect to a Linux VM, you need the appropriate port open. Typically SSH uses port 22. The following instructions assume port 22 but the process is the same for other port numbers. You can validate an appropriate port is open for SSH using the troubleshooter or by checking manually in your VM settings. To check if port 22 is open:
1. On the page for the VM, select **Networking** from the left menu.
- 1. On the **Networking** page, check to see if there is a rule which allows TCP on port 22 from the IP address of the computer you are using to connect to the VM. If the rule exists, you can move to the next section.
+ 1. On the **Networking** page, check to see if there's a rule that allows TCP on port 22 from the IP address of the computer you are using to connect to the VM. If the rule exists, you can move to the next section.
- :::image type="content" source="media/linux-vm-connect/check-rule.png" alt-text="Screenshot showing how to check to see if there is already a rule allowing S S H connections.":::
+ :::image type="content" source="media/linux-vm-connect/check-rule.png" alt-text="Screenshot showing how to check to see if there's already a rule allowing S S H connections.":::
1. If there isn't a rule, add one by selecting **Add inbound port rule**. 1. For **Service**, select **SSH** from the dropdown.
This document describes how to connect, via SSH, to a VM that has a public IP. I
- Your VM must have a public IP address. To check if your VM has a public IP address, select **Overview** from the left menu and look at the **Networking** section. If you see an IP address next to **Public IP address**, then your VM has a public IP
- If your VM does not have a public IP Address, it will look like this:
+ If your VM doesn't have a public IP Address, it looks like this:
:::image type="content" source="media/linux-vm-connect/no-public-ip.png" alt-text="Screenshot of how the networking section looks when you do not have a public I P.":::
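If you prefer the command line, a quick way to check is to query the VM's public IP with the Azure CLI (the resource group and VM names here are placeholders):

```azurecli
# Show the public IP address assigned to the VM, if any.
# myResourceGroup and myVM are placeholders - substitute your own names.
az vm show \
    --resource-group myResourceGroup \
    --name myVM \
    --show-details \
    --query publicIps \
    --output tsv
```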
This document describes how to connect, via SSH, to a VM that has a public IP. I
## Connect to the VM
-Once the above prerequisites are met, you are ready to connect to your VM. Open your SSH client of choice. The SSH client command is typically included in Linux, macOS, and Windows. If you are using Windows 7 or older, where Win32 OpenSSH is not included by default, consider installing [WSL](/windows/wsl/about) or using [Azure Cloud Shell](../cloud-shell/overview.md) from the browser.
+Once the above prerequisites are met, you're ready to connect to your VM. Open your SSH client of choice. The SSH client command is typically included in Linux, macOS, and Windows. If you're using Windows 7 or older, where Win32 OpenSSH isn't included by default, consider installing [WSL](/windows/wsl/about) or using [Azure Cloud Shell](../cloud-shell/overview.md) from the browser.
> [!NOTE] > The following examples assume the SSH key is in the key.pem format. If you used CLI or Azure PowerShell to download your keys, they may be in the id_rsa format.
Once the above prerequisites are met, you are ready to connect to your VM. Open
### SSH with a new key pair 1. Ensure your public and private keys are in the correct directory. The directory is usually `~/.ssh`.
- If you generated keys manually or generated them with the CLI, then the keys are probably already there. However, if you downloaded them in pem format from the Azure portal, you may need to move them to the right location. This can be done with the following syntax: `mv PRIVATE_KEY_SOURCE PRIVATE_KEY_DESTINATION`
+ If you generated keys manually or generated them with the CLI, then the keys are probably already there. However, if you downloaded them in pem format from the Azure portal, you may need to move them to the right location. Moving the keys is done with the following syntax: `mv PRIVATE_KEY_SOURCE PRIVATE_KEY_DESTINATION`
For example, if the key is in the `Downloads` folder, and `myKey.pem` is the name of your SSH key, type: ```bash
Once the above prerequisites are met, you are ready to connect to your VM. Open
``` 4. Validate the returned fingerprint.
- If you have never connected to this VM before, you'll be asked to verify the hosts fingerprint. It's tempting to simply accept the fingerprint presented, but that exposes you to a potential person in the middle attack. You should always validate the hosts fingerprint. You only need to do this the first time you connect from a client. To get the host fingerprint via the portal, use the Run Command feature to execute the command:
+   If you've never connected to this VM before, you're asked to verify the host's fingerprint. It's tempting to simply accept the fingerprint presented, but that exposes you to a potential person in the middle attack. You should always validate the host's fingerprint. You only need to do this the first time you connect from a client. To get the host fingerprint via the portal, use the Run Command feature to execute the command:
```bash ssh-keygen -lf /etc/ssh/ssh_host_ecdsa_key.pub | awk '{print $2}'
Once the above prerequisites are met, you are ready to connect to your VM. Open
``` 2. Validate the returned fingerprint.
- If you have never connected to this VM before you will be asked to verify the hosts fingerprint. It is tempting to simply accept the fingerprint presented, however, this exposes you to a possible person in the middle attack. You should always validate the hosts fingerprint. You only need to do this on the first time you connect from a client. To obtain the host fingerprint via the portal, use the Run Command feature to execute the command:
+   If you've never connected to the desired VM from your current SSH client before, you're asked to verify the host's fingerprint. While the default option is to accept the fingerprint presented, doing so exposes you to a possible "person in the middle" attack. You should always validate the host's fingerprint, which only needs to be done the first time your client connects. To obtain the host fingerprint via the portal, use the Run Command feature to execute the command:
```bash ssh-keygen -lf /etc/ssh/ssh_host_ecdsa_key.pub | awk '{print $2}' ```
-3. Success! You should now be connected to your VM. If you're unable to connect, see our troubleshooting guide [Troubleshoot SSH connections](/troubleshoot/azure/virtual-machines/troubleshoot-ssh-connection).
+3. Success! You should now be connected to your VM. If you're unable to connect, see our [troubleshooting guide](/troubleshoot/azure/virtual-machines/troubleshoot-ssh-connection).
### Password authentication > [!WARNING]
-> This type of authentication method is not as secure and is not recommended.
+> This type of authentication method is not as secure as an SSH key pair and is not recommended.
1. Run the following command in your SSH client. In this example, *20.51.230.13* is the public IP Address of your VM and *azureuser* is the username you created when you created the VM.
Once the above prerequisites are met, you are ready to connect to your VM. Open
2. Validate the returned fingerprint.
- If you have never connected to this VM before you will be asked to verify the hosts fingerprint. It is tempting to simply accept the fingerprint presented, however, this exposes you to a possible person in the middle attack. You should always validate the hosts fingerprint. You only need to do this on the first time you connect from a client. To obtain the host fingerprint via the portal, use the Run Command feature to execute the command:
+   If you've never connected to the desired VM from your current SSH client before, you're asked to verify the host's fingerprint. While the default option is to accept the fingerprint presented, doing so exposes you to a possible "person in the middle" attack. You should always validate the host's fingerprint, which only needs to be done the first time your client connects. To obtain the host fingerprint via the portal, use the Run Command feature to execute the command:
```bash ssh-keygen -lf /etc/ssh/ssh_host_ecdsa_key.pub | awk '{print $2}' ```
-3. Success! You should now be connected to your VM. If you're unable to connect using the correct method above, see [Troubleshoot SSH connections](/troubleshoot/azure/virtual-machines/troubleshoot-ssh-connection).
+3. Success! You should now be connected to your VM. If you're unable to connect, see [Troubleshoot SSH connections](/troubleshoot/azure/virtual-machines/troubleshoot-ssh-connection).
## [Windows command line (cmd.exe, PowerShell etc.)](#tab/Windows)
Once the above prerequisites are met, you are ready to connect to your VM. Open
``` 3. Validate the returned fingerprint.
- If you have never connected to this VM before you will be asked to verify the hosts fingerprint. It is tempting to simply accept the fingerprint presented, however, this exposes you to a possible person in the middle attack. You should always validate the hosts fingerprint. You only need to do this on the first time you connect from a client. To obtain the host fingerprint via the portal, use the Run Command feature to execute the command:
+   If you've never connected to the desired VM from your current SSH client before, you're asked to verify the host's fingerprint. While the default option is to accept the fingerprint presented, doing so exposes you to a possible "person in the middle" attack. You should always validate the host's fingerprint, which only needs to be done the first time your client connects. To obtain the host fingerprint via the portal, use the Run Command feature to execute the command:
```azurepowershell-interactive Invoke-AzVMRunCommand -ResourceGroupName 'myResourceGroup' -VMName 'myVM' -CommandId 'RunPowerShellScript' -ScriptString
Once the above prerequisites are met, you are ready to connect to your VM. Open
2. Validate the returned fingerprint.
- If you have never connected to this VM before you will be asked to verify the hosts fingerprint. It is tempting to simply accept the fingerprint presented, however, this exposes you to a potential person in the middle attack. You should always validate the hosts fingerprint. You only need to do this on the first time you connect from a client. To obtain the host fingerprint via the portal, use the Run Command feature to execute the command:
+   If you've never connected to the desired VM from your current SSH client before, you're asked to verify the host's fingerprint. While the default option is to accept the fingerprint presented, doing so exposes you to a possible "person in the middle" attack. You should always validate the host's fingerprint, which only needs to be done the first time your client connects. To obtain the host fingerprint via the portal, use the Run Command feature to execute the command:
```bash ssh-keygen -lf /etc/ssh/ssh_host_ecdsa_key.pub | awk '{print $2}'
virtual-machines N Series Driver Setup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/n-series-driver-setup.md
Previously updated : 12/16/2022 Last updated : 04/06/2023 + # Install NVIDIA GPU drivers on N-series VMs running Linux
To install CUDA drivers, make an SSH connection to each VM. To verify that the s
```bash lspci | grep -i NVIDIA ```
-You will see output similar to the following example (showing an NVIDIA Tesla K80 card):
+Output is similar to the following example (showing an NVIDIA Tesla K80 card):
![lspci command output](./media/n-series-driver-setup/lspci.png)
-lspci lists the PCIe devices on the VM, including the InfiniBand NIC and GPUs, if any. If lspci doesn't return successfully, you may need to install LIS on CentOS/RHEL (instructions below).
+lspci lists the PCIe devices on the VM, including the InfiniBand NIC and GPUs, if any. If lspci doesn't return successfully, you may need to install LIS on CentOS/RHEL.
+ Then run installation commands specific for your distribution. ### Ubuntu 1. Download and install the CUDA drivers from the NVIDIA website. > [!NOTE]
- > The example below shows the CUDA package path for Ubuntu 20.04. Replace the path specific to the version you plan to use.
+ > The example shows the CUDA package path for Ubuntu 20.04. Replace the path specific to the version you plan to use.
> > Visit the [NVIDIA Download Center](https://developer.download.nvidia.com/compute/cuda/repos/) or the [NVIDIA CUDA Resources page](https://developer.nvidia.com/cuda-downloads?target_os=Linux&target_arch=x86_64&Distribution=Ubuntu&target_version=20.04&target_type=deb_network) for the full path specific to each version. >
sudo reboot
### CentOS or Red Hat Enterprise Linux
-1. Update the kernel (recommended). If you choose not to update the kernel, ensure that the versions of `kernel-devel` and `dkms` are appropriate for your kernel.
+1. Update the kernel (recommended). If you choose not to update the kernel, ensure that the versions of `kernel-devel` and `dkms` are appropriate for your kernel.
``` sudo yum install kernel kernel-tools kernel-headers kernel-devel sudo reboot ```
-2. Install the latest [Linux Integration Services for Hyper-V and Azure](https://www.microsoft.com/download/details.aspx?id=55106). Check if LIS is required by verifying the results of lspci. If all GPU devices are listed as expected (and documented above), installing LIS is not required.
+2. Install the latest [Linux Integration Services for Hyper-V and Azure](https://www.microsoft.com/download/details.aspx?id=55106). Check if LIS is required by verifying the results of lspci. If all GPU devices are listed as expected, installing LIS isn't required.
- Please note that LIS is applicable to Red Hat Enterprise Linux, CentOS, and the Oracle Linux Red Hat Compatible Kernel 5.2-5.11, 6.0-6.10, and 7.0-7.7. Please refer to the [Linux Integration Services documentation](https://www.microsoft.com/en-us/download/details.aspx?id=55106) for more details.
+ LIS is applicable to Red Hat Enterprise Linux, CentOS, and the Oracle Linux Red Hat Compatible Kernel 5.2-5.11, 6.0-6.10, and 7.0-7.7. Refer to the [Linux Integration Services documentation](https://www.microsoft.com/en-us/download/details.aspx?id=55106) for more details.
Skip this step if you plan to use CentOS/RHEL 7.8 (or higher versions) as LIS is no longer required for these versions. ```bash
sudo reboot
> Visit [Fedora](https://dl.fedoraproject.org/pub/epel/) and [Nvidia CUDA repo](https://developer.download.nvidia.com/compute/cuda/repos/) to pick the correct package for the CentOS or RHEL version you want to use. >
-For example, CentOS 8 and RHEL 8 will need the following steps.
+For example, CentOS 8 and RHEL 8 need the following steps.
```bash sudo rpm -Uvh https://dl.fedoraproject.org/pub/epel/epel-release-latest-8.noarch.rpm
For example, CentOS 8 and RHEL 8 will need the following steps.
To query the GPU device state, SSH to the VM and run the [nvidia-smi](https://developer.nvidia.com/nvidia-system-management-interface) command-line utility installed with the driver.
-If the driver is installed, you will see output similar to the following. Note that **GPU-Util** shows 0% unless you are currently running a GPU workload on the VM. Your driver version and GPU details may be different from the ones shown.
+If the driver is installed, Nvidia SMI lists the **GPU-Util** as 0% until you run a GPU workload on the VM. Your driver version and GPU details may be different from the ones shown.
![NVIDIA device status](./media/n-series-driver-setup/smi.png) ## RDMA network connectivity
-RDMA network connectivity can be enabled on RDMA-capable N-series VMs such as NC24r deployed in the same availability set or in a single placement group in a virtual machine (VM) scale set. The RDMA network supports Message Passing Interface (MPI) traffic for applications running with Intel MPI 5.x or a later version. Additional requirements follow:
+RDMA network connectivity can be enabled on RDMA-capable N-series VMs such as NC24r deployed in the same availability set or in a single placement group in a virtual machine (VM) scale set. The RDMA network supports Message Passing Interface (MPI) traffic for applications running with Intel MPI 5.x or a later version.
### Distributions
To install NVIDIA GRID drivers on NV or NVv3-series VMs, make an SSH connection
sudo apt-get install build-essential ubuntu-desktop -y sudo apt-get install linux-azure -y ```
-3. Disable the Nouveau kernel driver, which is incompatible with the NVIDIA driver. (Only use the NVIDIA driver on NV or NVv2 VMs.) To do this, create a file in `/etc/modprobe.d` named `nouveau.conf` with the following contents:
+3. Disable the Nouveau kernel driver, which is incompatible with the NVIDIA driver. (Only use the NVIDIA driver on NV or NVv2 VMs.) To disable the driver, create a file in `/etc/modprobe.d` named `nouveau.conf` with the following contents:
``` blacklist nouveau
To install NVIDIA GRID drivers on NV or NVv3-series VMs, make an SSH connection
EnableUI=FALSE ```
-9. Remove the following from `/etc/nvidia/gridd.conf` if it is present:
+9. Remove the following from `/etc/nvidia/gridd.conf` if it's present:
``` FeatureType=0
To install NVIDIA GRID drivers on NV or NVv3-series VMs, make an SSH connection
blacklist lbm-nouveau ```
-3. Reboot the VM, reconnect, and install the latest [Linux Integration Services for Hyper-V and Azure](https://www.microsoft.com/download/details.aspx?id=55106). Check if LIS is required by verifying the results of lspci. If all GPU devices are listed as expected (and documented above), installing LIS is not required.
+3. Reboot the VM, reconnect, and install the latest [Linux Integration Services for Hyper-V and Azure](https://www.microsoft.com/download/details.aspx?id=55106). Check if LIS is required by verifying the results of lspci. If all GPU devices are listed as expected, installing LIS isn't required.
Skip this step if you plan to use CentOS/RHEL 7.8 (or higher versions) as LIS is no longer required for these versions.
To install NVIDIA GRID drivers on NV or NVv3-series VMs, make an SSH connection
sudo cp /etc/nvidia/gridd.conf.template /etc/nvidia/gridd.conf ```
-8. Add the following to `/etc/nvidia/gridd.conf`:
+8. Add the following two lines to `/etc/nvidia/gridd.conf`:
``` IgnoreSP=FALSE EnableUI=FALSE ```
-9. Remove the following from `/etc/nvidia/gridd.conf` if it is present:
+9. Remove the following line from `/etc/nvidia/gridd.conf` if it's present:
``` FeatureType=0
To install NVIDIA GRID drivers on NV or NVv3-series VMs, make an SSH connection
To query the GPU device state, SSH to the VM and run the [nvidia-smi](https://developer.nvidia.com/nvidia-system-management-interface) command-line utility installed with the driver.
-If the driver is installed, you will see output similar to the following. Note that **GPU-Util** shows 0% unless you are currently running a GPU workload on the VM. Your driver version and GPU details may be different from the ones shown.
+If the driver is installed, Nvidia SMI lists the **GPU-Util** as 0% until you run a GPU workload on the VM. Your driver version and GPU details may be different from the ones shown.
![Screenshot that shows the output when the GPU device state is queried.](./media/n-series-driver-setup/smi-nv.png)
Then, create an entry for your update script in `/etc/rc.d/rc3.d` so the script
* You can set persistence mode using `nvidia-smi` so the output of the command is faster when you need to query cards. To set persistence mode, execute `nvidia-smi -pm 1`. Note that if the VM is restarted, the mode setting goes away. You can always script the mode setting to execute upon startup (see the sketch after this list). * If you updated the NVIDIA CUDA drivers to the latest version and find RDMA connectivity is no longer working, [reinstall the RDMA drivers](#rdma-network-connectivity) to reestablish that connectivity. * During installation of LIS, if a certain CentOS/RHEL OS version (or kernel) is not supported for LIS, an error "Unsupported kernel version" is thrown. Please report this error along with the OS and kernel versions.
-* If jobs are interrupted by ECC errors on the GPU (either correctable or uncorrectable), first check to see if the GPU meets any of Nvidia's [RMA criteria for ECC errors](https://docs.nvidia.com/deploy/dynamic-page-retirement/https://docsupdatetracker.net/index.html#faq-pre). If the GPU is eligible for RMA, please contact support about getting it serviced; otherwise, reboot your VM to reattach the GPU as described [here](https://docs.nvidia.com/deploy/dynamic-page-retirement/https://docsupdatetracker.net/index.html#bl_reset_reboot). Note that less invasive methods such as `nvidia-smi -r` do not work with the virtualization solution deployed in Azure.
+* If jobs are interrupted by ECC errors on the GPU (either correctable or uncorrectable), first check to see if the GPU meets any of Nvidia's [RMA criteria for ECC errors](https://docs.nvidia.com/deploy/dynamic-page-retirement/https://docsupdatetracker.net/index.html#faq-pre). If the GPU is eligible for RMA, please contact support about getting it serviced; otherwise, reboot your VM to reattach the GPU as described [here](https://docs.nvidia.com/deploy/dynamic-page-retirement/https://docsupdatetracker.net/index.html#bl_reset_reboot). Less invasive methods such as `nvidia-smi -r` don't work with the virtualization solution deployed in Azure.
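One possible way to make persistence mode stick across reboots (this is just a sketch; a systemd unit or an `/etc/rc.local` entry would work equally well, and the `nvidia-smi` path may differ on your distro) is a root crontab entry that runs at boot:

```bash
# Re-enable NVIDIA persistence mode at every boot via root's crontab.
# Appends an @reboot entry without clobbering existing crontab entries.
(sudo crontab -l 2>/dev/null; echo "@reboot /usr/bin/nvidia-smi -pm 1") | sudo crontab -
```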
## Next steps
virtual-machines Spot Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/spot-template.md
Previously updated : 03/25/2020 Last updated : 04/21/2023
**Applies to:** :heavy_check_mark: Linux VMs
-Using [Azure Spot Virtual Machines](../spot-vms.md) allows you to take advantage of our unused capacity at a significant cost savings. At any point in time when Azure needs the capacity back, the Azure infrastructure will evict Azure Spot Virtual Machines. Therefore, Azure Spot Virtual Machines are great for workloads that can handle interruptions like batch processing jobs, dev/test environments, large compute workloads, and more.
+Using [Azure Spot Virtual Machines](../spot-vms.md) allows you to take advantage of our unused capacity at a significant cost savings. At any point in time when Azure needs the capacity back, the Azure infrastructure evicts Azure Spot VMs. Azure Spot VMs are great for workloads that can handle interruptions like batch processing jobs, dev/test environments, large compute workloads, and more.
Pricing for Azure Spot Virtual Machines is variable, based on region and SKU. For more information, see VM pricing for [Linux](https://azure.microsoft.com/pricing/details/virtual-machines/linux/) and [Windows](https://azure.microsoft.com/pricing/details/virtual-machines/windows/).
-You have option to set a max price you are willing to pay, per hour, for the VM. The max price for an Azure Spot Virtual Machine can be set in US dollars (USD), using up to 5 decimal places. For example, the value `0.98765`would be a max price of $0.98765 USD per hour. If you set the max price to be `-1`, the VM won't be evicted based on price. The price for the VM will be the current price for Azure Spot Virtual Machines or the price for a standard VM, which ever is less, as long as there is capacity and quota available. For more information about setting the max price, see [Azure Spot Virtual Machines - Pricing](../spot-vms.md#pricing).
+You have the option to set a max price you're willing to pay, per hour, for the VM. The max price for an Azure Spot VM can be set in US dollars (USD), using up to five decimal places. For example, the value `0.98765` would be a max price of $0.98765 USD per hour. If you set the max price to `-1`, the VM isn't evicted based on price. Its price will be the current price for Azure Spot VMs or the price for a standard VM, whichever is less, as long as there's capacity and quota available. For more information about setting the max price, see [Azure Spot VMs - Pricing](../spot-vms.md#pricing).
## Use a template
-For Azure Spot Virtual Machine template deployments, use`"apiVersion": "2019-03-01"` or later. Add the `priority`, `evictionPolicy` and `billingProfile` properties to in your template:
+For Azure Spot VM template deployments, use `"apiVersion": "2019-03-01"` or later. Add the `priority`, `evictionPolicy`, and `billingProfile` properties to your template:
```json "priority": "Spot",
For Azure Spot Virtual Machine template deployments, use`"apiVersion": "2019-03-
} ```
-Here is a sample template with the added properties for an Azure Spot Virtual Machine. Replace the resource names with your own and `<password>` with a password for the local administrator account on the VM.
+Here's a sample template with added properties for an Azure Spot VM. Replace the resource names with your own and `<password>` with a password for the local administrator account on the VM.
```json {
Here is a sample template with the added properties for an Azure Spot Virtual Ma
"imageReference": { "publisher": "Canonical", "offer": "UbuntuServer",
- "sku": "18.04-LTS",
+ "sku": "22.04-LTS",
"version": "latest" } },
Here is a sample template with the added properties for an Azure Spot Virtual Ma
## Simulate an eviction
-You can [simulate an eviction](/rest/api/compute/virtualmachines/simulateeviction) of an Azure Spot Virtual Machine, to testing how well your application will repond to a sudden eviction.
+You can [simulate an eviction](/rest/api/compute/virtualmachines/simulateeviction) of an Azure Spot VM to test how your application responds to a sudden eviction.
-Replace the following with your information:
+Replace the following parameters with your information:
- `subscriptionId` - `resourceGroupName`
POST https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/
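If you'd rather not call the REST endpoint directly, the Azure CLI offers an equivalent command (the resource group and VM names below are placeholders):

```azurecli
# Simulate an eviction of a Spot VM so you can observe how your workload reacts.
# myResourceGroup and mySpotVM are placeholders - substitute your own names.
az vm simulate-eviction \
    --resource-group myResourceGroup \
    --name mySpotVM
```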
## Next steps
-You can also create an Azure Spot Virtual Machine using [Azure PowerShell](../windows/spot-powershell.md) or the [Azure CLI](spot-cli.md).
-
-Query current pricing information using the [Azure retail prices API](/rest/api/cost-management/retail-prices/azure-retail-prices) for information about Azure Spot Virtual Machine pricing. The `meterName` and `skuName` will both contain `Spot`.
-
-If you encounter an error, see [Error codes](../error-codes-spot.md).
+* You can also create an Azure Spot VM using [Azure PowerShell](../windows/spot-powershell.md) or the [Azure CLI](spot-cli.md).
+* For more information about current Azure Spot VM pricing, query the [Azure retail prices API](/rest/api/cost-management/retail-prices/azure-retail-prices). Both `meterName` and `skuName` contain `Spot`.
+* To learn more about an error, see [Error codes](../error-codes-spot.md).
virtual-machines Install Openframe Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/mainframe-rehosting/tmaxsoft/install-openframe-azure.md
documentationcenter: Previously updated : 04/02/2019 Last updated : 04/19/2023
Learn how to set up an OpenFrame environment on Azure suitable for development, demos, testing, or production workloads. This tutorial walks you through each step.
-OpenFrame includes multiple components that create the mainframe emulation environment on Azure. For example, OpenFrame online services replace the mainframe middleware such as IBM Customer Information Control System (CICS), and OpenFrame Batch, with its TJES component, replaces the IBM mainframeΓÇÖs Job Entry Subsystem (JES).
+OpenFrame includes multiple components that create the mainframe emulation environment on Azure. For example, OpenFrame online services replace the mainframe middleware such as IBM Customer Information Control System (CICS), and OpenFrame Batch, with its TJES component, replaces the IBM mainframe's Job Entry Subsystem (JES).
-OpenFrame works with any relational database, including Oracle Database, Microsoft SQL Server, IBM Db2, and MySQL. This installation of OpenFrame uses the TmaxSoft Tibero relational database. Both OpenFrame and Tibero run on a Linux operating system. This tutorial installs CentOS 7.3, although you can use other supported Linux distributions.The OpenFrame application server and the Tibero database are installed on one virtual machine (VM).
+OpenFrame works with any relational database, including Oracle Database, Microsoft SQL Server, IBM Db2, and MySQL. This installation of OpenFrame uses the TmaxSoft Tibero relational database. Both OpenFrame and Tibero run on a Linux operating system. This tutorial installs CentOS 7.3, although you can use other supported Linux distributions. The OpenFrame application server and the Tibero database are installed on one virtual machine (VM).
The tutorial steps you through the installation of the OpenFrame suite components. Some must be installed separately.
Main OpenFrame components:
- Tibero database. - Open Database Connectivity (ODBC) is used by applications in OpenFrame to communicate with the Tibero database. - OpenFrame Base, the middleware that manages the entire system.-- OpenFrame Batch, the solution that replaces the mainframeΓÇÖs batch systems.
+- OpenFrame Batch, the solution that replaces the mainframe's batch systems.
- TACF, a service module that controls user access to systems and resources. - ProSort, a sort tool for batch transactions.-- OFCOBOL, a compiler that interprets the mainframeΓÇÖs COBOL programs.-- OFASM, a compiler that interprets the mainframeΓÇÖs assembler programs.-- OpenFrame Server Type C (OSC ), the solution that replaces the mainframeΓÇÖs middleware and IBM CICS.-- Java Enterprise User Solution (JEUS ), a web application server that is certified for Java Enterprise Edition 6.
+- OFCOBOL, a compiler that interprets the mainframe's COBOL programs.
+- OFASM, a compiler that interprets the mainframe's assembler programs.
+- OpenFrame Server Type C (OSC), the solution that replaces the mainframe's middleware and IBM CICS.
+- Java Enterprise User Solution (JEUS), a web application server that is certified for Java Enterprise Edition 6.
- OFGW, the OpenFrame gateway component that provides a 3270 listener.-- OFManager, a solution that provides OpenFrameΓÇÖs operation and management functions in the web environment.
+- OFManager, a solution that provides OpenFrame's operation and management functions in the web environment.
Other required OpenFrame components: - OSI, the solution that replaces the mainframe middleware and IMS DC.-- TJES, the solution that provides the mainframeΓÇÖs JES environment.
+- TJES, the solution that provides the mainframe's JES environment.
- OFTSAM, the solution that enables (V)SAM files to be used in the open system.-- OFHiDB, the solution that replaces the mainframeΓÇÖs IMS DB.-- OFPLI, a compiler that interprets the mainframeΓÇÖs PL/I programs.
+- OFHiDB, the solution that replaces the mainframe's IMS DB.
+- OFPLI, a compiler that interprets the mainframe's PL/I programs.
- PROTRIEVE, a solution that executes the mainframe language CA-Easytrieve. - OFMiner, a solution that analyzes the mainframes assets and then migrates them to Azure.
Plan on spending a few days to assemble all the required software and complete a
Before getting started, do the following: - Get the OpenFrame installation media from TmaxSoft. If you are an existing TmaxSoft customer, contact your TmaxSoft representative for a licensed copy. Otherwise, request a trial version from [TmaxSoft](https://www.tmaxsoft.com/contact/).- - Request the OpenFrame documentation by sending email to <support@tmaxsoft.com>.- - Get an Azure subscription if you don't already have one. You can also create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.- - Optional. Set up a site-to-site VPN tunnel or a jumpbox that restricts access to the Azure VM to the permitted users in your organization. This step is not required, but it is a best practice. ## Set up a VM on Azure for OpenFrame and Tibero
You can set up the OpenFrame environment using various deployment patterns, but
**To create a VM** 1. Sign in to the [Azure portal](https://portal.azure.com).- 2. Click **Virtual machines**. ![Resource list in Azure portal](media/vm-01.png)
You can set up the OpenFrame environment using various deployment patterns, but
![Operating System options in Azure portal](media/vm-04.png) 6. In the **Basics** settings, enter **Name**, **User name**, **Authentication type**, **Subscription** (Pay-As-You-Go is the AWS style of payment), and **Resource group** (use an existing one or create a TmaxSoft group).- 7. When complete (including the public/private key pair for **Authentication type**), click **Submit**. > [!NOTE]
You can set up the OpenFrame environment using various deployment patterns, but
### Generate a public/private key pair
-The public key can be freely shared, but the private key should be kept entirely secret and should never be shared with another party. After generating the keys, you must paste the **SSH public key** into the configurationΓÇöin effect, uploading it to the Linux VM. It is stored inside authorized\_keys within the \~/.ssh directory of the user accountΓÇÖs home directory. The Linux VM is then
+The public key can be freely shared, but the private key should be kept entirely secret and should never be shared with another party. After generating the keys, you must paste the **SSH public key** into the configuration; in effect, you upload it to the Linux VM. It's stored inside `authorized_keys` within the `~/.ssh` directory of the user account's home directory. The Linux VM is then
able to recognize and validate the connection once you provide the associated **SSH private key** in the SSH client.
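As a brief illustration of that flow, you could generate the key pair with the OpenSSH tools and hand off only the public half. The file name and comment below are placeholders.

```bash
# Sketch: generate a 4096-bit RSA key pair; the private key never leaves this machine.
ssh-keygen -t rsa -b 4096 -f ~/.ssh/openframe_azure -C "oframe admin key"

# Print the public key so it can be pasted into the VM configuration, or appended
# to ~/.ssh/authorized_keys on the VM by its administrator.
cat ~/.ssh/openframe_azure.pub
```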
-When giving new individuals access the VM:
+When giving new individuals access to the VM:
- Each new individual generates their own public/private keys. - Individuals store their own private keys separately and send the public key information to the administrator of the VM.-- The administrator pastes the contents of the public key to the \~/.ssh/authorized\_keys file.
+- The administrator pastes the contents of the public key to the `~/.ssh/authorized_keys` file.
- The new individual connects via OpenSSH. For more information about creating SSH key pairs, see [Create and use an SSH public-private key pair for Linux VMs in Azure](../../../linux/mac-create-ssh-keys.md). - ### Configure VM features 1. In Azure portal, in the **Choose a size** blade, choose the Linux machine hardware settings you want. The *minimum* requirements for installing both Tibero and OpenFrame are 2 CPUs and 4 GB RAM as shown in this example installation:
For more information about creating SSH key pairs, see [Create and use an SSH pu
![Create virtual machine - Purchase](media/create-vm-02.png) 4. Submit your selections. Azure begins to deploy the VM. This process typically takes a few minutes.- 5. When the VM is deployed, its dashboard is displayed, showing all the settings that were selected during the configuration. Make a note of the **Public IP address**. ![tmax on Azure dashboard](media/create-vm-03.png) 6. Open bash or a PowerShell prompt.- 7. For **Host Name**, type your username and the public IP address you copied. For example, **username\@publicip**. ![Screenshot that shows the PuTTY Configuration dialog box and highlights the Host Name (or IP address) field.](media/putty-01.png)
For more information about creating SSH key pairs, see [Create and use an SSH pu
![PuTTY Configuration dialog box](media/putty-02.png) 9. Click **Open** to launch the PuTTY window. If successful, you are connected to your new CentOS VM running on Azure.- 10. To log on as root user, type **sudo bash**. ![Root user logon in command window](media/putty-03.png)
For more information about creating SSH key pairs, see [Create and use an SSH pu
Now that the VM is created and you are logged on, you must perform a few setup steps and install the required preinstallation packages.
-1. Map the name **ofdemo** to the local IP address by using vi to edit the hosts file (`vi /etc/hosts`). Assuming our IP is 192.168.96.148 ofdemo, this is before the change:
+1. Map the name **ofdemo** to the local IP address by modifying `/etc/hosts` using any text editor. Assuming our IP is `192.168.96.148`, this is before the change:
- ```vi
+ ```config
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4 ::1 localhost localhost.localdomain localhost6 localhost6.localdomain <IP Address> <your hostname> ```
- This is after the change:
+ - This is after the change:
- ```vi
+ ```config
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4 ::1 localhost localhost.localdomain localhost6 localhost6.localdomain 192.168.96.148 ofdemo
Now that the VM is created and you are logged on, you must perform a few setup s
2. Create groups and users:
- ```vi
- [root@ofdemo ~]# adduser -d /home/oframe7 oframe7
- [root@ofdemo ~]# passwd oframe7
+ ```bash
+ sudo adduser -d /home/oframe7 oframe7
``` 3. Change the password for user oframe7:
- ```vi
+ ```bash
+ sudo passwd oframe7
+ ```
+
+ ```output
New password: Retype new password: passwd: all authentication tokens updated successfully. ```
-4. Update the kernel parameters in /etc/sysctl.conf:
+4. Update the kernel parameters in `/etc/sysctl.conf` using any text editor:
- ```vi
- [root@ofdemo ~]# vi /etc/sysctl.conf
+ ```text
kernel.shmall = 7294967296 kernel.sem = 10000 32000 10000 10000 ``` 5. Refresh the kernel parameters dynamically without reboot:
- ```vi
- [root@ofdemo ~]# /sbin/sysctl -p
+ ```bash
+ sudo /sbin/sysctl -p
``` 6. Get the required packages: Make sure the server is connected to the Internet, download the following packages, and then install them:
Now that the VM is created and you are logged on, you must perform a few setup s
> [!NOTE] > After installing the ncurses package, create the following symbolic links:
- ```
- ln -s /usr/lib64/libncurses.so.5.9 /usr/lib/libtermcap.so
- ln -s /usr/lib64/libncurses.so.5.9 /usr/lib/libtermcap.so.2
+
+ ```bash
+ sudo ln -s /usr/lib64/libncurses.so.5.9 /usr/lib/libtermcap.so
+ sudo ln -s /usr/lib64/libncurses.so.5.9 /usr/lib/libtermcap.so.2
``` - gcc
Now that the VM is created and you are logged on, you must perform a few setup s
7. In case of Java RPM installation, do the following:
+```bash
+sudo rpm -ivh jdk-7u79-linux-x64.rpm
```
-root@ofdemo ~]# rpm -ivh jdk-7u79-linux-x64.rpm
-[root@ofdemo ~]# vi .bash_profile
+- Add the following contents to `~/.bash_profile` using any text editor:
+
+```text
# JAVA ENV export JAVA_HOME=/usr/java/jdk1.7.0_79/ export PATH=$JAVA_HOME/bin:$PATH export CLASSPATH=$CLASSPATH:$JAVA_HOME/jre/lib/ext:$JAVA_HOME/lib/tools.jar
+```
+
+- Execute the following command to load the profile:
+
+```bash
+source /etc/profile
+```
+
+- Validate the java version using the following command:
-[root@ofdemo ~]# source /etc/profile
-[root@ofdemo ~]# java ΓÇôversion
+```bash
+java -version
+```
+```output
java version "1.7.0_79" Java(TM) SE Runtime Environment (build 1.7.0_79-b15) Java HotSpot(TM) 64-Bit Server VM (build 24.79-b02, mixed mode)
+```
-[root@ofdemo ~]# echo $JAVA_HOME /usr/java/jdk1.7.0_79/
+```bash
+echo $JAVA_HOME
+# Expected output: /usr/java/jdk1.7.0_79/
``` ## Install the Tibero database
Tibero provides the several key functions in the OpenFrame environment on Azure:
1. Verify that the Tibero binary installer file is present and review the version number. 2. Copy the Tibero software to the Tibero user account (oframe). For example:
- ```
- [oframe7@ofdemo ~]$ tar -xzvf tibero6-bin-6_rel_FS04-linux64-121793-opt-tested.tar.gz
- [oframe7@ofdemo ~]$ mv license.xml /opt/tmaxdb/tibero6/license/
+ ```bash
+ tar -xzvf tibero6-bin-6_rel_FS04-linux64-121793-opt-tested.tar.gz
+ mv license.xml /opt/tmaxdb/tibero6/license/
```
-3. Open .bash\_profile in vi (`vi .bash_profile`) and paste the following in it:
+3. Open `.bash_profile` using any text editor and paste the following in it:
- ```
+ ```text
# Tibero6 ENV export TB_HOME=/opt/tmaxdb/tibero6 export TB_SID=TVSAM export TB_PROF_DIR=$TB_HOME/bin/prof
Tibero provides the several key functions in the OpenFrame environment on Azure:
4. To execute the bash profile, at the command prompt type:
- ```
+ ```bash
source .bash_profile ```
-5. Generate the tip file (a configuration file for Tibero), then open it in vi. For example:
+5. Generate the tip file (a configuration file for Tibero), and check its contents. For example:
- ```
- [oframe7@ofdemo ~]$ sh $TB_HOME/config/gen_tip.sh
- [oframe7@ofdemo ~]$ vi $TB_HOME/config/$TB_SID.tip
+ ```bash
+ sh $TB_HOME/config/gen_tip.sh
+ cat $TB_HOME/config/$TB_SID.tip
```
-6. Modify \$TB\_HOME/client/config/tbdsn.tbr and put 127.0.0.1 instead oflocalhost as shown:
+6. Modify `$TB_HOME/client/config/tbdsn.tbr` using any text editor and put 127.0.0.1 instead of localhost as shown:
- ```
+ ```text
TVSAM=( (INSTANCE=(HOST=127.0.0.1) (PT=8629)
Tibero provides the several key functions in the OpenFrame environment on Azure:
7. Create the database. The following output appears:
- ```
+ ```output
Change core dump dir to /opt/tmaxdb/tibero6/bin/prof. Listener port = 8629 Tibero 6
Tibero provides the several key functions in the OpenFrame environment on Azure:
8. To recycle Tibero, first shut it down using the `tbdown` command. For example:
+ ```bash
+ tbdown
```
- [oframe7@ofdemo ~]$$ tbdown
+
+ ```output
Tibero instance terminated (NORMAL mode). ``` 9. Now boot Tibero using `tbboot`. For example:
+ ```bash
+ tbboot
```
- [oframe7@ofdemo ~]$ tbboot
+
+ ```output
Change core dump dir to /opt/tmaxdb/tibero6/bin/prof. Listener port = 8629 Tibero 6
Tibero provides the several key functions in the OpenFrame environment on Azure:
10. To create a tablespace, access the database using SYS user (sys/tmax), then create the necessary tablespace for the default volume and TACF:
+ ```bash
+ tbsql tibero/tmax
```
- [oframe7@ofdemo ~]$ tbsql tibero/tmax
+
+ ```output
tbSQL 6 TmaxData Corporation Copyright (c) 2008-. All rights reserved. Connected to Tibero.
Tibero provides the several key functions in the OpenFrame environment on Azure:
11. Now type the following SQL commands:
- ```
+ ```sql
SQL> create tablespace "DEFVOL" datafile 'DEFVOL.dbf' size 500M autoextend on; create tablespace "TACF00" datafile 'TACF00.dbf' size 500M autoextend on; create tablespace "OFM_REPOSITORY" datafile 'ofm_repository.dbf' size 300M autoextend on; SQL> Tablespace 'DEFVOL' created. SQL> Tablespace 'TACF00' created.
Tibero provides the several key functions in the OpenFrame environment on Azure:
12. Boot Tibero and verify that the Tibero processes are running:
- ```
- [oframe7@ofdemo ~]$ tbboot
+ ```bash
+ tbboot
ps -ef | egrep tbsvr ```
API provided by the open-source unixODBC project.
To install ODBC:
-1. Verify that the unixODBC-2.3.4.tar.gz installer file is present, or use the `wget unixODBC-2.3.4.tar.gz` command. For example:
+1. Verify that the `unixODBC-2.3.4.tar.gz` installer file is present, or use the `wget unixODBC-2.3.4.tar.gz` command. For example:
- ```
- [oframe7@ofdemo ~]$ wget ftp://ftp.unixodbc.org/pub/unixODBC/unixODBC-2.3.4.tar.gz
+ ```bash
+ wget ftp://ftp.unixodbc.org/pub/unixODBC/unixODBC-2.3.4.tar.gz
``` 2. Unzip the binary. For example:
- ```
- [oframe7@ofdemo ~]$ tar -zxvf unixODBC-2.3.4.tar.gz
+ ```bash
+ tar -zxvf unixODBC-2.3.4.tar.gz
``` 3. Navigate to unixODBC-2.3.4 directory and generate the Makefile by using the checking machine information. For example:
- ```
- [oframe7@ofdemo unixODBC-2.3.4]$ ./configure --prefix=/opt/tmaxapp/unixODBC/ --sysconfdir=/opt/tmaxapp/unixODBC/etc
+ ```bash
+ ./configure --prefix=/opt/tmaxapp/unixODBC/ --sysconfdir=/opt/tmaxapp/unixODBC/etc
```
- By default, unixODBC is installed in /usr /local, so `--prefix` passes a value to change the location. Similarly, configuration files are installed in /etc by default, so `--sysconfdir` passes the value of the desired location.
-
-4. Execute Makefile: `[oframe7@ofdemo unixODBC-2.3.4]$ make`
-
+   By default, unixODBC is installed in /usr/local, so `--prefix` passes a value to change the location. Similarly, configuration files are installed in `/etc` by default, so `--sysconfdir` passes the value of the desired location.
+4. Execute Makefile: `make`
5. Copy the executable file in the program directory after compiling. For example:
- ```
- [oframe7@ofdemo unixODBC-2.3.4]$ make install
+ ```bash
+ make install
```
-6. Use vi to edit the bash profile (`vi ~/.bash_profile`) and add the following:
+6. Edit the bash profile `~/.bash_profile` using any text editor and add the following:
- ```
+ ```text
# UNIX ODBC ENV export ODBC_HOME=$HOME/unixODBC export PATH=$ODBC_HOME/bin:$PATH
To install ODBC:
7. Apply the ODBC. Edit the following files accordingly. For example:
+ ```bash
+ source ~/.bash_profile
+ cd
+ odbcinst -j unixODBC 2.3.4
```
- [oframe7@ofdemo unixODBC-2.3.4]$ source ~/.bash_profile
- [oframe7@ofdemo ~]$ cd
-
- [oframe7@ofdemo ~]$ odbcinst -j unixODBC 2.3.4
+ ```output
DRIVERS............: /home/oframe7/odbcinst.ini SYSTEM DATA SOURCES: /home/oframe7/odbc.ini FILE DATA SOURCES..: /home/oframe7/ODBCDataSources
To install ODBC:
SQLULEN Size.......: 8 SQLLEN Size........: 8 SQLSETPOSIROW Size.: 8
+ ```
- [oframe7@ofdemo ~]$ vi odbcinst.ini
+ - Modify `odbcinst.ini` using any text editor, and add the following contents:
+ ```config
[Tibero] Description = Tibero ODBC driver for Tibero6 Driver = /opt/tmaxdb/tibero6/client/lib/libtbodbc.so
To install ODBC:
ForceTrace = Yes Pooling = No DEBUG = 1
+ ```
- [oframe7@ofdemo ~]$ vi odbc.ini
+ - Modify `odbc.ini` using any text editor, and add the following contents:
+ ```config
[TVSAM] Description = Tibero ODBC driver for Tibero6 Driver = Tibero
To install ODBC:
8. Create a symbolic link and validate the Tibero database connection:
- ```
- [oframe7@ofdemo ~]$ ln $ODBC_HOME/lib/libodbc.so $ODBC_HOME/lib/libodbc.so.1 [oframe7@ofdemo ~]$ ln $ODBC_HOME/lib/libodbcinst.so
- $ODBC_HOME/lib/libodbcinst.so.1
-
- [oframe7@ofdemo lib]$ isql TVSAM tibero tmax
+ ```bash
+ ln $ODBC_HOME/lib/libodbc.so $ODBC_HOME/lib/libodbc.so.1
+ ln $ODBC_HOME/lib/libodbcinst.so $ODBC_HOME/lib/libodbcinst.so.1
+ isql TVSAM tibero tmax
``` The following output is displayed:
The Base application server is installed before the individual services that Ope
**To install OpenFrame Base** 1. Make sure the Tibero installation succeeded, then verify that the following OpenFrame\_Base7\_0\_Linux\_x86\_64.bin installer file and base.properties configuration file are present.- 2. Update the bash profile with the following Tibero-specific information: ```bash
The Base application server is installed before the individual services that Ope
alias defvol='cd $OPENFRAME_HOME/volume_default' ```
-3. Execute the bash profile:`[oframe7@ofdemo ~]$ . .bash_profile`
+3. Execute the bash profile: `. .bash_profile`
4. Ensure that the Tibero processes are running. For example:
- ```linux
- [oframe7@ofdemo ~]$ ps -ef|grep tbsvr
+ ```bash
+ ps -ef|grep tbsvr
``` ![Base](media/base-01.png)
The Base application server is installed before the individual services that Ope
5. Generate license at [technet.tmaxsoft.com](https://technet.tmaxsoft.com/en/front/main/main.do) and PUT the OpenFrame Base, Batch, TACF, OSC licenses in the appropriate folder:
- ```
- [oframe7@ofdemo ~]$ cp license.dat /opt/tmaxapp/OpenFrame/core/license/
- [oframe7@ofdemo ~]$ cp lictjes.dat lictacf.dat licosc.dat $OPENFRAME_HOME/license/
+ ```bash
+ cp license.dat /opt/tmaxapp/OpenFrame/core/license/
+ cp lictjes.dat lictacf.dat licosc.dat $OPENFRAME_HOME/license/
```
-6. Download the OpenFrame Base binary and base.properties files:
+6. Download the OpenFrame Base binary and `base.properties` files:
- ```
- [oframe7@ofdemo ~]$ vi base.properties
+ - Modify the `base.properties` file accordingly, using any text editor:
+
+ ```config
OPENFRAME_HOME= <appropriate location for installation> ex. /opt/tmaxapp/OpenFrame TP_HOST_NAME=<your IP Hostname> ex. ofdemo TP_HOST_IP=<your IP Address> ex. 192.168.96.148 TP_SHMKEY=63481
The Base application server is installed before the individual services that Ope
OPENFRAME_LICENSE_PATH=/opt/tmaxapp/license/OPENFRAME TMAX_LICENSE_PATH=/opt/tmaxapp/license/TMAX ```
-7. Execute the installer using the base.properties file. For example:
+7. Execute the installer using the `base.properties` file. For example:
- ```
- [oframe7@ofdemo ~]$ chmod a+x OpenFrame_Base7_0_Linux_x86_64.bin
- [oframe7@ofdemo ~]$ ./OpenFrame_Base7_0_Linux_x86_64.bin -f base.properties
+ ```bash
+ chmod a+x OpenFrame_Base7_0_Linux_x86_64.bin
+ ./OpenFrame_Base7_0_Linux_x86_64.bin -f base.properties
```
- When complete, the Installation Complete message is diplayed.
+ When complete, the Installation Complete message is displayed.
8. Verify the OpenFrame Base directory structure using the `ls -ltr` command. For example:
+ ```bash
+ ls -ltr
```
- [oframe7@ofdemo OpenFrame]$ ls -ltr
+
+ ```output
total 44 drwxrwxr-x. 4 oframe7 oframe7 61 Nov 30 16:57 UninstallerData
The Base application server is installed before the individual services that Ope
9. Start OpenFrame Base:
+ ```bash
+ cp /usr/lib/libtermcap.so.2 $TMAXDIR/lib
```
- [oframe7@ofdemo ~]$ cp /usr/lib/libtermcap.so.2 $TMAXDIR/lib
- Startup Tmax Server
- [oframe7@ofdemo ~]$ tmboot
+
+ Start up Tmax Server using the following command:
+
+ ```bash
+ tmboot
``` ![tmboot command output](media/base-02.png)
The Base application server is installed before the individual services that Ope
11. Shut down OpenFrame Base:
+ ```bash
+ tmdown
```
- [oframe7@ofdemo ~]$ tmdown
+
+ ```output
Do you really want to down whole Tmax? (y : n): y TMDOWN for node(NODE1) is starting:
OpenFrame Batch consists of several components that simulate mainframe batch env
**To install Batch**
-1. Make sure the base installation succeeded, then verify that the OpenFrame\_Batch7\_0\_Fix2\_MVS\_Linux\_x86\_64.bin installer file and batch.properties configuration file are present:
-
-2. At the command prompt, type `vi batch.properties` to edit the batch.properties file using vi.
+1. Make sure the base installation succeeded, then verify that the `OpenFrame_Batch7_0_Fix2_MVS_Linux_x86_64.bin` installer file and `batch.properties` configuration file are present:
+2. Modify the `batch.properties` file using any text editor.
3. Modify the parameters as follows:
- ```
+ ```config
OPENFRAME_HOME = /opt/tmaxapp/OpenFrame DEFAULT_VOLSER=DEFVOL TP_NODE_NAME=NODE1
OpenFrame Batch consists of several components that simulate mainframe batch env
4. To execute the batch installer, at the command prompt type:
- ```
+ ```bash
./OpenFrame_Batch7_0_Fix2_MVS_Linux_x86_64.bin -f batch.properties ```
OpenFrame Batch consists of several components that simulate mainframe batch env
7. Execute the following commands:
- ```
+ ```bash
$$2 NODE1 (tmadm): quit ADM quit for node (NODE1) ``` 8. Use the `tmdown` command to start up and shut down Batch:
+ ```bash
+ tmdown
```
- [oframe7@ofdemo ~]$tmdown
+
+ ```output
Do you really want to down whole Tmax? (y : n): y TMDOWN for node(NODE1) is starting:
TACF Manager is an OpenFrame service module that controls user access to systems
**To install TACF**
-1. Verify that the OpenFrame\_Tacf7\_0\_Fix2\_Linux\_x86\_64.bin installer file and tacf.properties configuration file are present.
-2. Make sure the Batch installation succeeded, then use vi to open the tacf.properties file (`vi tacf.properties`).
+1. Verify that the `OpenFrame_Tacf7_0_Fix2_Linux_x86_64.bin` installer file and `tacf.properties` configuration file are present.
+2. Make sure the Batch installation succeeded, then modify the file `tacf.properties` using any text editor.
3. Modify the TACF parameters:
- ```
+ ```config
OPENFRAME_HOME=/opt/tmaxapp/OpenFrame USE_OS_AUTH=NO TACF_USERNAME=tibero
TACF Manager is an OpenFrame service module that controls user access to systems
4. After completing TACF installer, apply the TACF environment variables. At the command prompt, type:
- ```
- source \~/.bash\_profile
+ ```bash
+ source ~/.bash_profile
``` 5. Execute the TACF installer. At the command prompt, type:
- ```
+ ```bash
./OpenFrame_Tacf7_0_Fix2_Linux_x86_64.bin -f tacf.properties ``` The output looks something like this:
- ```
+ ```output
Wed Dec 07 17:36:42 EDT 2016 Free Memory: 18703 kB Total Memory: 28800 kB
TACF Manager is an OpenFrame service module that controls user access to systems
6. At the command prompt, type `tmboot` to restart OpenFrame. The output looks something like this:
- ```
+ ```output
TMBOOT for node(NODE1) is starting: Welcome to Tmax demo system: it will expire 2016/11/4 Today: 2016/9/7
TACF Manager is an OpenFrame service module that controls user access to systems
7. Verify that the process status is ready using `tmadmin` in the `si` command. For example:
- ```
- [oframe7\@ofdemo \~]\$ tmadmin
+ ```bash
+ tmadmin
``` In the **status** column, RDY appears: ![RDY in the status column](media/tmboot-02.png)
-8. Execute the following commands:
+8. Execute the following commands in the bash terminal:
- ```
+ ```bash
$$2 NODE1 (tmadm): quit
+ ```
+
+ ```output
DM quit for node (NODE1)
- [oframe7@ofdemo ~]$ tacfmgr
+    ```
+
+    ```bash
+    tacfmgr
+    ```
+
+ ```output
Input USERNAME : ROOT Input PASSWORD : SYS1 TACFMGR: TACF MANAGER START!!! QUIT TACFMGR: TACF MANAGER END!!!
+ ```
- [oframe7@ofdemo ~]$ tmdow
+ ```bash
+    tmdown
``` 9. Shut the server down using the `tmdown` command. The output looks something like this:
+ ```bash
+ tmdown
```
- [oframe7@ofdemo ~]$ tmdown
+
+ ```output
Do you really want to down whole Tmax? (y : n): y TMDOWN for node(NODE1) is starting:
ProSort is a utility used in batch transactions for sorting data.
**To install ProSort**
-1. Make sure the Batch installation was successful, and then verify that the **prosort-bin-prosort\_2sp3-linux64-2123-opt.tar.gz** installer file is present.
-
+1. Make sure the Batch installation was successful, and then verify that the `prosort-bin-prosort_2sp3-linux64-2123-opt.tar.gz` installer file is present.
2. Execute the installer using the properties file. At the command prompt, type:
- ```
- tar -zxvf prosort-bin-prosort\_2sp3-linux64-2123-opt.tar.gz
+ ```bash
+ tar -zxvf prosort-bin-prosort_2sp3-linux64-2123-opt.tar.gz
``` 3. Move the prosort directory to the home location. At the command prompt, type:
- ```
+ ```bash
mv prosort /opt/tmaxapp/prosort ``` 4. Create a license subdirectory and copy the license file there. For example:
- ```
+ ```bash
cd /opt/tmaxapp/prosort mkdir license cp /opt/tmaxsw/oflicense/prosort/license.xml /opt/tmaxapp/prosort/license ```
-5. Open bash.profile in vi (`vi .bash_profile`) and update it as follows:
+5. Modify `.bash_profile` using any text editor and update it as follows:
- ```bash
+ ```text
# PROSORT PROSORT_HOME=/opt/tmaxapp/prosort
ProSort is a utility used in batch transactions for sorting data.
7. Create the configuration file. For example:
+ ```bash
+ cd /opt/tmaxapp/prosort/config
+ ./gen_tip.sh
```
- oframe@oframe7: cd /opt/tmaxapp/prosort/config
- oframe@oframe7: ./gen_tip.sh
+
+ ```output
Using PROSORT_SID "gbg" /home/oframe7/prosort/config/gbg.tip generated ``` 8. Create the symbolic link. For example:
- ```
- oframe@oframe7: cd /opt/tmaxapp/OpenFrame/util/
- oframe@oframe7home/oframe7/OpenFrame/util : ln -s DFSORT SORT
+ ```bash
+ cd /opt/tmaxapp/OpenFrame/util/
+ ln -s DFSORT SORT
``` 9. Verify the ProSort installation by executing the `prosort -h` command. For example:
+ ```bash
+ prosort -h
```
- oframe@oframe7: prosort -h
+ ```output
Usage: prosort [options] [sort script files] options -h Display this information
ProSort is a utility used in batch transactions for sorting data.
## Install OFCOBOL
-OFCOBOL is the OpenFrame compiler that interprets the mainframeΓÇÖs COBOL programs.
+OFCOBOL is the OpenFrame compiler that interprets the mainframe's COBOL programs.
**To install OFCOBOL**
-1. Make sure that the Batch/Online installation succeeded, then verify that the OpenFrame\_COBOL3\_0\_40\_Linux\_x86\_64.bin installer file is present.
-
+1. Make sure that the Batch/Online installation succeeded, then verify that the `OpenFrame_COBOL3_0_40_Linux_x86_64.bin` installer file is present.
2. To execute the OFCOBOL installer, at the command prompt, type:
- ```
- ./OpenFrame\_COBOL3\_0\_40\_Linux\_x86\_64.bin
+ ```bash
+ ./OpenFrame_COBOL3_0_40_Linux_x86_64.bin
``` 3. Read the licensing agreement and press Enter to continue.- 4. Accept the licensing agreement. When the installation is complete, the following appears:
- ```
+ ```output
Choose Install Folder -- Where would you like to install?
OFCOBOL is the OpenFrame compiler that interprets the mainframeΓÇÖs COBOL progra
PRESS <ENTER> TO EXIT THE INSTALLER ```
-5. Open the bash profile in vi (`vi .bash_profile`) and verify that is updated with OFCOBOL variables.
+5. Modify the bash profile (`~/.bash_profile`) using any text editor, and verify that it's updated with OFCOBOL variables.
6. Execute the bash profile. At the command prompt, type:
- ```
+ ```bash
source ~/.bash_profile ``` 7. Copy the OFCOBOL license to the installed folder. For example:
- ```
+
+ ```bash
mv licofcob.dat $OFCOB_HOME/license ```
-8. Go to the OpenFrame tjclrun.conf configuration file and open it in vi. For example:
- ```
- [oframe7@ofdemo ~]$ cd $OPENFRAME_HOME/config
- [oframe7@ofdemo ~]$ vi tjclrun.conf
- ```
- Here's the SYSLIB section before the change:
- ```
+8. Modify the OpenFrame `$OPENFRAME_HOME/config/tjclrun.conf` configuration file using any text editor. For example:
+
+ - Here's the SYSLIB section before the change:
+
+ ```config
[SYSLIB] BIN_PATH=${OPENFRAME_HOME}/bin:${OPENFRAME_HOME}/util:${COBDIR}/bin:/usr/local/bin:/bin LIB_PATH=${OPENFRAME_HOME}/lib:${OPENFRAME_HOME}/core/lib:${TB_HOME}/client/lib:${COBDIR}/lib:/ usr/lib:/lib:/lib/i686:/usr/local/lib:${PROSORT_HOME}/lib:/opt/FSUNbsort/lib ```
- Here's the SYSLIB section after the change:
- ```
+
+ - Here's the SYSLIB section after the change:
+
+ ```config
[SYSLIB] BIN_PATH=${OPENFRAME_HOME}/bin:${OPENFRAME_HOME}/util:${COBDIR}/bin:/usr/local/bin:/bin LIB_PATH=${OPENFRAME_HOME}/lib:${OPENFRAME_HOME}/core/lib:${TB_HOME}/client/lib:${COBDIR}/lib:/ usr/lib:/lib:/lib/i686:/usr/local/lib:${PROSORT_HOME}/lib:/opt/FSUNbsort/lib :${ODBC_HOME}/lib :${OFCOB_HOME}/lib ```
-9. Review the OpenFrame\_COBOL\_InstallLog.log file in vi and verify that there are no errors. For example:
+
+9. Review the `OpenFrame_COBOL_InstallLog.log` file and verify that there are no errors. For example:
+
+ ```bash
+ cat $OFCOB_HOME/UninstallerData/log/OpenFrame_COBOL_InstallLog.log
```
- [oframe7@ofdemo ~]$ vi $OFCOB_HOME/UninstallerData/log/OpenFrame_COBOL_InstallLog.log
+
+ ```output
…….. Summary
OFCOBOL is the OpenFrame compiler that interprets the mainframeΓÇÖs COBOL progra
0 NonFatalErrors 0 FatalError ```+ 10. Use the `ofcob --version` command and review the version number to verify the installation. For example:
+ ```bash
+ ofcob --version
```
- [oframe7@ofdemo ~]$ ofcob --version
+
+ ```output
OpenFrame COBOL Compiler 3.0.54 CommitTag:: 645f3f6bf7fbe1c366a6557c55b96c48454f4bf ```
OFCOBOL is the OpenFrame compiler that interprets the mainframeΓÇÖs COBOL progra
## Install OFASM
-OFASM is the OpenFrame compiler that interprets the mainframeΓÇÖs assembler programs.
+OFASM is the OpenFrame compiler that interprets the mainframe's assembler programs.
**To install OFASM** 1. Make sure that the Batch/Online installation succeeded, then verify that the
- OpenFrame\_ASM3\_0\_Linux\_x86\_64.bin installer file is present.
-
+ `OpenFrame_ASM3_0_Linux_x86_64.bin` installer file is present.
2. Execute the installer. For example:
- ```
- [oframe7@ofdemo ~]$ ./OpenFrame_ASM3_0_Linux_x86_64.bin
+ ```bash
+ ./OpenFrame_ASM3_0_Linux_x86_64.bin
``` 3. Read the licensing agreement and press Enter to continue. 4. Accept the licensing agreement. 5. Verify the bash profile is updated with OFASM variables. For example:
+ ```bash
+ source .bash_profile
+ ofasm --version
```
- [oframe7@ofdemo ~]$ source .bash_profile
- [oframe7@ofdemo ~]$ ofasm --version
+
+ ```output
# TmaxSoft OpenFrameAssembler v3 r328 (3ff35168d34f6e2046b96415bbe374160fcb3a34)
+ ```
- [oframe7@ofdemo OFASM]$ vi .bash_profile
+ ```bash
+ cat .bash_profile
+ ```
+ ```output
# OFASM ENV export OFASM_HOME=/opt/tmaxapp/OFASM export OFASM_MACLIB=$OFASM_HOME/maclib/free_macro
OFASM is the OpenFrame compiler that interprets the mainframeΓÇÖs assembler prog
export LD_LIBRARY_PATH="./:$OFASM_HOME/lib:$LD_LIBRARY_PATH" ```
-6. Open the OpenFrame tjclrun.conf configuration file in vi and edit it as follows:
+6. Open the OpenFrame `$OPENFRAME_HOME/config/tjclrun.conf` configuration file using any text editor and modify it as follows:
- ```
- [oframe7@ofdemo ~]$ cd $OPENFRAME_HOME/config
- [oframe7@ofdemo ~]$ vi tjclrun.conf
- ```
-
- Here is the [SYSLIB] section *before* the change:
+ - Here is the [SYSLIB] section *before* the change:
- ```
+ ```config
[SYSLIB] BIN_PATH=${OPENFRAME_HOME}/bin:${OPENFRAME_HOME}/util:${COBDIR}/bin:/usr/local/bin:/bi n:${OPENFRAME_HOME}/volume_default/SYS1.LOADLIB LIB_PATH=${OPENFRAME_HOME}/lib:${OPENFRAME_HOME}/core/lib:${TB_HOME}/client/lib:${CO BDIR}/lib:/usr/lib:/lib:/lib/i686:/usr/local/lib:${PROSORT_HOME}/lib:/opt/FSUNbsort/lib:${OFCOB_HOM E}/lib:${ODBC_HOME}/lib:${OFPLI_HOME}/lib ```
- Here is the [SYSLIB] section *after* the change:
+ - Here is the [SYSLIB] section *after* the change:
- ```
+ ```config
[SYSLIB] BIN_PATH=${OPENFRAME_HOME}/bin:${OPENFRAME_HOME}/util:${COBDIR}/bin:/usr/local/bin:/bi n:${OPENFRAME_HOME}/volume_default/SYS1.LOADLIB LIB_PATH=${OPENFRAME_HOME}/lib:${OPENFRAME_HOME}/core/lib:${TB_HOME}/client/lib:${CO BDIR}/lib:/usr/lib:/lib:/lib/i686:/usr/local/lib:${PROSORT_HOME}/lib:/opt/FSUNbsort/lib:${OFCOB_HOM E}/lib:${ODBC_HOME}/lib:${OFPLI_HOME}/lib:${OFASM_HOME}/lib ```
-7. Open the OpenFrame\_ASM\_InstallLog.log file in vi and verify that there are no errors. For example:
+7. Validate the `OpenFrame_ASM_InstallLog.log` file, and verify that there are no errors. For example:
+ ```bash
+ cat $OFASM_HOME/UninstallerData/log/OpenFrame_ASM_InstallLog.log
```
- [oframe7@ofdemo ~]$ vi
- $OFASM_HOME/UninstallerData/log/OpenFrame_ASM_InstallLog.log
+
+ ```output
…….. Summary
OFASM is the OpenFrame compiler that interprets the mainframeΓÇÖs assembler prog
8. Reboot OpenFrame by issuing one of the following commands:
- ```
+ ```bash
    tmdown / tmboot ``` -or-
- ```
+ ```bash
oscdown / oscboot ```
OSC is the OpenFrame environment similar to IBM CICS that supports high-speed OL
**To install OSC**
-1. Make sure the base installation succeeded, then verify that the OpenFrame\_OSC7\_0\_Fix2\_Linux\_x86\_64.bin installer file and osc.properties configuration file are present.
-2. Edit the following parameters in the osc.properties file:
- ```
+1. Make sure the base installation succeeded, then verify that the `OpenFrame_OSC7_0_Fix2_Linux_x86_64.bin` installer file and `osc.properties` configuration file are present.
+2. Edit the following parameters in the `osc.properties` file:
+
+ ```text
OPENFRAME_HOME=/opt/tmaxapp/OpenFrame OSC_SYS_OSC_NCS_PATH=/opt/tmaxapp/OpenFrame/temp/OSC_NCS OSC_APP_OSC_TC_PATH=/opt/tmaxapp/OpenFrame/temp/OSC_TC ``` 3. Execute the installer using the properties file as shown:
- ```
- [oframe7@ofdemo ~]$ chmod a+x OpenFrame_OSC7_0_Fix2_Linux_x86_64.bin [oframe7@ofdemo ~]$ ./OpenFrame_OSC7_0_Fix2_Linux_x86_64.bin -f osc.properties
+ ```bash
+ chmod a+x OpenFrame_OSC7_0_Fix2_Linux_x86_64.bin
+ ./OpenFrame_OSC7_0_Fix2_Linux_x86_64.bin -f osc.properties
``` When finished, the "Installation Complete" message is displayed. 4. Verify that the bash profile is updated with OSC variables.
-5. Review the OpenFrame\_OSC7\_0\_Fix2\_InstallLog.log file. It should look something like this:
+5. Review the `OpenFrame_OSC7_0_Fix2_InstallLog.log` file. It should look something like this:
- ```
+ ```output
Summary Installation: Successful.
OSC is the OpenFrame environment similar to IBM CICS that supports high-speed OL
0 FatalError ```
-6. Use vi to open the ofsys.seq configuration file. For example:
-
- ```
- vi $OPENFRAME_HOME/config/ofsys.seq
- ```
+6. Modify the `$OPENFRAME_HOME/config/ofsys.seq` configuration file using any text editor. In the #BASE and #BATCH sections, edit the parameters as shown.
-7. In the \#BASE and \#BATCH sections, edit the parameters as shown.
-
- ```
+ ```config
Before changes #BASE ofrsasvr
OSC is the OpenFrame environment similar to IBM CICS that supports high-speed OL
8. Copy the license file. For example:
+ ```bash
+ cp /home/oframe7/oflicense/ofonline/licosc.dat $OPENFRAME_HOME/license
+ cd $OPENFRAME_HOME/license
+ ls -l
```
- [oframe7@ofdemo ~]$ cp /home/oframe7/oflicense/ofonline/licosc.dat
- $OPENFRAME_HOME/license
-
- [oframe7@ofdemo ~]$ cd $OPENFRAME_HOME/license
- oframe@oframe7/OpenFrame/license / ls -l
+ ```output
-rwxr-xr-x. 1 oframe mqm 80 Sep 12 01:37 licosc.dat -rwxr-xr-x. 1 oframe mqm 80 Sep 8 09:40 lictacf.dat -rwxrwxr-x. 1 oframe mqm 80 Sep 3 11:54 lictjes.da ``` 9. To start up and shut down OSC, initialize the CICS region shared memory by typing `osctdlinit OSCOIVP1` at the command prompt.- 10. Run `oscboot` to boot up OSC. The output looks something like this:
- ```
+ ```output
OSCBOOT : pre-processing [ OK ] TMBOOT for node(NODE1) is starting:
Before installing JEUS, install the Apache Ant package, which provides the libra
1. Download Ant binary using the `wget` command. For example:
- ```
+ ```bash
wget http://apache.mirror.cdnetworks.com/ant/binaries/apacheant-1.9.7-bin.tar.gz ``` 2. Use the `tar` utility to extract the binary file and move it to an appropriate location. For example:
- ```
+ ```bash
tar -xvzf apache-ant-1.9.7-bin.tar.gz ``` 3. For efficiency, create a symbolic link:
- ```
+ ```bash
ln -s apache-ant-1.9.7 ant ```
-4. Open the bash profile in vi (`vi .bash_profile`)and update it with the following variables:
+4. Open the bash profile `~/.bash_profile` using any text editor, and update it with the following variables:
- ```
+ ```text
# Ant ENV export ANT_HOME=$HOME/ant export PATH=$HOME/ant/bin:$PATH
Before installing JEUS, install the Apache Ant package, which provides the libra
5. Apply the modified environment variable. For example:
- ```
- [oframe7\@ofdemo \~]\$ source \~/.bash\_profile
+ ```bash
+ source ~/.bash_profile
``` **To install JEUS**
-1. Expand the installer using the `tar` utility. For example:
+1. Extract the installer using the `tar` utility. For example:
- ```
- [oframe7@ofdemo ~]$ tar -zxvf jeus704.tar.gz
+ ```bash
+ mkdir jeus7
+ tar -zxvf jeus704.tar.gz -C jeus7
```
-2. Create a **jeus** folder (`mkdir jeus7`) and unzip the binary.
-3. Change to the **setup** directory (or use the JEUS parameter for your own environment). For example:
+3. Change to the `jeus7/setup` directory (or use the JEUS parameter for your own environment). For example:
- ```
- [oframe7@ofdemo ~]$ cd jeus7/setup/
+ ```bash
+ cd jeus7/setup/
``` 4. Execute `ant clean-all` before performing the build. The output looks something like this:
- ```
+ ```output
Buildfile: /home/oframe7jeus7/setup/build.xml clean-bin:
Before installing JEUS, install the Apache Ant package, which provides the libra
Total time: 0 seconds ```
-5. Make a backup of the domain-config-template.properties file. For example:
-
- ```
- [oframe7@ofdemo ~]$ cp domain-config-template.properties domain-configtemplate.properties.bkp
- ```
-
-6. Open the domain-config-template.properties file in vi:
+5. Make a backup of the `domain-config-template.properties` file. For example:
+ ```bash
+    cp domain-config-template.properties domain-config-template.properties.bkp
```
- [oframe7\@ofdemo setup]\$ vi domain-config-template.properties
- ```
-
-7. Change `jeus.password=jeusadmin nodename=Tmaxsoft` to `jeus.password=tmax1234 nodename=ofdemo`
+6. Open the `domain-config-template.properties` file using any text editor, and change `jeus.password=jeusadmin nodename=Tmaxsoft` to `jeus.password=tmax1234 nodename=ofdemo`.
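    If you prefer to script this edit, a `sed` sketch (GNU sed assumed; values as shown in this step) could look like the following:

    ```bash
    # Sketch only: change the admin password and node name in place.
    sed -i 's/jeus.password=jeusadmin/jeus.password=tmax1234/; s/nodename=Tmaxsoft/nodename=ofdemo/' domain-config-template.properties
    ```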
8. Execute the `ant install` command to build JEUS.
-9. Update the .bash\_profile file with the JEUS variables as shown:
+9. Update the `~/.bash_profile` file with the JEUS variables as shown:
- ```
+ ```text
# JEUS ENV export JEUS_HOME=/opt/tmaxui/jeus7 PATH="/opt/tmaxui/jeus7/bin:/opt/tmaxui/jeus7/lib/system:/opt/tmaxui/jeus7/webserver/bin:$ {PATH}" export PATH
Before installing JEUS, install the Apache Ant package, which provides the libra
10. Execute the bash profile. For example:
- ```
- [oframe7@ofdemo setup]$ . .bash_profile
+ ```bash
+ . .bash_profile
```
-11. *Optional*. Create an alias for easy shutdown and boot of JEUS components:
+11. *Optional*. Create an alias for easy shutdown and boot of JEUS components, using the following commands:
- ```
+ ```bash
# JEUS alias alias dsboot='startDomainAdminServer -domain jeus_domain -u administrator -p jeusadmin' alias msboot='startManagedServer -domain jeus_domain -server server1 -u administrator -p jeusadmin'
- alias msdown=ΓÇÿjeusadmin -u administrator -p tmax1234 "stop-server server1ΓÇ£ΓÇÖ
- alias dsdown=ΓÇÿjeusadmin -domain jeus_domain -u administrator -p tmax1234 "local-shutdownΓÇ£ΓÇÖ
+    alias msdown='jeusadmin -u administrator -p tmax1234 "stop-server server1"'
+    alias dsdown='jeusadmin -domain jeus_domain -u administrator -p tmax1234 "local-shutdown"'
``` 12. To verify the installation, start the domain admin server as shown:
- ```
- [oframe7@ofdemo ~]$ startDomainAdminServer -domain jeus_domain -u administrator -p jeusadmin
+ ```bash
+ startDomainAdminServer -domain jeus_domain -u administrator -p jeusadmin
``` 13. Verify by web logon using the syntax:
- ```
+ ```url
http://<IP>:<port>/webadmin/login ```
Before installing JEUS, install the Apache Ant package, which provides the libra
14. To change the hostname for server1, click **Lock & Edit**, then click **server1**. In the Server window, change the hostname as follows:
- 1. Change **Nodename** to **ofdemo**.
- 2. Click **OK** on the right side of the window.
- 3. Click **Apply changes** on the lower left side of the window and for description, enter *Hostname change*.
+ 1. Change **Nodename** to **ofdemo**.
+ 2. Click **OK** on the right side of the window.
+ 3. Click **Apply changes** on the lower left side of the window and for description, enter *Hostname change*.
![JEUS WebAdmin screen](media/jeus-02.png)
Before installing JEUS, install the Apache Ant package, which provides the libra
![jeus_domain Server screen](media/jeus-03.png)
-16. Start the managed server process ΓÇ£server1ΓÇ¥ using the following command:
+16. Start the managed server process "server1" using the following command:
- ```
- [oframe7@ofdemo ~]$ startManagedServer -domain jeus_domain -server server1 -u administrator -p jeusadmin
+ ```bash
+ startManagedServer -domain jeus_domain -server server1 -u administrator -p jeusadmin
``` ## Install OFGW
OFGW Is the OpenFrame gateway that supports communication between the 3270 termi
**To install OFGW**
-1. Make sure that JEUS was installed successfully, then verify that the OFGW7\_0\_1\_Generic.bin installer file is present.
+1. Make sure that JEUS was installed successfully, then verify that the `OFGW7_0_1_Generic.bin` installer file is present.
2. Execute the installer. For example:
- ```
- [oframe7@ofdemo ~]$ ./OFGW7_0_1_Generic.bin
+ ```bash
+ ./OFGW7_0_1_Generic.bin
```` 3. Use the following locations for the corresponding prompts:
- - JEUS Home directory
- - JEUS Domain Name
- - JEUS Server Name
- - Tibero Driver
- - Tmax Node ID ofdemo
+
+ - JEUS Home directory
+ - JEUS Domain Name
+ - JEUS Server Name
+ - Tibero Driver
+ - Tmax Node ID ofdemo
4. Accept the rest of the defaults, then press Enter to exit the installer. 5. Verify that the URL for OFGW is working as expected:
- ```
+ ```text
Type URL http://192.168.92.133:8088/webterminal/ and press enter < IP > :8088/webterminal/
OFManager provides operation and management functions for OpenFrame in the web e
**To install OFManager**
-1. Verify that the OFManager7\_Generic.bin installer file is present.
+1. Verify that the `OFManager7_Generic.bin` installer file is present.
2. Execute the installer. For example:
- ```
- OFManager7_Generic.bin
+ ```bash
+ ./OFManager7_Generic.bin
```
-3. Press Enter to continue, then accept the license agreement.
-4. Choose the install folder.
-5. Accept the defaults.
-6. Choose Tibero as the database.
-7. Press Enter to exit the installer.
-8. Verify that the URL for OFManager is working as expected:
+3. Press Enter to continue, then accept the license agreement.
+4. Choose the install folder.
+5. Accept the defaults.
+6. Choose Tibero as the database.
+7. Press Enter to exit the installer.
+8. Verify that the URL for OFManager is working as expected:
- ```
+ ```text
Type URL http://192.168.92.133:8088/ofmanager and press enter < IP > : < PORT > ofmanager Enter ID: ROOT Password: SYS1 ```
That completes the installation of the OpenFrame components.
If you are considering a mainframe migration, our expanding partner ecosystem is available to help you. For detailed guidance about choosing a partner solution, refer to the [Platform Modernization Alliance](/data-migration/). -- [Get started with Azure](../../../../index.yml)-- [Host Integration Server (HIS) documentation](/host-integration-server/)-- [Azure Virtual Data Center Lift-and-Shift Guide](/archive/blogs/azurecat/new-whitepaper-azure-virtual-datacenter-lift-and-shift-guide)
+- [Get started with Azure](../../../../index.yml)
+- [Host Integration Server (HIS) documentation](/host-integration-server/)
+- [Azure Virtual Data Center Lift-and-Shift Guide](/archive/blogs/azurecat/new-whitepaper-azure-virtual-datacenter-lift-and-shift-guide)
virtual-network-manager Create Virtual Network Manager Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network-manager/create-virtual-network-manager-portal.md
Now that the Network Group is created, and has the correct VNets, create a mesh
## Deploy the connectivity configuration
-To have your configurations applied to your environment, you need to commit the configuration by deployment. You need to deploy the configuration to the **West US** region where the virtual networks are deployed.
+To have your configurations applied to your environment, you need to commit the configuration by deploying it. Deploy the configuration to the **East US** region, where the virtual networks are deployed.
1. Select **Deployments** under **Settings**, then select **Deploy configurations**.
virtual-network-manager Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network-manager/faq.md
Last updated 03/15/2023 -+ # Azure Virtual Network Manager FAQ
Should a regional outage occur, all configurations applied to current resources
Yes, you can choose to override and delete an existing peering already created, or allow them to coexist with those created by Azure Virtual Network Manager.
+### How do connected groups differ from virtual network peering regarding establishing connectivity between virtual networks?
+
+In Azure, VNet peering and connected groups are two methods of establishing connectivity between virtual networks (VNets). VNet peering works by creating a 1:1 mapping between each pair of peered VNets, while connected groups use a new construct that establishes connectivity without such a mapping. In a connected group, all virtual networks are connected without individual peering relationships. For example, if VNetA, VNetB, and VNetC are part of the same connected group, connectivity is enabled between each pair of VNets without a peering between them.
+ ### How can I explicitly allow Azure SQL Managed Instance traffic before having deny rules? Azure SQL Managed Instance has some network requirements. If your security admin rules can block the network requirements, you can use the below sample rules to allow SQLMI traffic with higher priority than the deny rules that can block the traffic of SQL Managed Instance.
virtual-network-manager How To Create Hub And Spoke https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network-manager/how-to-create-hub-and-spoke.md
Previously updated : 03/1/2023- Last updated : 04/20/2023+ # Create a hub and spoke topology with Azure Virtual Network Manager
-In this article, you'll learn how to create a hub and spoke network topology with Azure Virtual Network Manager. With this configuration, you select a virtual network to act as a hub and all spoke virtual networks will have bi-directional peering with only the hub by default. You also can enable direct connectivity between spoke virtual networks and enable the spoke virtual networks to use the virtual network gateway in the hub.
+In this article, you learn how to create a hub and spoke network topology with Azure Virtual Network Manager. With this configuration, you select a virtual network to act as a hub and all spoke virtual networks have bi-directional peering with only the hub by default. You also can enable direct connectivity between spoke virtual networks and enable the spoke virtual networks to use the virtual network gateway in the hub.
> [!IMPORTANT] > Azure Virtual Network Manager is generally available for Virtual Network Manager and hub and spoke connectivity configurations.
In this article, you'll learn how to create a hub and spoke network topology wit
## <a name="group"></a> Create a network group
-This section will help you create a network group containing the virtual networks you'll be using for the hub-and-spoke network topology.
+This section helps you create a network group containing the virtual networks you're using for the hub-and-spoke network topology.
1. Go to your Azure Virtual Network Manager instance. This how-to guide assumes you've created one using the [quickstart](create-virtual-network-manager-portal.md) guide.
This section will help you create a network group containing the virtual network
:::image type="content" source="./media/create-virtual-network-manager-portal/add-network-group-2.png" alt-text="Screenshot of add a network group button.":::
-1. On the *Create a network group* page, enter a **Name** for the network group. This example will use the name **myNetworkGroup**. Select **Add** to create the network group.
+1. On the *Create a network group* page, enter a **Name** for the network group. This example uses the name **myNetworkGroup**. Select **Add** to create the network group.
:::image type="content" source="./media/create-virtual-network-manager-portal/network-group-basics.png" alt-text="Screenshot of create a network group page.":::
-1. You'll see the new network group added to the *Network Groups* page.
+1. The *Network Groups* page lists the new network group.
:::image type="content" source="./media/create-virtual-network-manager-portal/network-groups-list.png" alt-text="Screenshot of network group page with list of network groups."::: ## Define network group members
To manually add the desired virtual networks for your Mesh configuration to your
:::image type="content" source="./media/create-virtual-network-manager-portal/add-virtual-networks.png" alt-text="Screenshot of add virtual networks to network group page."::: 1. To review the network group membership manually added, select **Group Members** on the *Network Group* page under **Settings**.
- :::image type="content" source="media/create-virtual-network-manager-portal/group-members-list-thumb.png" alt-text="Screenshot of group membership under Group Membership." lightbox="media/create-virtual-network-manager-portal/group-members-list.png":::
+ :::image type="content" source="media/create-virtual-network-manager-portal/group-members-list.png" alt-text="Screenshot of group membership under Group Membership." lightbox="media/create-virtual-network-manager-portal/group-members-list.png":::
## Create a hub and spoke connectivity configuration
-This section will guide you through how to create a hub-and-spoke configuration with the network group you created in the previous section.
-
-1. Select **Configuration** under *Settings*, then select **+ Add a configuration**.
-
- :::image type="content" source="./media/how-to-create-hub-and-spoke/configuration-list.png" alt-text="Screenshot of the configurations list.":::
+This section guides you through how to create a hub-and-spoke configuration with the network group you created in the previous section.
1. Select **Connectivity configuration** from the drop-down menu to begin creating a connectivity configuration. :::image type="content" source="./media/create-virtual-network-manager-portal/connectivity-configuration-dropdown.png" alt-text="Screenshot of configuration drop-down menu.":::
-1. On the *Add a connectivity configuration* page, enter, or select the following information:
+1. On the **Basics** page, enter the following information, and select **Next: Topology >**.
- :::image type="content" source="./media/how-to-create-hub-and-spoke/connectivity-configuration.png" alt-text="Screenshot of add a connectivity configuration page.":::
+ :::image type="content" source="./media/create-virtual-network-manager-portal/connectivity-configuration.png" alt-text="Screenshot of add a connectivity configuration page.":::
| Setting | Value | | - | -- | | Name | Enter a *name* for this configuration. |
- | Description | *Optional* Enter a description about what this configuration will do. |
- | Topology | Select the **Hub and spoke** topology. |
- | Hub | Select a virtual network that will act as the hub virtual network. |
- | Existing peerings | Select this checkbox if you want to remove all previously created VNet peering between virtual networks in the network group defined in this configuration. |
+ | Description | *Optional* Enter a description about what this configuration does. |
+
+1. On the **Topology** tab, select the **Hub and spoke** topology.
+ :::image type="content" source="media/how-to-create-hub-and-spoke/topology.png" alt-text="Screenshot of Add Topology screen for hub and spoke topology.":::
+
+1. Select the **Delete existing peerings** checkbox if you want to remove all VNet peerings previously created between virtual networks in the network group defined in this configuration, and then select **Select a hub**.
+1. On the **Select a hub** page, select the virtual network that acts as the hub virtual network, and then select **Select**.
+
+ :::image type="content" source="media/how-to-create-hub-and-spoke/select-hub.png" alt-text="Screenshot of Select a hub list.":::
+
1. Then select **+ Add network groups**.
-1. On the *Add network groups* page, select the network groups you want to add to this configuration. Then select **Add** to save.
+1. On the **Add network groups** page, select the network groups you want to add to this configuration. Then select **Add** to save.
-1. You'll see the following three options appear next to the network group name under *Spoke network groups*:
+1. The following three options appear next to the network group name under **Spoke network groups**:
- :::image type="content" source="./media/how-to-create-hub-and-spoke/spokes-settings.png" alt-text="Screenshot of spoke network groups settings." lightbox="./media/how-to-create-hub-and-spoke/spokes-settings-expanded.png":::
+ :::image type="content" source="./media/how-to-create-hub-and-spoke/spokes-settings.png" alt-text="Screenshot of spoke network groups settings.":::
+ * *Direct connectivity*: Select **Enable peering within network group** if you want to establish VNet peering between virtual networks in the network group of the same region. * *Global Mesh*: Select **Enable mesh connectivity across regions** if you want to establish VNet peering for all virtual networks in the network group across regions.
This section will guide you through how to create a hub-and-spoke configuration
Select the settings you want to enable for each network group.
-1. Finally, select **Add** to create the hub-and-spoke connectivity configuration.
+1. Finally, select **Review + Create > Create** to create the hub-and-spoke connectivity configuration.
## Deploy the hub and spoke configuration
-To have this configuration take effect in your environment, you'll need to deploy the configuration to the regions where your selected virtual networks are created.
+To have this configuration take effect in your environment, you need to deploy the configuration to the regions where your selected virtual networks are created.
1. Select **Deployments** under *Settings*, then select **Deploy a configuration**.
+1. On the **Deploy a configuration** page, select the following settings:
-1. On the *Deploy a configuration* select the following settings:
-
- :::image type="content" source="./media/how-to-create-hub-and-spoke/deploy.png" alt-text="Screenshot of deploy a configuration page.":::
+ :::image type="content" source="./media/create-virtual-network-manager-portal/deploy-configuration.png" alt-text="Screenshot of deploy a configuration page.":::
| Setting | Value |
| - | -- |
- | Configuration type | Select **Connectivity**. |
- | Configurations | Select the name of the configuration you created in the previous section. |
+ | Configurations | Select **Include connectivity configurations in your goal state**. |
+ | Connectivity configurations | Select the name of the configuration you created in the previous section. |
| Target regions | Select all the regions that apply to virtual networks you select for the configuration. |
-1. Select **Deploy** and then select **OK** to commit the configuration to the selected regions.
+1. Select **Next** and then select **Deploy** to complete the deployment.
+
+ :::image type="content" source="./media/create-virtual-network-manager-portal/deployment-confirmation.png" alt-text="Screenshot of deployment confirmation message.":::
+
+1. The deployment displays in the list for the selected region. The deployment of the configuration can take a few minutes to complete.
-1. The deployment of the configuration can take up to 15-20 minutes, select the **Refresh** button to check on the status of the deployment.
+ :::image type="content" source="./media/create-virtual-network-manager-portal/deployment-in-progress.png" alt-text="Screenshot of configuration deployment in progress status.":::
## Confirm deployment
To have this configuration take effect in your environment, you'll need to deplo
## Next steps

- Learn about [Security admin rules](concept-security-admins.md)
-- Learn how to block network traffic with a [SecurityAdmin configuration](how-to-block-network-traffic-portal.md).
+- Learn how to block network traffic with a [SecurityAdmin configuration](how-to-block-network-traffic-portal.md).
virtual-network-manager How To Create Mesh Network https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network-manager/how-to-create-mesh-network.md
Previously updated : 03/22/2023- Last updated : 04/20/2023+ # Create a mesh network topology with Azure Virtual Network Manager
-In this article, you'll learn how to create a mesh network topology using Azure Virtual Network Manager. With this configuration, all the virtual networks of the same region in the same network group can communicate with one another. You can enable cross region connectivity by enabling the global mesh setting in the connectivity configuration.
+In this article, you learn how to create a mesh network topology using Azure Virtual Network Manager. With this configuration, all the virtual networks of the same region in the same network group can communicate with one another. You can enable cross region connectivity by enabling the global mesh setting in the connectivity configuration.
> [!IMPORTANT] > Azure Virtual Network Manager is generally available for Virtual Network Manager and hub and spoke connectivity configurations.
In this article, you'll learn how to create a mesh network topology using Azure
## <a name="group"></a> Create a network group
-This section will help you create a network group containing the virtual networks you'll be using for the mesh network topology.
+This section helps you create a network group containing the virtual networks you're using for the mesh network topology.
1. Go to your Azure Virtual Network Manager instance. This how-to guide assumes you've created one using the [quickstart](create-virtual-network-manager-portal.md) guide.
This section will help you create a network group containing the virtual network
:::image type="content" source="./media/create-virtual-network-manager-portal/add-network-group-2.png" alt-text="Screenshot of add a network group button.":::
-1. On the *Create a network group* page, enter a **Name** for the network group. This example will use the name **myNetworkGroup**. Select **Add** to create the network group.
+1. On the *Create a network group* page, enter a **Name** for the network group. This example uses the name **myNetworkGroup**. Select **Add** to create the network group.
:::image type="content" source="./media/create-virtual-network-manager-portal/network-group-basics.png" alt-text="Screenshot of create a network group page.":::
-1. You'll see the new network group added to the *Network Groups* page.
+1. The *Network Groups* page now lists the new network group.
:::image type="content" source="./media/create-virtual-network-manager-portal/network-groups-list.png" alt-text="Screenshot of network group page with list of network groups.":::
-1. Once your network group is created, you'll add virtual networks as members. Choose one of the options: *[Manually add membership](concept-network-groups.md#static-membership)* or *[Create policy to dynamically add members](concept-network-groups.md#dynamic-membership)*.
- ## Define network group members Azure Virtual Network manager allows you two methods for adding membership to a network group. You can manually add virtual networks or use Azure Policy to dynamically add virtual networks based on conditions. This how-to covers [manually adding membership](concept-network-groups.md#static-membership). For information on defining group membership with Azure Policy, see [Define network group membership with Azure Policy](concept-network-groups.md#dynamic-membership).
To manually add the desired virtual networks for your Mesh configuration to your
:::image type="content" source="./media/create-virtual-network-manager-portal/add-virtual-networks.png" alt-text="Screenshot of add virtual networks to network group page.":::

1. To review the manually added network group membership, select **Group Members** on the *Network Group* page under **Settings**.
- :::image type="content" source="media/create-virtual-network-manager-portal/group-members-list-thumb.png" alt-text="Screenshot of group membership under Group Membership." lightbox="media/create-virtual-network-manager-portal/group-members-list.png":::
+ :::image type="content" source="media/create-virtual-network-manager-portal/group-members-list.png" alt-text="Screenshot of group membership under Group Membership." lightbox="media/create-virtual-network-manager-portal/group-members-list.png":::
## Create a mesh connectivity configuration
-This section will guide you through how to create a mesh configuration with the network group you created in the previous section.
+This section guides you through how to create a mesh configuration with the network group you created in the previous section.
1. Select **Configurations** under *Settings*, then select **+ Create**.
- :::image type="content" source="./media/create-virtual-network-manager-portal/add-configuration.png" alt-text="Screenshot of the configurations list.":::
-
-1. Select **Connectivity configuration** from the drop-down menu.
+1. Select **Connectivity configuration** from the drop-down menu to begin creating a connectivity configuration.
- :::image type="content" source="./media/create-virtual-network-manager-portal/configuration-menu.png" alt-text="Screenshot of configuration drop-down menu.":::
+ :::image type="content" source="./media/create-virtual-network-manager-portal/connectivity-configuration-dropdown.png" alt-text="Screenshot of configuration drop-down menu.":::
-1. On the *Add a connectivity configuration* page, enter the following information:
+1. On the **Basics** page, enter the following information, and select **Next: Topology >**.
- :::image type="content" source="media/how-to-create-mesh-network/add-config-name.png" alt-text="Screenshot of add a connectivity configuration page.":::
+ :::image type="content" source="./media/create-virtual-network-manager-portal/connectivity-configuration.png" alt-text="Screenshot of add a connectivity configuration page.":::
| Setting | Value |
| - | -- |
| Name | Enter a *name* for this configuration. |
- | Description | *Optional* Enter a description about what this configuration will do. |
+ | Description | *Optional* Enter a description about what this configuration does. |
-1. Select **Next: Topology >** and select **Mesh** as the topology. Then select **+ Add** under *Network groups*.
+1. On the **Topology** tab, select the **Mesh** topology if it isn't already selected, and leave **Enable mesh connectivity across regions** unchecked. Cross-region connectivity isn't required for this setup because all the virtual networks are in the same region.
- :::image type="content" source="media/how-to-create-mesh-network/add-connectivity-config.png" alt-text="Screenshot of Add a connectivity configuration page and options.":::
+ :::image type="content" source="./media/create-virtual-network-manager-portal/topology-configuration.png" alt-text="Screenshot of topology selection for network group connectivity configuration.":::
1. On the *Add network groups* page, select the network groups you want to add to this configuration. Then select **Select** to save.
This section will guide you through how to create a mesh configuration with the
## Deploy the mesh configuration
-To have this configuration take effect in your environment, you'll need to deploy the configuration to the regions where your selected virtual networks are created.
+To have this configuration take effect in your environment, you need to deploy the configuration to the regions where your selected virtual networks are created.
1. Select **Deployments** under *Settings*, then select **Deploy configuration**.
To have this configuration take effect in your environment, you'll need to deplo
| - | -- |
| Configurations | Select **Include connectivity configurations in your goal state**. |
| Connectivity Configurations | Select the name of the configuration you created in the previous section. |
- | Target regions | Select all the regions where the configuration will be applied to virtual networks. |
+ | Target regions | Select all the regions where the configuration is applied to virtual networks. |
1. Select **Next** and then select **Deploy** to commit the configuration to the selected regions.
virtual-network-manager How To Exclude Elements https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network-manager/how-to-exclude-elements.md
You only want to select virtual networks that contain **VNet-A** in the name. To
> [!IMPORTANT]
> The **basic editor** is only available during the creation of an Azure Policy. Once a policy is created, all edits will be done using JSON in the **Policies** section of virtual network manager or via Azure Policy.
>
-> When using the basic editor, your condition options will be limited through the portal experience. For complex conditions like creating a network group for VNets based on a customer-defined tag, you can used the advanced editor. Learn more about [Azure Policy definition structure](../governance/policy/concepts/definition-structure.md).
+> When using the basic editor, your condition options are limited through the portal experience. For complex conditions like creating a network group for VNets based on a [customer-defined tag](#example-3-using-custom-tag-values-with-advanced-editor), you must use the advanced editor. Learn more about [Azure Policy definition structure](../governance/policy/concepts/definition-structure.md).
## Advanced editor
virtual-network Create Vm Dual Stack Ipv6 Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/create-vm-dual-stack-ipv6-cli.md
Previously updated : 08/11/2022 Last updated : 04/19/2023 ms.devlang: azurecli # Create an Azure Virtual Machine with a dual-stack network using the Azure CLI
-In this article, you'll create a virtual machine in Azure with the Azure CLI. The virtual machine is created along with the dual-stack network as part of the procedures. When completed, the virtual machine supports IPv4 and IPv6 communication.
+In this article, you create a virtual machine in Azure with the Azure CLI. The virtual machine is created along with the dual-stack network as part of the procedures. When completed, the virtual machine supports IPv4 and IPv6 communication.
## Prerequisites
Create a resource group with [az group create](/cli/azure/group#az-group-create)
## Create a virtual network
-In this section, you'll create a dual-stack virtual network for the virtual machine.
+In this section, you create a dual-stack virtual network for the virtual machine.
Use [az network vnet create](/cli/azure/network/vnet#az-network-vnet-create) to create a virtual network.
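The article's own code block is elided in this digest. As a rough sketch of the command this step describes, the following creates a dual-stack virtual network in one call; the virtual network name, subnet name, and address ranges are illustrative assumptions, while `myResourceGroup` matches the example output shown later in the article.

```azurecli-interactive
# Create a virtual network with IPv4 and IPv6 address spaces and a dual-stack subnet.
# The VNet name, subnet name, and address ranges are example values.
az network vnet create \
    --resource-group myResourceGroup \
    --name myVNet \
    --address-prefixes 10.0.0.0/16 2404:f800:8000:122::/63 \
    --subnet-name myBackendSubnet \
    --subnet-prefixes 10.0.0.0/24 2404:f800:8000:122::/64
```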
Use [az network vnet create](/cli/azure/network/vnet#az-network-vnet-create) to
## Create public IP addresses
-You'll create two public IP addresses in this section, IPv4 and IPv6.
+You create two public IP addresses in this section, IPv4 and IPv6.
Use [az network public-ip create](/cli/azure/network/public-ip#az-network-public-ip-create) to create the public IP addresses.
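As a sketch of the two commands this step refers to, the following uses the `myPublicIP-IPv4` and `myPublicIP-IPv6` names that appear in the example output later in this article; the article's own elided code block may use additional flags.

```azurecli-interactive
# Create a standard SKU IPv4 public IP address.
az network public-ip create \
    --resource-group myResourceGroup \
    --name myPublicIP-IPv4 \
    --sku Standard \
    --version IPv4

# Create a standard SKU IPv6 public IP address.
az network public-ip create \
    --resource-group myResourceGroup \
    --name myPublicIP-IPv6 \
    --sku Standard \
    --version IPv6
```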
Use [az network public-ip create](/cli/azure/network/public-ip#az-network-public
```

## Create a network security group
-In this section, you'll create a network security group for the virtual machine and virtual network.
+In this section, you create a network security group for the virtual machine and virtual network.
Use [az network nsg create](/cli/azure/network/nsg#az-network-nsg-create) to create the network security group.
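A minimal sketch of the command, assuming the hypothetical NSG name `myNSG`:

```azurecli-interactive
# Create a network security group for the virtual machine's subnet and NIC.
az network nsg create \
    --resource-group myResourceGroup \
    --name myNSG
```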
Use [az network nsg create](/cli/azure/network/nsg#az-network-nsg-create) to cre
### Create network security group rules
-You'll create a rule to allow connections to the virtual machine on port 22 for SSH. An extra rule is created to allow all ports for outbound connections.
+You create a rule to allow connections to the virtual machine on port 22 for SSH. An extra rule is created to allow all ports for outbound connections.
Use [az network nsg rule create](/cli/azure/network/nsg/rule#az-network-nsg-rule-create) to create the network security group rules.
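A sketch of both rules, reusing the assumed `myNSG` name; the rule names and priorities below are illustrative.

```azurecli-interactive
# Allow inbound SSH on TCP port 22.
az network nsg rule create \
    --resource-group myResourceGroup \
    --nsg-name myNSG \
    --name myNSGRuleSSH \
    --priority 100 \
    --direction Inbound \
    --access Allow \
    --protocol Tcp \
    --destination-port-ranges 22

# Allow all outbound traffic on all ports.
az network nsg rule create \
    --resource-group myResourceGroup \
    --nsg-name myNSG \
    --name myNSGRuleAllOutbound \
    --priority 200 \
    --direction Outbound \
    --access Allow \
    --protocol '*' \
    --destination-port-ranges '*'
```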
Use [az network nsg rule create](/cli/azure/network/nsg/rule#az-network-nsg-rule
## Create virtual machine
-In this section, you'll create the virtual machine and its supporting resources.
+In this section, you create the virtual machine and its supporting resources.
### Create network interface
-You'll use [az network nic create](/cli/azure/network/nic#az-network-nic-create) to create the network interface for the virtual machine. The public IP addresses and the NSG created previously are associated with the NIC. The network interface is attached to the virtual network you created previously.
+You use [az network nic create](/cli/azure/network/nic#az-network-nic-create) to create the network interface for the virtual machine. The public IP addresses and the NSG created previously are associated with the NIC. The network interface is attached to the virtual network you created previously.
```azurecli-interactive
az network nic create \
Use [az network public-ip show](/cli/azure/network/public-ip#az-network-public-i
--output tsv ```
-```bash
+```azurecli-interactive
user@Azure:~$ az network public-ip show \
> --resource-group myResourceGroup \
> --name myPublicIP-IPv4 \
user@Azure:~$ az network public-ip show \
--output tsv ```
-```bash
+```azurecli-interactive
user@Azure:~$ az network public-ip show \
> --resource-group myResourceGroup \
> --name myPublicIP-IPv6 \
user@Azure:~$ az network public-ip show \
Open an SSH connection to the virtual machine by using the following command. Replace the IP address with the IP address of your virtual machine.
-```bash
+```azurecli-interactive
ssh azureuser@20.119.201.208
```
virtual-network Ip Services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/ip-services-overview.md
Previously updated : 10/01/2021 Last updated : 04/19/2023
IP services are a collection of IP address related services that enable communic
IP services consist of:

* Public IP addresses
+
* Public IP address prefixes
+
* Custom IP address prefixes (BYOIP)
+
* Private IP addresses
+
* Routing preference
+
* Routing preference unmetered

## Public IP addresses
Public IPs are used by internet resources to communicate inbound to resources in
A public IP address is a resource with its own properties. Some of the resources that you can associate with a public IP address are:

* Virtual machine network interfaces
+
* Internet-facing load balancers
+
* Virtual Network gateways (VPN/ER)
+
* NAT gateways
+
* Application gateways
+
* Azure Firewall
+
* Bastion Host

For more information about public IP addresses, see [Public IP addresses](./public-ip-addresses.md) and [Create, change, or delete an Azure public IP address](./virtual-network-public-ip-address.md)
Public IP prefixes are reserved ranges of IP addresses in Azure. Public IP addre
The following public IP prefix sizes are available:

- /28 (IPv4) or /124 (IPv6) = 16 addresses
+
- /29 (IPv4) or /125 (IPv6) = 8 addresses
+
- /30 (IPv4) or /126 (IPv6) = 4 addresses
+
- /31 (IPv4) or /127 (IPv6) = 2 addresses

Prefix size is specified as a Classless Inter-Domain Routing (CIDR) mask size.
Private IPs allow communication between resources in Azure. Azure assigns privat
Some of the resources that you can associate a private IP address with are:

* Virtual machines
+
* Internal load balancers
+
* Application gateways
+
* Private endpoints

For more information about private IP addresses, see [Private IP addresses](./private-ip-addresses.md).
For more information about routing preference unmetered, see [What is Routing Pr
Get started creating IP services resources:

- [Create a public IP address using the Azure portal](./create-public-ip-portal.md).
+
- [Create a public IP address prefix using the Azure portal](./create-public-ip-prefix-portal.md).
+
- [Configure a private IP address for a VM using the Azure portal](./virtual-networks-static-private-ip-arm-pportal.md).
+
- [Configure routing preference for a public IP address using the Azure portal](./routing-preference-portal.md).
virtual-network Public Ip Address Prefix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/public-ip-address-prefix.md
Previously updated : 05/20/2021 Last updated : 04/19/2023
You create a public IP address prefix in an Azure region and subscription by spe
## Benefits

- Creation of static public IP address resources from a known range. Addresses that you create from the prefix can be assigned to any Azure resource that accepts a standard SKU public IP address.
+
- When you delete the individual public IPs, they're *returned* to your reserved range for later reuse. The IP addresses in your public IP address prefix are reserved for your use until you delete your prefix.
+
- You can see which IP addresses are given out and which are still available within the prefix range.

## Prefix sizes
You create a public IP address prefix in an Azure region and subscription by spe
The following public IP prefix sizes are available:

- /28 (IPv4) or /124 (IPv6) = 16 addresses
+
- /29 (IPv4) or /125 (IPv6) = 8 addresses
+
- /30 (IPv4) or /126 (IPv6) = 4 addresses
+
- /31 (IPv4) or /127 (IPv6) = 2 addresses

Prefix size is specified as a Classless Inter-Domain Routing (CIDR) mask size.
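To make the sizes concrete, here's a minimal sketch of creating a /28 IPv4 prefix and then allocating a single static public IP from it; the resource names are hypothetical.

```azurecli-interactive
# Create a /28 public IP prefix, which reserves 16 contiguous IPv4 addresses.
az network public-ip prefix create \
    --resource-group myResourceGroup \
    --name myPublicIpPrefix \
    --length 28 \
    --version IPv4

# Allocate one standard SKU static public IP address from the prefix (name or resource ID).
az network public-ip create \
    --resource-group myResourceGroup \
    --name myPublicIpFromPrefix \
    --sku Standard \
    --public-ip-prefix myPublicIpPrefix
```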
You can associate the following resources to a static public IP address from a p
| Azure Firewall | You can use a public IP from a prefix for outbound SNAT. All outbound virtual network traffic is translated to the [Azure Firewall](../../firewall/overview.md?toc=%2fazure%2fvirtual-network%2ftoc.json) public IP. | To associate an IP from a prefix to your firewall: </br> 1. [Create a prefix.](manage-public-ip-address-prefix.md) </br> 2. [Create an IP from the prefix.](manage-public-ip-address-prefix.md) </br> 3. When you [deploy the Azure firewall](../../firewall/tutorial-firewall-deploy-portal.md?toc=%2fazure%2fvirtual-network%2ftoc.json#deploy-the-firewall), be sure to select the IP you previously gave from the prefix.| | VPN Gateway (AZ SKU), Application Gateway v2, NAT Gateway | You can use a public IP from a prefix for your gateway | To associate an IP from a prefix to your gateway: </br> 1. [Create a prefix.](manage-public-ip-address-prefix.md) </br> 2. [Create an IP from the prefix.](manage-public-ip-address-prefix.md) </br> 3. When you deploy the [VPN Gateway](../../vpn-gateway/tutorial-create-gateway-portal.md), [Application Gateway](../../application-gateway/quick-create-portal.md#create-an-application-gateway), or [NAT Gateway](../nat-gateway/quickstart-create-nat-gateway-portal.md), be sure to select the IP you previously gave from the prefix.|
-Additionally, the Public IP address prefix resource can be utilized directly by certain resources:
+The following resources utilize a public IP address prefix:
Resource|Scenario|Steps|
||||
-|Virtual machine scale sets | You can use a public IP address prefix to generate instance-level IPs in a virtual machine scale set, though individual public IP resources won't be created. | Use a [template](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.compute/vmss-with-public-ip-prefix) with instructions to use this prefix for public IP configuration as part of the scale set creation. (Note that the zonal properties of the prefix will be passed to the instance IPs, though they will not show in the output; see [Networking for Virtual Machine Scale sets](../../virtual-machine-scale-sets/virtual-machine-scale-sets-networking.md#public-ipv4-per-virtual-machine) for more information.) |
+|Virtual Machine Scale Sets | You can use a public IP address prefix to generate instance-level IPs in a Virtual Machine Scale Set. Individual public IP resources aren't created. | Use a [template](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.compute/vmss-with-public-ip-prefix) with instructions to use this prefix for public IP configuration as part of the scale set creation. (Zonal properties of the prefix are passed to the instance IPs and aren't shown in the output. For more information, see [Networking for Virtual Machine Scale Sets](../../virtual-machine-scale-sets/virtual-machine-scale-sets-networking.md#public-ipv4-per-virtual-machine)) |
| Standard load balancers | A public IP address prefix can be used to scale a load balancer by [using all IPs in the range for outbound connections](../../load-balancer/outbound-rules.md#scale). | To associate a prefix to your load balancer: </br> 1. [Create a prefix.](manage-public-ip-address-prefix.md) </br> 2. When creating the load balancer, select the IP prefix as associated with the frontend of your load balancer. |
-| NAT Gateway | A public IP prefix can be used to scale a NAT gateway by using the public IPs in the prefix for outbound connections. | To associate a prefix to your NAT Gateway: </br> 1. [Create a prefix.](manage-public-ip-address-prefix.md) </br> 2. When creating the NAT Gateway, select the IP prefix as the Outbound IP. (Note that a NAT Gateway can have no more than 16 IPs in total, so a public IP prefix of /28 length is the maximum size that can be used.) |
+| NAT Gateway | A public IP prefix can be used to scale a NAT gateway by using the public IPs in the prefix for outbound connections. | To associate a prefix to your NAT Gateway: </br> 1. [Create a prefix.](manage-public-ip-address-prefix.md) </br> 2. When creating the NAT Gateway, select the IP prefix as the Outbound IP. (A NAT Gateway can have no more than 16 IPs in total. A public IP prefix of /28 length is the maximum size that can be used.) |
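As an illustration of the NAT Gateway row above, the prefix can be attached directly when the gateway is created. This sketch uses hypothetical names; remember that a /28 prefix (16 addresses) is the largest a NAT gateway can use.

```azurecli-interactive
# Create a NAT gateway that uses every address in an existing /28 prefix for outbound SNAT.
az network nat gateway create \
    --resource-group myResourceGroup \
    --name myNATgateway \
    --public-ip-prefixes myPublicIpPrefix \
    --idle-timeout 4
```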
## Limitations - You can't specify the set of IP addresses for the prefix (though you can specify which IP you want from the prefix). Azure gives the IP addresses for the prefix, based on the size that you specify. Additionally, all public IP addresses created from the prefix must exist in the same Azure region and subscription as the prefix. Addresses must be assigned to resources in the same region and subscription.+ - You can create a prefix of up to 16 IP addresses. Review [Network limits increase requests](../../azure-portal/supportability/networking-quota-requests.md) and [Azure limits](../../azure-resource-manager/management/azure-subscription-service-limits.md?toc=%2fazure%2fvirtual-network%2ftoc.json#azure-resource-manager-virtual-networking-limits) for more information.-- The size of the range cannot be modified after the prefix has been created.+
+- The size of the range can't be modified after the prefix has been created.
+ - Only static public IP addresses created with the standard SKU can be assigned from the prefix's range. To learn more about public IP address SKUs, see [public IP address](public-ip-addresses.md#public-ip-addresses).+ - Addresses from the range can only be assigned to Azure Resource Manager resources. Addresses can't be assigned to resources in the classic deployment model.+ - You can't delete a prefix if any addresses within it are assigned to public IP address resources associated to a resource. Dissociate all public IP address resources that are assigned IP addresses from the prefix first. For more information on disassociating public IP addresses, see [Manage public IP addresses](virtual-network-public-ip-address.md#view-modify-settings-for-or-delete-a-public-ip-address).-- IPv6 is supported on basic public IPs with **dynamic** allocation only. Dynamic allocation means the IPv6 address will change if you delete and redeploy your resource in Azure. +
+- IPv6 is supported on basic public IPs with **dynamic** allocation only. Dynamic allocation means the IPv6 address changes if you delete and redeploy your resource in Azure.
+ - Standard IPv6 public IPs support static (reserved) allocation. + - Standard internal load balancers support dynamic allocation from within the subnet to which they're assigned. ## Pricing
virtual-network Virtual Network Multiple Ip Addresses Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/virtual-network-multiple-ip-addresses-cli.md
Previously updated : 12/13/2022 Last updated : 04/19/2023
Create a resource group with [az group create](/cli/azure/group#az-group-create)
## Create a virtual network
-In this section, you'll create a virtual network for the virtual machine.
+In this section, you create a virtual network for the virtual machine.
Use [az network vnet create](/cli/azure/network/vnet#az-network-vnet-create) to create a virtual network.
Use [az network public-ip create](/cli/azure/network/public-ip#az-network-public
## Create a network security group
-In this section, you'll create a network security group for the virtual machine and virtual network.
+In this section, you create a network security group for the virtual machine and virtual network.
Use [az network nsg create](/cli/azure/network/nsg#az-network-nsg-create) to create the network security group.
Use [az network nsg create](/cli/azure/network/nsg#az-network-nsg-create) to cre
### Create network security group rules
-You'll create a rule to allow connections to the virtual machine on port 22 for SSH.
+You create a rule to allow connections to the virtual machine on port 22 for SSH.
Use [az network nsg rule create](/cli/azure/network/nsg/rule#az-network-nsg-rule-create) to create the network security group rules.
Use [az network nsg rule create](/cli/azure/network/nsg/rule#az-network-nsg-rule
``` ## Create a network interface
-You'll use [az network nic create](/cli/azure/network/nic#az-network-nic-create) to create the network interface for the virtual machine. The public IP addresses and the NSG created previously are associated with the NIC. The network interface is attached to the virtual network you created previously.
+You use [az network nic create](/cli/azure/network/nic#az-network-nic-create) to create the network interface for the virtual machine. The public IP addresses and the NSG created previously are associated with the NIC. The network interface is attached to the virtual network you created previously.
```azurecli-interactive
az network nic create \
virtual-network Manage Nat Gateway https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/nat-gateway/manage-nat-gateway.md
Title: Manage a NAT gateway-+ description: Learn how to create and remove a NAT gateway resource from a virtual network subnet. Add and remove public IP addresses and prefixes used for outbound connectivity.
az network nat gateway update \
To learn more about Azure Virtual Network NAT and its capabilities, see the following articles: -- [What is Azure Virtual Network NAT?](nat-overview.md)
+- [What is Azure NAT Gateway?](nat-overview.md)
- [NAT gateway and availability zones](nat-availability-zones.md) - [Design virtual networks with NAT gateway](nat-gateway-resource.md)
virtual-network Nat Availability Zones https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/nat-gateway/nat-availability-zones.md
Title: NAT gateway and availability zones-+ description: Key concepts and design guidance on using NAT gateway with availability zones.
If your scenario requires inbound endpoints, you have two options:
## Next steps * Learn more about [Azure regions and availability zones](../../availability-zones/az-overview.md)
-* Learn more about [Azure Virtual network NAT](./nat-overview.md)
-* Learn more about [Azure Load balancer](../../load-balancer/load-balancer-overview.md)
+* Learn more about [Azure NAT Gateway](./nat-overview.md)
+* Learn more about [Azure Load balancer](../../load-balancer/load-balancer-overview.md)
virtual-network Nat Gateway Resource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/nat-gateway/nat-gateway-resource.md
Title: Design virtual networks with NAT gateway-+ description: Learn how to design virtual networks that use Network Address Translation (NAT) gateway resources.
# Design virtual networks with NAT gateway
-NAT gateway provides outbound internet connectivity for one or more subnets of a virtual network. Once NAT gateway is associated to a subnet, NAT provides source network address translation (SNAT) for that subnet. NAT gateway specifies which static IP addresses virtual machines use when creating outbound flows. Static IP addresses come from public IP addresses, public IP prefixes, or both. If a public IP prefix is used, all IP addresses of the entire public IP prefix are consumed by a NAT gateway. A NAT gateway can use up to 16 static IP addresses from either.
+NAT gateway provides outbound internet connectivity for one or more subnets of a virtual network. Once NAT gateway is associated to a subnet, NAT gateway provides source network address translation (SNAT) for that subnet. NAT gateway specifies which static IP addresses virtual machines use when creating outbound flows. Static IP addresses come from public IP addresses, public IP prefixes, or both. If a public IP prefix is used, all IP addresses of the entire public IP prefix are consumed by a NAT gateway. A NAT gateway can use up to 16 static IP addresses from either.
:::image type="content" source="./media/nat-overview/flow-direction1.png" alt-text="Diagram of a NAT gateway resource with virtual machines and a Virtual Machine Scale Set.":::
-*Figure: Virtual Network NAT for outbound to internet*
+*Figure: NAT gateway for outbound to internet*
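To ground the figure, the subnet association itself is a single operation. This sketch assumes an existing NAT gateway named `myNATgateway` and a virtual network `myVNet` with subnet `mySubnet`; all names are hypothetical.

```azurecli-interactive
# Attach an existing NAT gateway to a subnet so the subnet's outbound flows are SNATed through it.
az network vnet subnet update \
    --resource-group myResourceGroup \
    --vnet-name myVNet \
    --name mySubnet \
    --nat-gateway myNATgateway
```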
## How to deploy NAT
The following examples demonstrate co-existence of a load balancer or instance-l
:::image type="content" source="./media/nat-overview/flow-direction2.png" alt-text="Diagram of a NAT gateway resource that consumes all IP addresses for a public IP prefix. The NAT gateway directs traffic for two subnets of VMs and a Virtual Machine Scale Set.":::
-*Figure: Virtual Network NAT and VM with an instance level public IP*
+*Figure: NAT gateway and VM with an instance level public IP*
| Direction | Resource |
|::|::|
VM will use NAT gateway for outbound. Inbound originated isn't affected.
:::image type="content" source="./media/nat-overview/flow-direction3.png" alt-text="Diagram that depicts a NAT gateway that supports outbound traffic to the internet from a virtual network and inbound traffic with a public load balancer.":::
-*Figure: Virtual Network NAT and VM with a standard public load balancer*
+*Figure: NAT gateway and VM with a standard public load balancer*
| Direction | Resource |
|::|::|
Any outbound configuration from a load-balancing rule or outbound rules is super
### Monitor outbound network traffic with NSG flow logs
-A network security group allows you to filter inbound and outbound traffic to and from a virtual machine. To monitor outbound traffic flowing from NAT, you can enable NSG flow logs.
+A network security group allows you to filter inbound and outbound traffic to and from a virtual machine. To monitor outbound traffic flowing from the virtual machine behind your NAT gateway, enable NSG flow logs.
To learn more about NSG flow logs, see [NSG Flow Log Overview](../../network-watcher/network-watcher-nsg-flow-logging-overview.md).
Review the following section for details and the [troubleshooting article](./tro
## Scalability
-Scaling NAT gateway is primarily a function of managing the shared, available SNAT port inventory. NAT needs sufficient SNAT port inventory for expected peak outbound flows for all subnets that are attached to a NAT gateway. You can use public IP addresses, public IP prefixes, or both to create SNAT port inventory.
+Scaling NAT gateway is primarily a function of managing the shared, available SNAT port inventory. NAT gateway needs sufficient SNAT port inventory for expected peak outbound flows for all subnets that are attached to a NAT gateway. You can use public IP addresses, public IP prefixes, or both to create SNAT port inventory.
A single NAT gateway can scale up to 16 IP addresses. Each NAT gateway public IP address provides 64,512 SNAT ports to make outbound connections. NAT gateway can scale up to over 1 million SNAT ports. TCP and UDP are separate SNAT port inventories and are unrelated to NAT gateway.
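For reference, the scale figure is simple arithmetic: 16 public IP addresses × 64,512 SNAT ports per IP address = 1,032,192 SNAT ports, which is where the figure of over 1 million comes from.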
NAT gateway dynamically allocates SNAT ports across a subnet's private resources
:::image type="content" source="./media/nat-overview/lb-vnnat-chart.png" alt-text="Diagram that depicts the inventory of all available SNAT ports used by any VM on subnets configured with NAT.":::
-*Figure: Virtual Network NAT on-demand outbound SNAT*
+*Figure: NAT gateway on-demand outbound SNAT*
Pre-allocation of SNAT ports to each virtual machine is required for other SNAT methods. This pre-allocation of SNAT ports can cause SNAT port exhaustion on some virtual machines while others still have available SNAT ports for connecting outbound. With NAT gateway, pre-allocation of SNAT ports isn't required, which means SNAT ports aren't left unused by VMs not actively needing them.
Design recommendations for configuring timers:
## Next steps -- Review [virtual network NAT](nat-overview.md).
+- Review [Azure NAT Gateway](nat-overview.md).
- Learn about [metrics and alerts for NAT gateway](nat-metrics.md).
virtual-network Nat Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/nat-gateway/nat-metrics.md
Title: Metrics and alerts for Azure Virtual Network NAT
+ Title: Metrics and alerts for Azure NAT Gateway
-description: Understand Azure Monitor metrics and alerts available for Virtual Network NAT.
+description: Understand Azure Monitor metrics and alerts available for NAT gateway.
Last updated 04/12/2022
-# Azure Virtual Network NAT metrics and alerts
+# Azure NAT Gateway metrics and alerts
This article provides an overview of all NAT gateway metrics and diagnostic capabilities. This article provides general guidance on how to use metrics and alerts to monitor, manage, and [troubleshoot](troubleshoot-nat.md) your NAT gateway resource.
-Azure Virtual Network NAT gateway provides the following diagnostic capabilities:
+Azure NAT Gateway provides the following diagnostic capabilities:
- Multi-dimensional metrics and alerts through Azure Monitor. You can use these metrics to monitor and manage your NAT gateway and to assist you in troubleshooting issues.
Azure Virtual Network NAT gateway provides the following diagnostic capabilities
:::image type="content" source="./media/nat-overview/flow-direction1.png" alt-text="Diagram of a NAT gateway that consumes all IP addresses for a public IP prefix. The NAT gateway directs traffic to and from two subnets of VMs and a virtual machine scale set.":::
-*Figure: Virtual Network NAT for outbound to Internet*
+*Figure: Azure NAT Gateway for outbound to Internet*
## Metrics overview
For more information on what each metric is showing you and how to analyze these
## Next steps
-* Learn about [Virtual Network NAT](nat-overview.md)
+* Learn about [Azure NAT Gateway](nat-overview.md)
* Learn about [NAT gateway resource](nat-gateway-resource.md) * Learn about [Azure Monitor](../../azure-monitor/overview.md) * Learn about [troubleshooting NAT gateway resources](troubleshoot-nat.md).
virtual-network Nat Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/nat-gateway/nat-overview.md
Azure NAT Gateway is a software defined networking service. A NAT gateway won't
 * NAT gateway allows flows to be created from the virtual network to the services outside your virtual network. Return traffic from the internet is only allowed in response to an active flow. Services outside your virtual network can't initiate an inbound connection through NAT gateway.
- * To migrate outbound access to a NAT gateway from default outbound access or load balancer outbound rules, see [Migrate outbound access to Azure Virtual Network NAT](./tutorial-migrate-outbound-nat.md).
+ * To migrate outbound access to a NAT gateway from default outbound access or load balancer outbound rules, see [Migrate outbound access to Azure NAT Gateway](./tutorial-migrate-outbound-nat.md).
* NAT gateway takes precedence over other outbound scenarios (including Load balancer and instance-level public IP addresses) and replaces the default Internet destination of a subnet.
virtual-network Quickstart Create Nat Gateway Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/nat-gateway/quickstart-create-nat-gateway-bicep.md
Title: 'Create a NAT gateway - Bicep'-+ description: This quickstart shows how to create a NAT gateway using Bicep.-+
# Quickstart: Create a NAT gateway - Bicep
-Get started with Virtual Network NAT using Bicep. This Bicep file deploys a virtual network, a NAT gateway resource, and Ubuntu virtual machine. The Ubuntu virtual machine is deployed to a subnet that is associated with the NAT gateway resource.
+Get started with Azure NAT Gateway using Bicep. This Bicep file deploys a virtual network, a NAT gateway resource, and an Ubuntu virtual machine. The Ubuntu virtual machine is deployed to a subnet that is associated with the NAT gateway resource.
[!INCLUDE [About Bicep](../../../includes/resource-manager-quickstart-bicep-introduction.md)]
In this quickstart, you created a:
The virtual machine is deployed to a virtual network subnet associated with the NAT gateway.
-To learn more about Virtual Network NAT and Bicep, continue to the articles below.
+To learn more about Azure NAT Gateway and Bicep, continue to the articles below.
-* Read an [Overview of Virtual Network NAT](nat-overview.md)
+* Read an [Overview of Azure NAT Gateway](nat-overview.md)
* Read about the [NAT Gateway resource](nat-gateway-resource.md) * Learn more about [Bicep](../../azure-resource-manager/bicep/overview.md)
virtual-network Quickstart Create Nat Gateway Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/nat-gateway/quickstart-create-nat-gateway-cli.md
Title: 'Quickstart: Create a NAT gateway - Azure CLI'-+ description: Get started creating a NAT gateway using the Azure CLI.
# Quickstart: Create a NAT gateway using the Azure CLI
-This quickstart shows you how to use the Azure Virtual Network NAT service. You'll create a NAT gateway to provide outbound connectivity for a virtual machine in Azure.
+This quickstart shows you how to use the Azure NAT Gateway service. You'll create a NAT gateway to provide outbound connectivity for a virtual machine in Azure.
[!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)]
the virtual network, virtual machine, and NAT gateway with the following CLI com
## Next steps
-For more information on Azure Virtual Network NAT, see:
+For more information on Azure NAT Gateway, see:
> [!div class="nextstepaction"] > [Virtual Network NAT overview](nat-overview.md)
virtual-network Quickstart Create Nat Gateway Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/nat-gateway/quickstart-create-nat-gateway-portal.md
Title: 'Quickstart: Create a NAT gateway - Azure portal'-+ description: This quickstart shows how to create a NAT gateway by using the Azure portal.
# Quickstart: Create a NAT gateway using the Azure portal
-This quickstart shows you how to use the Azure Virtual Network NAT service. You'll create a NAT gateway to provide outbound connectivity for a virtual machine in Azure.
+This quickstart shows you how to use the Azure NAT Gateway service. You'll create a NAT gateway to provide outbound connectivity for a virtual machine in Azure.
## Prerequisites
the virtual network, virtual machine, and NAT gateway with the following steps:
## Next steps
-For more information on Azure Virtual Network NAT, see:
+For more information on Azure NAT Gateway, see:
> [!div class="nextstepaction"]
-> [Virtual Network NAT overview](nat-overview.md)
+> [Azure NAT Gateway overview](nat-overview.md)
virtual-network Quickstart Create Nat Gateway Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/nat-gateway/quickstart-create-nat-gateway-powershell.md
Title: 'Quickstart: Create a NAT gateway - PowerShell'-+ description: Get started creating a NAT gateway using Azure PowerShell.
# Quickstart: Create a NAT gateway using Azure PowerShell
-This quickstart shows you how to use the Azure Virtual Network NAT service. You'll create a NAT gateway to provide outbound connectivity for a virtual machine in Azure.
+This quickstart shows you how to use the Azure NAT Gateway service. You'll create a NAT gateway to provide outbound connectivity for a virtual machine in Azure.
## Prerequisites
Remove-AzResourceGroup -Name 'myResourceGroupNAT' -Force
## Next steps
-For more information on Azure Virtual Network NAT, see:
+For more information on Azure NAT Gateway, see:
> [!div class="nextstepaction"]
-> [Virtual Network NAT overview](nat-overview.md)
+> [Azure NAT Gateway overview](nat-overview.md)
virtual-network Quickstart Create Nat Gateway Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/nat-gateway/quickstart-create-nat-gateway-template.md
Title: 'Create a NAT gateway - Resource Manager Template'-+ description: This quickstart shows how to create a NAT gateway by using the Azure Resource Manager template (ARM template).-+
# Quickstart: Create a NAT gateway - ARM template
-Get started with Virtual Network NAT by using an Azure Resource Manager template (ARM template). This template deploys a virtual network, a NAT gateway resource, and Ubuntu virtual machine. The Ubuntu virtual machine is deployed to a subnet that is associated with the NAT gateway resource.
+Get started with Azure NAT Gateway by using an Azure Resource Manager template (ARM template). This template deploys a virtual network, a NAT gateway resource, and an Ubuntu virtual machine. The Ubuntu virtual machine is deployed to a subnet that is associated with the NAT gateway resource.
[!INCLUDE [About Azure Resource Manager](../../../includes/resource-manager-quickstart-introduction.md)]
In this quickstart, you created a:
The virtual machine is deployed to a virtual network subnet associated with the NAT gateway.
-To learn more about Virtual Network NAT and Azure Resource Manager, continue to the articles below.
+To learn more about Azure NAT Gateway and Azure Resource Manager, continue to the articles below.
-* Read an [Overview of Virtual Network NAT](nat-overview.md)
+* Read an [Overview of Azure NAT Gateway](nat-overview.md)
* Read about the [NAT Gateway resource](nat-gateway-resource.md) * Learn more about [Azure Resource Manager](../../azure-resource-manager/management/overview.md)
virtual-network Resource Health https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/nat-gateway/resource-health.md
Title: Azure Virtual Network NAT Resource Health
+ Title: Azure NAT Gateway Resource Health
-description: Understand how to use resource health for Virtual Network NAT.
+description: Understand how to use resource health for NAT gateway.
-# Customer intent: As an IT administrator, I want to understand how to use resource health to monitor Virtual Network NAT.
+# Customer intent: As an IT administrator, I want to understand how to use resource health to monitor NAT gateway.
Last updated 04/25/2022
-# Azure Virtual Network NAT Resource Health
+# Azure NAT Gateway Resource Health
This article provides guidance on how to use Azure Resource Health to monitor and troubleshoot connectivity issues with your NAT gateway resource. Resource health provides an automatic check to keep you informed on the current availability of your NAT gateway.
To view the health of your NAT gateway resource:
## Next steps -- Learn about [Virtual Network NAT](./nat-overview.md)
+- Learn about [Azure NAT Gateway](./nat-overview.md)
- Learn about [metrics and alerts for NAT gateway](./nat-metrics.md) - Learn about [troubleshooting NAT gateway resources](./troubleshoot-nat.md)-- Learn about [Azure resource health](../../service-health/resource-health-overview.md)
+- Learn about [Azure resource health](../../service-health/resource-health-overview.md)
virtual-network Troubleshoot Nat And Azure Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/nat-gateway/troubleshoot-nat-and-azure-services.md
Title: Troubleshoot outbound connectivity with Azure services
-description: Troubleshoot issues with Virtual Network NAT and Azure services.
+description: Troubleshoot issues with NAT gateway and Azure services.
Update your idle timeout timer configuration on your User-Assigned NAT gateway w
### How NAT gateway integration with Azure Firewall works
-Azure Firewall can provide outbound connectivity to the internet from virtual networks. Azure Firewall provides only 2,496 SNAT ports per public IP address. While Azure Firewall can be associated with up to 250 public IP addresses to handle egress traffic, often, customers require much fewer public IP addresses for connecting outbound due to various architectural requirements and limitations by destination endpoints for the number of public IP addresses they can allowlist. One method by which to get around this allowlist IP limitation and to also reduce the risk of SNAT port exhaustion is to use NAT gateway in the same subnet with Azure Firewall. To learn how to set up NAT gateway in an Azure Firewall subnet, see [Scale SNAT ports with Azure Virtual Network NAT](../../firewall/integrate-with-nat-gateway.md).
+Azure Firewall can provide outbound connectivity to the internet from virtual networks. Azure Firewall provides only 2,496 SNAT ports per public IP address. While Azure Firewall can be associated with up to 250 public IP addresses to handle egress traffic, often, customers require much fewer public IP addresses for connecting outbound due to various architectural requirements and limitations by destination endpoints for the number of public IP addresses they can allowlist. One method by which to get around this allowlist IP limitation and to also reduce the risk of SNAT port exhaustion is to use NAT gateway in the same subnet with Azure Firewall. To learn how to set up NAT gateway in an Azure Firewall subnet, see [Scale SNAT ports with Azure NAT Gateway](../../firewall/integrate-with-nat-gateway.md).
## Azure Databricks
We're always looking to improve the experience of our customers. If you're exper
To learn more about NAT gateway, see:
-* [Virtual Network NAT](./nat-overview.md)
+* [Azure NAT Gateway](./nat-overview.md)
* [NAT gateway resource](./nat-gateway-resource.md)
-* [Metrics and alerts for NAT gateway resources](./nat-metrics.md)
+* [Metrics and alerts for NAT gateway resources](./nat-metrics.md)
virtual-network Troubleshoot Nat Connectivity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/nat-gateway/troubleshoot-nat-connectivity.md
Title: Troubleshoot Azure Virtual Network NAT connectivity
+ Title: Troubleshoot Azure NAT Gateway connectivity
-description: Troubleshoot connectivity issues with Virtual Network NAT.
+description: Troubleshoot connectivity issues with NAT gateway.
Last updated 08/29/2022
-# Troubleshoot Azure Virtual Network NAT connectivity
+# Troubleshoot Azure NAT Gateway connectivity
This article provides guidance on how to troubleshoot and resolve common outbound connectivity issues with your NAT gateway resource. This article also provides best practices on how to design applications to use outbound connections efficiently.
We're always looking to improve the experience of our customers. If you're exper
To learn more about NAT gateway, see:
-* [Virtual Network NAT](./nat-overview.md)
+* [Azure NAT Gateway](./nat-overview.md)
* [NAT gateway resource](./nat-gateway-resource.md) * [Metrics and alerts for NAT gateway resources](./nat-metrics.md)
virtual-network Tutorial Dual Stack Outbound Nat Load Balancer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/nat-gateway/tutorial-dual-stack-outbound-nat-load-balancer.md
Title: 'Tutorial: Configure dual-stack outbound connectivity with a NAT gateway and a public load balancer'-+ description: Learn how to configure outbound connectivity for a dual stack network with a NAT gateway and a public load balancer.
virtual-network Tutorial Hub Spoke Nat Firewall https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/nat-gateway/tutorial-hub-spoke-nat-firewall.md
Title: 'Tutorial: Integrate NAT gateway with Azure Firewall in a hub and spoke network'-+ description: Learn how to integrate a NAT gateway and Azure Firewall in a hub and spoke network.
If you're not going to continue to use this application, delete the created reso
Advance to the next article to learn how to integrate a NAT gateway with an Azure Load Balancer: > [!div class="nextstepaction"]
-> [Integrate NAT gateway with an internal load balancer](tutorial-nat-gateway-load-balancer-internal-portal.md)
+> [Integrate NAT gateway with an internal load balancer](tutorial-nat-gateway-load-balancer-internal-portal.md)
virtual-network Tutorial Hub Spoke Route Nat https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/nat-gateway/tutorial-hub-spoke-route-nat.md
Title: 'Tutorial: Use a NAT gateway with a hub and spoke network'-+ description: Learn how to integrate a NAT gateway into a hub and spoke network with a network virtual appliance.
A hub and spoke network is one of the building blocks of a highly available multiple location network infrastructure. The most common deployment of a hub and spoke network is done with the intention of routing all inter-spoke and outbound internet traffic through the central hub. The purpose is to inspect all of the traffic traversing the network with a Network Virtual Appliance (NVA) for security scanning and packet inspection.
-For outbound traffic to the internet, the network virtual appliance would typically have one network interface with an assigned public IP address. The NVA after inspecting the outbound traffic forwards the traffic out the public interface and to the internet. Azure Virtual Network NAT eliminates the need for the public IP address assigned to the NVA. Associating a NAT gateway with the public subnet of the NVA changes the routing for the public interface to route all outbound internet traffic through the NAT gateway. The elimination of the public IP address increases security and allows for the scaling of outbound source network address translation (SNAT) with multiple public IP addresses and or public IP prefixes.
+For outbound traffic to the internet, the network virtual appliance would typically have one network interface with an assigned public IP address. After inspecting the outbound traffic, the NVA forwards the traffic out of the public interface and to the internet. Azure NAT Gateway eliminates the need for the public IP address assigned to the NVA. Associating a NAT gateway with the public subnet of the NVA changes the routing for the public interface to route all outbound internet traffic through the NAT gateway. The elimination of the public IP address increases security and allows for the scaling of outbound source network address translation (SNAT) with multiple public IP addresses and/or public IP prefixes.
> [!IMPORTANT] > The NVA used in this article is for demonstration purposes only and is simulated with an Ubuntu virtual machine. The solution doesn't include a load balancer for high availability of the NVA deployment. Replace the Ubuntu virtual machine in this article with an NVA of your choice. Consult the vendor of the chosen NVA for routing and configuration instructions. A load balancer and availability zones is recommended for a highly available NVA infrastructure.
If you're not going to continue to use this application, delete the created reso
Advance to the next article to learn how to use an Azure Gateway Load Balancer for highly available network virtual appliances: > [!div class="nextstepaction"]
-> [Gateway Load Balancer](../../load-balancer/gateway-overview.md)
+> [Gateway Load Balancer](../../load-balancer/gateway-overview.md)
virtual-network Tutorial Migrate Ilip Nat https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/nat-gateway/tutorial-migrate-ilip-nat.md
Title: 'Tutorial: Migrate a virtual machine public IP address to NAT gateway'-
-description: Learn how to migrate your virtual machine public IP to a Virtual Network NAT gateway.
+
+description: Learn how to migrate your virtual machine public IP to a NAT gateway.
Last updated 5/25/2022
-# Tutorial: Migrate a virtual machine public IP address to Azure Virtual Network NAT
+# Tutorial: Migrate a virtual machine public IP address to Azure NAT Gateway
In this article, you'll learn how to migrate your virtual machine's public IP address to a NAT gateway. You'll learn how to remove the IP address from the virtual machine. You'll reuse the IP address from the virtual machine for the NAT gateway.
-Azure Virtual Network NAT is the recommended method for outbound connectivity. A NAT gateway is a fully managed and highly resilient Network Address Translation (NAT) service. A NAT gateway doesn't have the same limitations of SNAT port exhaustion as default outbound access. A NAT gateway replaces the need for a virtual machine to have a public IP address to have outbound connectivity.
+Azure NAT Gateway is the recommended method for outbound connectivity. Azure NAT Gateway is a fully managed and highly resilient Network Address Translation (NAT) service. A NAT gateway doesn't have the same limitations of SNAT port exhaustion as default outbound access. A NAT gateway replaces the need for a virtual machine to have a public IP address to have outbound connectivity.
-For more information about Azure Virtual Network NAT, see [What is Azure Virtual Network NAT](nat-overview.md)
+For more information about Azure NAT Gateway, see [What is Azure NAT Gateway](nat-overview.md)
In this tutorial, you learn how to:
In this section, you'll learn how to remove the public IP address from the virtu
### (Optional) Upgrade IP address
-The NAT gateway resource in Azure Virtual Network NAT requires a standard SKU public IP address. In this section, you'll upgrade the IP you removed from the virtual machine in the previous section. If the IP address you removed is already a standard SKU public IP, you can proceed to the next section.
+The NAT gateway resource requires a standard SKU public IP address. In this section, you'll upgrade the IP you removed from the virtual machine in the previous section. If the IP address you removed is already a standard SKU public IP, you can proceed to the next section.
1. In the search box at the top of the portal, enter **Public IP**. Select **Public IP addresses**.
In this article, you learned how to:
Any virtual machine created within this subnet won't require a public IP address and will automatically have outbound connectivity. For more information about NAT gateway and the connectivity benefits it provides, see [Design virtual networks with NAT gateway](nat-gateway-resource.md).
-Advance to the next article to learn how to migrate default outbound access to Azure Virtual Network NAT:
+Advance to the next article to learn how to migrate default outbound access to Azure NAT Gateway:
> [!div class="nextstepaction"] > [Migrate outbound access to NAT gateway](tutorial-migrate-outbound-nat.md)
virtual-network Tutorial Migrate Outbound Nat https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/nat-gateway/tutorial-migrate-outbound-nat.md
Title: 'Tutorial: Migrate outbound access to NAT gateway'-
-description: Learn how to migrate outbound access in your virtual network to a Virtual Network NAT gateway.
+
+description: Learn how to migrate outbound access in your virtual network to a NAT gateway.
Last updated 5/25/2022
-# Tutorial: Migrate outbound access to Azure Virtual Network NAT
+# Tutorial: Migrate outbound access to Azure NAT Gateway
In this article, you'll learn how to migrate your outbound connectivity from [default outbound access](../ip-services/default-outbound-access.md) to a NAT gateway. You'll learn how to change your outbound connectivity from load balancer outbound rules to a NAT gateway. You'll reuse the IP address from the outbound rule configuration for the NAT gateway.
-Azure Virtual Network NAT is the recommended method for outbound connectivity. A NAT gateway is a fully managed and highly resilient Network Address Translation (NAT) service. A NAT gateway doesn't have the same limitations of SNAT port exhaustion as default outbound access. A NAT gateway replaces the need for outbound rules in a load balancer for outbound connectivity.
+Azure NAT Gateway is the recommended method for outbound connectivity. A NAT gateway is a fully managed and highly resilient Network Address Translation (NAT) service. A NAT gateway doesn't have the same SNAT port exhaustion limitations as default outbound access. A NAT gateway replaces the need for outbound rules in a load balancer for outbound connectivity.
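As a hedged sketch of the overall migration (the rule, load balancer, public IP, and subnet names are placeholders; if the public IP is still attached to a load balancer frontend configuration, that frontend may also need to be removed before the address can be reused):

```azurecli
# Remove the outbound rule from the load balancer.
az network lb outbound-rule delete \
  --resource-group myResourceGroup \
  --lb-name myLoadBalancer \
  --name myOutboundRule

# Create a NAT gateway that reuses the public IP from the outbound rule configuration.
az network nat gateway create \
  --resource-group myResourceGroup \
  --name myNATgateway \
  --public-ip-addresses myPublicIP

# Associate the NAT gateway with the backend subnet for outbound connectivity.
az network vnet subnet update \
  --resource-group myResourceGroup \
  --vnet-name myVNet \
  --name myBackendSubnet \
  --nat-gateway myNATgateway
```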
-For more information about Azure Virtual Network NAT, see [What is Azure Virtual Network NAT](nat-overview.md)
+For more information about Azure NAT Gateway, see [What is Azure NAT Gateway](nat-overview.md).
In this tutorial, you learn how to:
In this tutorial, you learn how to:
* The load balancer name used in the examples is **myLoadBalancer**. > [!NOTE]
-> Virtual Network NAT provides outbound connectivity for standard internal load balancers. For more information on integrating a NAT gateway with your internal load balancers, see [Tutorial: Integrate a NAT gateway with an internal load balancer using Azure portal](tutorial-nat-gateway-load-balancer-internal-portal.md).
+> Azure NAT Gateway provides outbound connectivity for standard internal load balancers. For more information on integrating a NAT gateway with your internal load balancers, see [Tutorial: Integrate a NAT gateway with an internal load balancer using Azure portal](tutorial-nat-gateway-load-balancer-internal-portal.md).
## Migrate default outbound access
virtual-network Tutorial Nat Gateway Load Balancer Internal Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/nat-gateway/tutorial-nat-gateway-load-balancer-internal-portal.md
Title: 'Tutorial: Integrate NAT gateway with an internal load balancer - Azure portal'-
-description: In this tutorial, learn how to integrate a Virtual Network NAT gateway with an internal load Balancer using the Azure portal.
+
+description: In this tutorial, learn how to integrate a NAT gateway with an internal load balancer using the Azure portal.
the virtual network, virtual machine, and NAT gateway with the following steps:
## Next steps
-For more information on Azure Virtual Network NAT, see:
+For more information on Azure NAT Gateway, see:
> [!div class="nextstepaction"]
-> [Virtual Network NAT overview](nat-overview.md)
+> [Azure NAT Gateway overview](nat-overview.md)
virtual-network Tutorial Nat Gateway Load Balancer Public Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/nat-gateway/tutorial-nat-gateway-load-balancer-public-portal.md
Title: 'Tutorial: Integrate NAT gateway with a public load balancer - Azure portal'-
-description: In this tutorial, learn how to integrate a Virtual Network NAT gateway with a public load Balancer using the Azure portal.
+
+description: In this tutorial, learn how to integrate a NAT gateway with a public load balancer using the Azure portal.
the virtual network, virtual machine, and NAT gateway with the following steps:
## Next steps
-For more information on Azure Virtual Network NAT, see:
+For more information on Azure NAT Gateway, see:
> [!div class="nextstepaction"]
-> [Virtual Network NAT overview](nat-overview.md)
+> [Azure NAT Gateway overview](nat-overview.md)
virtual-network Tutorial Protect Nat Gateway Ddos https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/nat-gateway/tutorial-protect-nat-gateway-ddos.md
Title: 'Tutorial: Protect your NAT gateway with Azure DDoS Protection Standard'-+ description: Learn how to create a NAT gateway in an Azure DDoS Protection Standard protected virtual network.
Last updated 01/24/2022
# Tutorial: Protect your NAT gateway with Azure DDoS Protection Standard
-This article helps you create an Azure Virtual Network NAT gateway with a DDoS protected virtual network. Azure DDoS Protection Standard enables enhanced DDoS mitigation capabilities such as adaptive tuning, attack alert notifications, and monitoring to protect your NAT gateway from large scale DDoS attacks.
+This article helps you create a NAT gateway with a DDoS protected virtual network. Azure DDoS Protection Standard enables enhanced DDoS mitigation capabilities such as adaptive tuning, attack alert notifications, and monitoring to protect your NAT gateway from large scale DDoS attacks.
> [!IMPORTANT] > Azure DDoS Protection incurs a cost when you use the Standard SKU. Overage charges only apply if more than 100 public IPs are protected in the tenant. Ensure you delete the resources in this tutorial if you aren't using the resources in the future. For information about pricing, see [Azure DDoS Protection Pricing](https://azure.microsoft.com/pricing/details/ddos-protection/). For more information about Azure DDoS protection, see [What is Azure DDoS Protection?](../../ddos-protection/ddos-protection-overview.md).
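The tutorial uses the portal; a minimal Azure CLI sketch of the DDoS protection part (the plan, virtual network, and resource group names are placeholder assumptions) could look like:

```azurecli
# Create a DDoS protection plan.
az network ddos-protection create \
  --resource-group myResourceGroup \
  --name myDdosProtectionPlan

# Enable DDoS Protection Standard on the virtual network that contains the NAT gateway's subnet.
az network vnet update \
  --resource-group myResourceGroup \
  --name myVNet \
  --ddos-protection true \
  --ddos-protection-plan myDdosProtectionPlan
```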
the virtual network, virtual machine, and NAT gateway with the following steps:
## Next steps
-For more information on Azure Virtual Network NAT, see:
+For more information on Azure NAT Gateway, see:
> [!div class="nextstepaction"]
-> [Virtual Network NAT overview](nat-overview.md)
+> [Azure NAT Gateway overview](nat-overview.md)
virtual-wan Scenario Isolate Vnets Custom https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/scenario-isolate-vnets-custom.md
When working with Virtual WAN virtual hub routing, there are quite a few availab
## <a name="design"></a>Design
-In order to figure out how many route tables will be needed, you can build a connectivity matrix. For this scenario it will look like the following, where each cell represents whether a source (row) can communicate to a destination (column):
+In order to figure out how many route tables are needed, you can build a connectivity matrix. For this scenario, it looks like the following, where each cell represents whether a source (row) can communicate with a destination (column):
| From | To:| *Blue VNets* | *Red VNets* | *Branches*|
|---|---|---|---|---|
In order to figure out how many route tables will be needed, you can build a con
| **Red VNets** | &#8594;| | Direct | Direct |
| **Branches** | &#8594;| Direct | Direct | Direct |
-Each of the cells in the previous table describes whether a Virtual WAN connection (the "From" side of the flow, the row headers) communicates with a destination (the "To" side of the flow, the column headers in italics). In this scenario there are no firewalls or Network Virtual Appliances, so communications flows directly over Virtual WAN (hence the word "Direct" in the table).
+Each of the cells in the previous table describes whether a Virtual WAN connection (the "From" side of the flow, the row headers) communicates with a destination (the "To" side of the flow, the column headers in italics). In this scenario, there are no firewalls or Network Virtual Appliances, so communication flows directly over Virtual WAN (hence the word "Direct" in the table).
-The number of different row patterns will be the number of route tables we will need in this scenario. In this case, three route route tables that we will call **RT_BLUE** and **RT_RED** for the virtual networks, and **Default** for the branches. Remember, the branches always have to be associated to the Default routing table.
+The number of different row patterns determines the number of route tables needed in this scenario. In this case, three route tables are needed: **RT_BLUE** and **RT_RED** for the virtual networks, and **Default** for the branches. Remember, the branches always have to be associated to the Default routing table.
-The branches will need to learn the prefixes from both Red and Blue VNets, so all VNets will need to propagate to Default (additionally to either **RT_BLUE** or **RT_RED**). Blue and Red VNets will need to learn the branches prefixes, so branches will propagate to both route tables **RT_BLUE** and **RT_RED** too. As a result, this is the final design:
+The branches need to learn the prefixes from both the Red and Blue VNets, so all VNets need to propagate to Default (in addition to either **RT_BLUE** or **RT_RED**). The Blue and Red VNets need to learn the branches' prefixes, so the branches propagate to both **RT_BLUE** and **RT_RED** too. As a result, this is the final design:
* Blue virtual networks: * Associated route table: **RT_BLUE**
The branches will need to learn the prefixes from both Red and Blue VNets, so al
> Since all branches need to be associated to the Default route table, as well as to propagate to the same set of routing tables, all branches will have the same connectivity profile. In other words, the Red/Blue concept for VNets cannot be applied to branches. > [!NOTE]
-> If your Virtual WAN is deployed over multiple regions, you will need to create the **RT_BLUE** and **RT_RED** route tables in every hub, and routes from each VNet connection need to be propagated to the route tables in every virtual hub using propagation labels.
+> If your Virtual WAN is deployed over multiple hubs, you need to create the **RT_BLUE** and **RT_RED** route tables in every hub, and routes from each VNet connection need to be propagated to the route tables in every virtual hub using propagation labels.
For more information about virtual hub routing, see [About virtual hub routing](about-virtual-hub-routing.md).
For more information about virtual hub routing, see [About virtual hub routing](
In **Figure 1**, there are Blue and Red VNet connections.
-* Blue-connected VNets can reach each other, as well as reach all branches (VPN/ER/P2S) connections.
-* Red VNets can reach each other, as well as reach all branches (VPN/ER/P2S) connections.
+* Blue-connected VNets can reach each other and reach all branches (VPN/ER/P2S) connections.
+* Red VNets can reach each other and reach all branches (VPN/ER/P2S) connections.
Consider the following steps when setting up routing. 1. Create two custom route tables in the Azure portal, **RT_BLUE** and **RT_RED**. 2. For the **RT_BLUE** route table, configure the following settings: * **Association**: Select all Blue VNets.
- * **Propagation**: For Branches, select the option for branches, implying branch(VPN/ER/P2S) connections will propagate routes to this route table.
+ * **Propagation**: For Branches, select the option for branches, implying that branch (VPN/ER/P2S) connections propagate routes to this route table.
3. Repeat the same steps for **RT_RED** route table for Red VNets and branches (VPN/ER/P2S).
-This will result in the routing configuration changes as seen the figure below
+This results in the routing configuration changes shown in the following figure.
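As an optional sketch alongside the portal steps (this requires the virtual-wan Azure CLI extension; the hub name, resource group, and label values are placeholder assumptions, and the flag names are given as assumptions), the two custom route tables could be created as follows. Association and propagation for each VNet connection and branch would then still be configured as described in the steps above.

```azurecli
# Create the RT_BLUE custom route table in the virtual hub.
az network vhub route-table create \
  --resource-group myResourceGroup \
  --vhub-name hub1 \
  --name RT_BLUE \
  --labels blue

# Create the RT_RED custom route table in the virtual hub.
az network vhub route-table create \
  --resource-group myResourceGroup \
  --vhub-name hub1 \
  --name RT_RED \
  --labels red
```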
**Figure 1**
virtual-wan Virtual Wan Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/virtual-wan-faq.md
Title: 'Azure Virtual WAN FAQ | Microsoft Docs'
+ Title: 'Azure Virtual WAN FAQ'
description: See answers to frequently asked questions about Azure Virtual WAN networks, clients, gateways, devices, partners, and connections.
### Is Azure Virtual WAN in GA?
-Yes, Azure Virtual WAN is Generally Available (GA). However, Virtual WAN consists of several features and scenarios. There are feature or scenarios within Virtual WAN where Microsoft applies the Preview tag. In those cases, the specific feature, or the scenario itself, is in Preview. If you don't use a specific preview feature, regular GA support applies. For more information about Preview support, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+Yes, Azure Virtual WAN is Generally Available (GA). However, Virtual WAN consists of several features and scenarios. There are features or scenarios within Virtual WAN where Microsoft applies the Preview tag. In those cases, the specific feature, or the scenario itself, is in Preview. If you don't use a specific preview feature, regular GA support applies. For more information about Preview support, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
### Which locations and regions are available?
Yes, BGP communities generated on-premises will be preserved in Virtual WAN.
### <a name="why-am-i-seeing-a-message-and-button-called-update-router-to-latest-software-version-in-portal."></a>Why am I seeing a message and button called "Update router to latest software version" in portal?
-Azure-wide Cloud Services-based infrastructure is deprecating. As a result, the Virtual WAN team has been working on upgrading virtual routers from their current Cloud Services infrastructure to Virtual Machine Scale Sets based deployments. This will enable the virtual hub router to now be availability zone aware. If you navigate to your Virtual WAN hub resource and see this message and button, then you can upgrade your router to the latest version by clicking on the button. If you would like to take advantage of new Virtual WAN features, such as [BGP peering with the hub](create-bgp-peering-hub-portal.md), you'll have to update your virtual hub router via Azure Portal. If the button is not visible, please open a support case.
+Azure-wide Cloud Services-based infrastructure is being deprecated. As a result, the Virtual WAN team has been working on upgrading virtual routers from their current Cloud Services infrastructure to Virtual Machine Scale Sets-based deployments. **All newly created Virtual Hubs will automatically be deployed on the latest Virtual Machine Scale Sets based infrastructure.** If you navigate to your Virtual WAN hub resource and see this message and button, you can upgrade your router to the latest version by clicking the button. If you would like to take advantage of new Virtual WAN features, such as [BGP peering with the hub](create-bgp-peering-hub-portal.md), you'll have to update your virtual hub router via the Azure portal. If the button is not visible, please open a support case.
You'll only be able to update your virtual hub router if all the resources (gateways/route tables/VNet connections) in your hub are in a succeeded state. Please make sure all your spoke virtual networks are in active/enabled subscriptions and that your spoke virtual networks are not deleted. Additionally, as this operation requires deployment of new virtual machine scale sets based virtual hub routers, you'll face an expected downtime of 1-2 minutes for VNet-to-VNet traffic through the same hub and 5-7 minutes for all other traffic flows through the hub. Within a single Virtual WAN resource, hubs should be updated one at a time instead of updating multiple at the same time. When the Router Version says "Latest", then the hub is done updating. There will be no routing behavior changes after this update.
vpn-gateway Bgp Howto https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/bgp-howto.md
description: Learn how to configure BGP for Azure VPN Gateway using the Azure po
Previously updated : 01/09/2023 Last updated : 04/20/2023
To get the Azure BGP Peer IP address:
1. Go to the virtual network gateway resource and select the **Configuration** page to see the BGP configuration information. 1. Make a note of the BGP Peer IP address.
-## <a name ="crosspremises"></a>Configure BGP on cross-premises S2S connections
+## <a name ="crosspremises"></a>To configure BGP on cross-premises S2S connections
+
+The instructions in this section apply to cross-premises site-to-site configurations.
To establish a cross-premises connection, you need to create a *local network gateway* to represent your on-premises VPN device, and a *connection* to connect the VPN gateway with the local network gateway as explained in [Create site-to-site connection](tutorial-site-to-site-portal.md). The following sections contain the additional properties required to specify the BGP configuration parameters, as shown in Diagram 3.
The following example lists the parameters you enter into the BGP configuration
- eBGP Multihop : Ensure the "multihop" option for eBGP is enabled on your device if needed ```
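On the Azure side, a minimal Azure CLI sketch of the equivalent configuration might look like the following; the gateway, site, and connection names, the IP addresses, the ASN, and the shared key are placeholder values, not values from the article:

```azurecli
# Create the local network gateway that represents the on-premises VPN device,
# including its BGP settings (ASN and BGP peering address).
az network local-gateway create \
  --resource-group myResourceGroup \
  --name Site1 \
  --gateway-ip-address 203.0.113.1 \
  --local-address-prefixes 10.1.0.0/16 \
  --asn 65050 \
  --bgp-peering-address 10.1.255.254

# Create the site-to-site connection with BGP enabled.
az network vpn-connection create \
  --resource-group myResourceGroup \
  --name VNet1toSite1 \
  --vnet-gateway1 VNet1GW \
  --local-gateway2 Site1 \
  --shared-key "YourSharedKey" \
  --enable-bgp
```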
-## Enable BGP on VNet-to-VNet connections
+## To enable BGP on VNet-to-VNet connections
+
+The steps in this section apply to VNet-to-VNet connections.
-The steps to enable or disable BGP on a VNet-to-VNet connection are the same as the [S2S steps](#crosspremises). You can enable BGP when creating the connection, or update the configuration on an existing VNet-to-VNet connection.
+To enable or disable BGP on a VNet-to-VNet connection, you use the same steps as the [S2S cross-premises steps](#crosspremises) in the previous section. You can enable BGP when creating the connection, or update the configuration on an existing VNet-to-VNet connection.
> [!NOTE] > A VNet-to-VNet connection without BGP will limit the communication to the two connected VNets only. Enable BGP to allow transit routing capability to other S2S or VNet-to-VNet connections of these two VNets.
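For example, a hedged Azure CLI sketch of creating a BGP-enabled VNet-to-VNet connection (the gateway and connection names and the shared key are placeholders; a matching connection must also be created in the opposite direction):

```azurecli
# Create a VNet-to-VNet connection between two VPN gateways with BGP enabled.
az network vpn-connection create \
  --resource-group myResourceGroup \
  --name VNet1toVNet2 \
  --vnet-gateway1 VNet1GW \
  --vnet-gateway2 VNet2GW \
  --shared-key "YourSharedKey" \
  --enable-bgp
```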
vpn-gateway Tutorial Create Gateway Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/tutorial-create-gateway-portal.md
Last updated 04/12/2023
This tutorial helps you create and manage an Azure VPN gateway using the Azure portal. You can also create and manage a gateway using [Azure CLI](create-routebased-vpn-gateway-cli.md) or [Azure PowerShell](create-routebased-vpn-gateway-powershell.md). If you want to learn more about the configuration settings used in this tutorial, see [About VPN Gateway configuration settings](vpn-gateway-about-vpn-gateway-settings.md). For more information about VPN Gateway, see [What is VPN Gateway?](vpn-gateway-about-vpngateways.md) + In this tutorial, you learn how to: > [!div class="checklist"]
In this tutorial, you learn how to:
> * Resize a VPN gateway (resize SKU) > * Reset a VPN gateway
-The following diagram shows the virtual network and the VPN gateway created as part of this tutorial.
-- ## Prerequisites An Azure account with an active subscription. If you don't have one, [create one for free](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio).
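The tutorial creates the gateway in the portal; a rough Azure CLI sketch of creating a route-based VPN gateway (the virtual network, public IP, gateway names, and SKU are placeholder assumptions) might look like:

```azurecli
# Create a Standard SKU public IP for the VPN gateway.
az network public-ip create \
  --resource-group myResourceGroup \
  --name VNet1GWpip \
  --sku Standard

# Create a route-based VPN gateway. Gateway creation can take 45 minutes or more;
# --no-wait returns immediately while deployment continues in the background.
az network vnet-gateway create \
  --resource-group myResourceGroup \
  --name VNet1GW \
  --vnet VNet1 \
  --public-ip-addresses VNet1GWpip \
  --gateway-type Vpn \
  --vpn-type RouteBased \
  --sku VpnGw2 \
  --no-wait
```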
vpn-gateway Tutorial Protect Vpn Gateway https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/tutorial-protect-vpn-gateway.md
In this tutorial, you learn how to:
The following diagram shows the virtual network and the VPN gateway created as part of this tutorial. ## Prerequisites
web-application-firewall Waf Front Door Rate Limit https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/web-application-firewall/afds/waf-front-door-rate-limit.md
Previously updated : 09/07/2022 Last updated : 04/20/2023 # What is rate limiting for Azure Front Door Service?
-Rate limiting enables you to detect and block abnormally high levels of traffic from any socket IP address. The socket IP address is the address of the client that initiated the TCP connection to Front Door. Typically, the socket IP address is the IP address of the user, but it might also be the IP address of a proxy server or another device that sits between the user and Front Door. By using the web application firewall (WAF) with Azure Front Door, you can mitigate some types of denial of service attacks. Rate limiting also protects you against clients that have accidentally been misconfigured to send large volumes of requests in a short time period.
+Rate limiting enables you to detect and block abnormally high levels of traffic from any socket IP address. The socket IP address is the address of the client that initiated the TCP connection to Front Door. Typically, the socket IP address is the IP address of the user, but it might also be the IP address of a proxy server or another device that sits between the user and the Front Door. By using the web application firewall (WAF) with Azure Front Door, you can mitigate some types of denial of service attacks. Rate limiting also protects you against clients that have accidentally been misconfigured to send large volumes of requests in a short time period.
-Rate limits are applied at the socket IP address level. If you have multiple clients accessing your Front Door from different socket IP addresses, they'll each have their own rate limits applied. The socket IP address is the source IP address WAF sees. If your user is behind a proxy, socket IP address is often the proxy server address.
+Rate limits can be defined at the socket IP address level or the remote address level. If you have multiple clients accessing your Front Door from different socket IP addresses, they'll each have their own rate limits applied. The socket IP address is the source IP address the WAF sees. If your user is behind a proxy, the socket IP address is often the proxy server address. The remote address is the original client IP address, usually sent via the X-Forwarded-For request header.
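As a hedged illustration of a rate limit rule (this uses the front-door Azure CLI extension; the policy name, rule name, threshold, and match condition are placeholder assumptions, and flag names may vary by extension version):

```azurecli
# Create a rate limit custom rule on an existing Front Door WAF policy.
# --defer stages the rule locally until a match condition is added.
az network front-door waf-policy rule create \
  --resource-group myResourceGroup \
  --policy-name myWafPolicy \
  --name rateLimitRule \
  --rule-type RateLimitRule \
  --rate-limit-duration 1 \
  --rate-limit-threshold 1000 \
  --priority 100 \
  --action Block \
  --defer

# Add a match condition; requests that match it count toward the rate limit.
az network front-door waf-policy rule match-condition add \
  --resource-group myResourceGroup \
  --policy-name myWafPolicy \
  --name rateLimitRule \
  --match-variable RequestUri \
  --operator Contains \
  --values "/search"
```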
## Configure a rate limit policy
The match condition above identifies all requests with a `Host` header of length
## Rate limits and Front Door servers
-Requests from the same client often arrive at the same Front Door server. In that case, you'll see requests are blocked as soon as the rate limit is reached for each socket IP address.
+Requests from the same client often arrive at the same Front Door server. In that case, requests are blocked as soon as the rate limit is reached for each of the client IP addresses.
-However, it's possible that requests from the same client might arrive at a different Front Door server that hasn't refreshed the rate limit counter yet. For example, the client might open a new TCP connection for each request. If the threshold is low enough, the first request to the new Front Door server could pass the rate limit check. So, for a very low threshold (for example, less than about 50 requests per minute), you might see some requests above the threshold get through.
+However, it's possible that requests from the same client might arrive at a different Front Door server that hasn't refreshed the rate limit counter yet. For example, the client might open a new TCP connection for each request. If the threshold is low enough, the first request to the new Front Door server could pass the rate limit check. So, for a low threshold (for example, less than about 100 requests per minute), you might see some requests above the threshold get through. Larger time windows (for example, 5 minutes rather than 1 minute) with larger thresholds are typically more effective than shorter time windows with lower thresholds.
## Next steps