Updates from: 09/16/2022 01:09:58
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Add Api Connector Token Enrichment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/add-api-connector-token-enrichment.md
Content-type: application/json
| -- | -- | -- | -- |
| version | String | Yes | The version of your API. |
| action | String | Yes | Value must be `Continue`. |
-| \<builtInUserAttribute> | \<attribute-type> | No | They can returned in the token if selected as an **Application claim**. |
+| \<builtInUserAttribute> | \<attribute-type> | No | They can be returned in the token if selected as an **Application claim**. |
| \<extension\_{extensions-app-id}\_CustomAttribute> | \<attribute-type> | No | The claim doesn't need to contain `_<extensions-app-id>_`; it's *optional*. It can be returned in the token if selected as an **Application claim**. | ::: zone-end
active-directory Scim Validator Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/scim-validator-tutorial.md
+
+ Title: Tutorial - Test your SCIM endpoint for compatibility with the Azure Active Directory (Azure AD) provisioning service.
+description: This tutorial describes how to use the Azure AD SCIM Validator to validate that your provisioning server is compatible with the Azure SCIM client.
+Last updated: 09/13/2022
+# Tutorial: Validate a SCIM endpoint
+
+This tutorial describes how to use the Azure AD SCIM Validator to validate that your provisioning server is compatible with the Azure SCIM client. The tutorial is intended for developers who want to build a SCIM compatible server to manage their identities with the Azure AD provisioning service.
+
+In this tutorial, you learn how to:
+
+> [!div class="checklist"]
+> * Select a testing method
+> * Configure the testing method
+> * Validate your SCIM endpoint
+
+## Prerequisites
+
+- An Azure Active Directory account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+- A SCIM endpoint that conforms to the SCIM 2.0 standard and meets the provision service requirements. To learn more, see [Tutorial: Develop and plan provisioning for a SCIM endpoint in Azure Active Directory](use-scim-to-provision-users-and-groups.md).
++
+## Select a testing method
+The first step is to select a testing method to validate your SCIM endpoint.
+
+1. Open your web browser and navigate to the SCIM Validator: [https://scimvalidator.microsoft.com/](https://scimvalidator.microsoft.com/).
+1. Select one of the three test options. You can use default attributes, automatically discover the schema, or upload a schema.
+
+ :::image type="content" source="./media/scim-validator-tutorial/scim-validator.png" alt-text="Screenshot of SCIM Validator main page." lightbox="./media/scim-validator-tutorial/scim-validator.png":::
+
+**Use default attributes** - The system provides the default attributes, and you modify them to meet your needs.
+
+**Discover schema** - If your endpoint supports /Schema, this option allows the tool to discover the supported attributes. We recommend this option because it reduces the overhead of updating your app as you build it out.
+
+**Upload Azure AD Schema** - Upload the schema you've downloaded from your sample app on Azure AD.
++
+## Configure the testing method
+Now that you've selected a testing method, the next step is to configure it.
++
+1. If you're using the default attributes option, then fill in all of the indicated fields.
+2. If you're using the discover schema option, then enter the SCIM endpoint URL and token.
+3. If you're uploading a schema, then select your .json file to upload. The option accepts a .json file exported from your sample app on the Azure portal. To learn how to export a schema, see [How-to: Export provisioning configuration and roll back to a known good state](export-import-provisioning-configuration.md#export-your-provisioning-configuration).
+> [!NOTE]
+> To test *group attributes*, make sure to select **Enable Group Tests**.
+
+4. Edit the attribute lists as desired for both the user and group types. Use the **Add Attribute** option at the end of the attribute list to add attributes, and the minus (-) sign on the right side of the page to remove them.
+5. Select the joining property from both the user and group attributes list.
+> [!NOTE]
+> The joining property, also known as the matching attribute, is an attribute on which user and group resources can be uniquely queried at the source and matched in the target system.
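
As an illustration of how a joining property is used (not part of the validator itself), a provisioning client typically resolves a resource with a SCIM filter query on that attribute. A minimal sketch, where the base URL and the `userName` attribute are assumptions:

```python
# Hypothetical sketch: the endpoint URL and attribute names below are
# illustrative, not values the SCIM Validator itself uses.
from urllib.parse import quote

def build_match_query(base_url: str, joining_attr: str, value: str) -> str:
    """Build a SCIM 2.0 filter query that looks up a resource by its
    joining (matching) property, e.g. userName eq "alice@contoso.com"."""
    scim_filter = f'{joining_attr} eq "{value}"'
    return f"{base_url}/Users?filter={quote(scim_filter)}"

url = build_match_query("https://scim.example.com/scim", "userName", "alice@contoso.com")
```

If no resource matches the query at the target, the provisioning service treats the user or group as new; if exactly one matches, it updates that resource.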
++
+## Validate your SCIM endpoint
+Finally, you need to test and validate your endpoint.
+
+1. Select **Test Schema** to begin the test.
+1. Review the results with a summary of passed and failed tests.
+1. Select the **show details** tab to review and fix any issues.
+1. Continue to test your schema until all tests pass.
+
+ :::image type="content" source="./media/scim-validator-tutorial/scim-validator-results.png" alt-text="Screenshot of SCIM Validator results page." lightbox="./media/scim-validator-tutorial/scim-validator-results.png":::
+
+### Use Postman to test endpoints (optional)
+
+In addition to using the SCIM Validator tool, you can also use Postman to validate an endpoint. This example provides a set of tests in Postman that validate CRUD (create, read, update, and delete) operations on users and groups, filtering, updates to group membership, and disabling users.
+
+The endpoints are in the `{host}/scim/` directory, and you can use standard HTTP requests to interact with them. To modify the `/scim/` route, see *ControllerConstant.cs* in **AzureADProvisioningSCIMreference** > **ScimReferenceApi** > **Controllers**.
+
+> [!NOTE]
+> You can only use HTTP endpoints for local tests. The Azure AD provisioning service requires that your endpoint support HTTPS.
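
As a sketch of the kind of request the collection exercises, here's how a SCIM create-user call could be composed with the Python standard library. The host, token, and payload values are placeholders, not guaranteed to match the sample app:

```python
# Placeholder host/token values; the payload shape follows the SCIM 2.0 core
# user schema, but this sketch only composes the request without sending it.
import json
from urllib.request import Request

def make_create_user_request(host: str, token: str, user_name: str) -> Request:
    """Compose a SCIM POST /Users request like the ones the collection sends."""
    body = json.dumps({
        "schemas": ["urn:ietf:params:scim:schemas:core:2.0:User"],
        "userName": user_name,
        "active": True,
    }).encode("utf-8")
    req = Request(f"{host}/scim/Users", data=body, method="POST")
    req.add_header("Content-Type", "application/scim+json")
    req.add_header("Authorization", f"Bearer {token}")
    return req

req = make_create_user_request("https://localhost:44359", "<token>", "alice@contoso.com")
```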
+
+1. Download [Postman](https://www.getpostman.com/downloads/) and start the application.
+1. Copy and paste this link into Postman to import the test collection: `https://aka.ms/ProvisioningPostman`.
+
+ ![Screenshot that shows importing the test collection in Postman.](media/scim-validator-tutorial/postman-collection.png)
+
+1. Create a test environment that has these variables:
+
+ |Environment|Variable|Value|
+ |-|-|-|
+ |Run the project locally by using IIS Express|||
+ ||**Server**|`localhost`|
+ ||**Port**|`:44359` *(don't forget the **`:`**)*|
+ ||**Api**|`scim`|
+ |Run the project locally by using Kestrel|||
+ ||**Server**|`localhost`|
+ ||**Port**|`:5001` *(don't forget the **`:`**)*|
+ ||**Api**|`scim`|
+ |Host the endpoint in Azure|||
+ ||**Server**|*(input your SCIM URL)*|
+ ||**Port**|*(leave blank)*|
+ ||**Api**|`scim`|
+
+1. Use **Get Key** from the Postman collection to send a **GET** request to the token endpoint and retrieve a security token to be stored in the **token** variable for subsequent requests.
+
+ ![Screenshot that shows the Postman Get Key folder.](media/scim-validator-tutorial/postman-get-key.png)
+
+ > [!NOTE]
+ > To make a SCIM endpoint secure, you need a security token before you connect. The tutorial uses the `{host}/scim/token` endpoint to generate a self-signed token.
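
To illustrate what the **Get Key** step does, this hedged sketch builds the token request and parses a token out of a JSON response. The host value and the `token` response field name are assumptions about the sample app:

```python
# Sketch of the "Get Key" flow; the host and the "token" response field
# are assumptions about the sample app, and nothing is sent on the wire.
import json
from urllib.request import Request

def build_token_request(host: str) -> Request:
    """GET request against the self-signed token endpoint (not sent here)."""
    return Request(f"{host}/scim/token", method="GET")

def extract_token(response_body: bytes) -> str:
    """Pull the security token out of a JSON body such as {"token": "..."}."""
    return json.loads(response_body)["token"]

req = build_token_request("https://localhost:5001")
token = extract_token(b'{"token": "eyJhbGciOi..."}')
```

Postman does the equivalent automatically, storing the parsed token in the **token** environment variable so later requests can send it as a bearer token.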
+
+That's it! You can now run the **Postman** collection to test the SCIM endpoint functionality.
+
+## Clean up resources
+
+If you created any Azure resources in your testing that are no longer needed, don't forget to delete them.
+
+## Known Issues with Azure AD SCIM Validator
+
+- Soft deletes (disables) aren't yet supported.
+- The time zone format is randomly generated and will fail for systems that try to validate it.
+- The preferred language format is randomly generated and will fail for systems that try to validate it.
+- The patch user remove operation may attempt to remove mandatory or required attributes for certain systems. Such failures should be ignored.
++
+## Next steps
+- [Learn how to add an app that is not in the Azure AD app gallery](../manage-apps/overview-application-gallery.md)
active-directory Use Scim To Build Users And Groups Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/use-scim-to-build-users-and-groups-endpoints.md
The default token validation code is configured to use an Azure AD token and req
} ```
-### Use Postman to test endpoints
-
-After you deploy the SCIM endpoint, you can test to ensure that it's compliant with SCIM RFC. This example provides a set of tests in Postman that validate CRUD (create, read, update, and delete) operations on users and groups, filtering, updates to group membership, and disabling users.
-
-The endpoints are in the `{host}/scim/` directory, and you can use standard HTTP requests to interact with them. To modify the `/scim/` route, see *ControllerConstant.cs* in **AzureADProvisioningSCIMreference** > **ScimReferenceApi** > **Controllers**.
-
-> [!NOTE]
-> You can only use HTTP endpoints for local tests. The Azure AD provisioning service requires that your endpoint support HTTPS.
-
-1. Download [Postman](https://www.getpostman.com/downloads/) and start the application.
-1. Copy and paste this link into Postman to import the test collection: `https://aka.ms/ProvisioningPostman`.
-
- ![Screenshot that shows importing the test collection in Postman.](media/use-scim-to-build-users-and-groups-endpoints/postman-collection.png)
-
-1. Create a test environment that has these variables:
-
- |Environment|Variable|Value|
- |-|-|-|
- |Run the project locally by using IIS Express|||
- ||**Server**|`localhost`|
- ||**Port**|`:44359` *(don't forget the **`:`**)*|
- ||**Api**|`scim`|
- |Run the project locally by using Kestrel|||
- ||**Server**|`localhost`|
- ||**Port**|`:5001` *(don't forget the **`:`**)*|
- ||**Api**|`scim`|
- |Host the endpoint in Azure|||
- ||**Server**|*(input your SCIM URL)*|
- ||**Port**|*(leave blank)*|
- ||**Api**|`scim`|
-
-1. Use **Get Key** from the Postman collection to send a **GET** request to the token endpoint and retrieve a security token to be stored in the **token** variable for subsequent requests.
-
- ![Screenshot that shows the Postman Get Key folder.](media/use-scim-to-build-users-and-groups-endpoints/postman-get-key.png)
-
- > [!NOTE]
- > To make a SCIM endpoint secure, you need a security token before you connect. The tutorial uses the `{host}/scim/token` endpoint to generate a self-signed token.
-
-That's it! You can now run the **Postman** collection to test the SCIM endpoint functionality.
- ## Next steps To develop a SCIM-compliant user and group endpoint with interoperability for a client, see [SCIM client implementation](http://www.simplecloud.info/#Implementations2).
-> [!div class="nextstepaction"]
-> [Tutorial: Develop and plan provisioning for a SCIM endpoint](use-scim-to-provision-users-and-groups.md)
-> [Tutorial: Configure provisioning for a gallery app](configure-automatic-user-provisioning-portal.md)
+- [Tutorial: Validate a SCIM endpoint](scim-validator-tutorial.md)
+- [Tutorial: Develop and plan provisioning for a SCIM endpoint](use-scim-to-provision-users-and-groups.md)
+- [Tutorial: Configure provisioning for a gallery app](configure-automatic-user-provisioning-portal.md)
active-directory Concept Authentication Passwordless https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-authentication-passwordless.md
Previously updated : 09/13/2022 Last updated : 09/15/2022
Here are some factors for you to consider when choosing Microsoft passwordless t
||**Windows Hello for Business**|**Passwordless sign-in with the Authenticator app**|**FIDO2 security keys**|
|:-|:-|:-|:-|
-|**Pre-requisite**| Windows 10, version 1809 or later<br>Azure Active Directory| Authenticator app<br>Phone (iOS and Android devices running Android 8.0 or above.)|Windows 10, version 1903 or later<br>Azure Active Directory|
+|**Pre-requisite**| Windows 10, version 1809 or later<br>Azure Active Directory| Authenticator app<br>Phone (iOS and Android devices)|Windows 10, version 1903 or later<br>Azure Active Directory|
|**Mode**|Platform|Software|Hardware|
|**Systems and devices**|PC with a built-in Trusted Platform Module (TPM)<br>PIN and biometrics recognition |PIN and biometrics recognition on phone|FIDO2 security devices that are Microsoft compatible|
|**User experience**|Sign in using a PIN or biometric recognition (facial, iris, or fingerprint) with Windows devices.<br>Windows Hello authentication is tied to the device; the user needs both the device and a sign-in component such as a PIN or biometric factor to access corporate resources.|Sign in using a mobile phone with fingerprint scan, facial or iris recognition, or PIN.<br>Users sign in to work or personal account from their PC or mobile phone.|Sign in using FIDO2 security device (biometrics, PIN, and NFC)<br>User can access device based on organization controls and authenticate based on PIN, biometrics using devices such as USB security keys and NFC-enabled smartcards, keys, or wearables.|
active-directory Howto Authentication Passwordless Phone https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-authentication-passwordless-phone.md
Previously updated : 09/13/2022 Last updated : 09/15/2022
The Azure AD accounts can be in the same tenant or different tenants. Guest acco
To use passwordless phone sign-in with Microsoft Authenticator, the following prerequisites must be met:

- Recommended: Azure AD Multi-Factor Authentication, with push notifications allowed as a verification method. Push notifications to your smartphone or tablet help the Authenticator app to prevent unauthorized access to accounts and stop fraudulent transactions. The Authenticator app automatically generates codes when set up to do push notifications so a user has a backup sign-in method even if their device doesn't have connectivity.
-- Latest version of Microsoft Authenticator installed on devices running iOS 12.0 or greater, or Android 8.0 or greater.
+- Latest version of Microsoft Authenticator installed on devices running iOS or Android.
- For Android, the device that runs Microsoft Authenticator must be registered to an individual user. We're actively working to enable multiple accounts on Android.
- For iOS, the device must be registered with each tenant where it's used to sign in. For example, the following device must be registered with Contoso and Wingtiptoys to allow all accounts to sign in:
  - balas@contoso.com
active-directory Tutorial Enable Azure Mfa https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/tutorial-enable-azure-mfa.md
In this tutorial you learn how to:
To complete this tutorial, you need the following resources and privileges:
-* A working Azure AD tenant with at least an Azure AD Premium P1 or trial license enabled.
+* A working Azure AD tenant with Azure AD Premium P1 or trial licenses enabled.
  * If you need to, [create one for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
* An account with *Conditional Access Administrator*, *Security Administrator*, or *Global Administrator* privileges. Some MFA settings can also be managed by an *Authentication Policy Administrator*. For more information, see [Authentication Policy Administrator](../roles/permissions-reference.md#authentication-policy-administrator).
active-directory Product Integrations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/product-integrations.md
Title: View integration information about an authorization system in Permissions Management
description: View integration information about an authorization system in Permissions Management.
Last updated: 02/23/2022
# View integration information about an authorization system
active-directory Device Management Azure Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/devices/device-management-azure-portal.md
To view or copy BitLocker keys, you need to be the owner of the device or have o
- Security Reader

## Block users from viewing their BitLocker keys (preview)
-In this preivew, admins can block self-service BitLocker key access to the registered owner of the device. Default users without the BitLocker read permission will be unable to view or copy their BitLocker key(s) for their owned devices.
+In this preview, admins can block self-service BitLocker key access to the registered owner of the device. Default users without the BitLocker read permission will be unable to view or copy their BitLocker key(s) for their owned devices.
To disable/enable self-service BitLocker recovery:
You must be assigned one of the following roles to view or manage device setting
- **Additional local administrators on Azure AD joined devices**: This setting allows you to select the users who are granted local administrator rights on a device. These users are added to the Device Administrators role in Azure AD. Global Administrators in Azure AD and device owners are granted local administrator rights by default. This option is a premium edition capability available through products like Azure AD Premium and Enterprise Mobility + Security.
-- **Users may register their devices with Azure AD**: You need to configure this setting to allow users to register Windows 10 or newer personal, iOS, Android, and macOS devices with Azure AD. If you select **None**, devices aren't allowed to register with Azure AD. Enrollment with Microsoft Intune or mobile device management for Microsoft 365 requires registration. If you've configured either of these services, **ALL** is selected and **NONE** is unavailable.
-- **Require Multi-Factor Authentication to register or join devices with Azure AD**: This setting allows you to specify whether users are required to provide another authentication factor to join or register their devices to Azure AD. The default is **No**. We recommend that you require multifactor authentication when a device is registered or joined. Before you enable multifactor authentication for this service, you must ensure that multifactor authentication is configured for users that register their devices. For more information on Azure AD Multi-Factor Authentication services, see [getting started with Azure AD Multi-Factor Authentication](../authentication/concept-mfa-howitworks.md). This setting may not work with third-party identity providers.
+- **Users may register their devices with Azure AD**: You need to configure this setting to allow users to register Windows 10 or newer personal, iOS, Android, and macOS devices with Azure AD. If you select **None**, devices aren't allowed to register with Azure AD. Enrollment with Microsoft Intune or mobile device management for Microsoft 365 requires registration. If you've configured either of these services, **ALL** is selected, and **NONE** is unavailable.
+- **Require Multi-Factor Authentication to register or join devices with Azure AD**:
+ - We recommend organizations use the [Register or join devices user](../conditional-access/concept-conditional-access-cloud-apps.md#user-actions) action in Conditional Access to enforce multifactor authentication. You must configure this toggle to **No** if you use a Conditional Access policy to require multifactor authentication.
+ - This setting allows you to specify whether users are required to provide another authentication factor to join or register their devices to Azure AD. The default is **No**. We recommend that you require multifactor authentication when a device is registered or joined. Before you enable multifactor authentication for this service, you must ensure that multifactor authentication is configured for users that register their devices. For more information on Azure AD Multi-Factor Authentication services, see [getting started with Azure AD Multi-Factor Authentication](../authentication/concept-mfa-howitworks.md). This setting may not work with third-party identity providers.
> [!NOTE]
> The **Require Multi-Factor Authentication to register or join devices with Azure AD** setting applies to devices that are either Azure AD joined (with some exceptions) or Azure AD registered. This setting doesn't apply to hybrid Azure AD joined devices, [Azure AD joined VMs in Azure](./howto-vm-sign-in-azure-ad-windows.md#enable-azure-ad-login-for-a-windows-vm-in-azure), or Azure AD joined devices that use [Windows Autopilot self-deployment mode](/mem/autopilot/self-deploying).
- > [!IMPORTANT]
- > - We recommend that you use the [Register or join devices user](../conditional-access/concept-conditional-access-cloud-apps.md#user-actions) action in Conditional Access to enforce multifactor authentication for joining or registering a device.
- > - You must configure this setting to **No** if you're using Conditional Access policy to require multifactor authentication.
-
- **Maximum number of devices**: This setting enables you to select the maximum number of Azure AD joined or Azure AD registered devices that a user can have in Azure AD. If users reach this limit, they can't add more devices until one or more of the existing devices are removed. The default value is **50**. You can increase the value up to 100. If you enter a value above 100, Azure AD will set it to 100. You can also use **Unlimited** to enforce no limit other than existing quota limits.

> [!NOTE]
active-directory Security Operations Applications https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/security-operations-applications.md
Previously updated : 08/19/2022 Last updated : 09/06/2022
# Azure Active Directory security operations guide for Applications
-Applications provide an attack surface for security breaches and must be monitored. While not targeted as often as user accounts, breaches can occur. Since applications often run without human intervention, the attacks may be harder to detect.
+Applications have an attack surface for security breaches and must be monitored. While not targeted as often as user accounts, breaches can occur. Because applications often run without human intervention, the attacks may be harder to detect.
-This article provides guidance to monitor and alert on application events and helps enable you to:
+This article provides guidance to monitor and alert on application events. It's regularly updated to help ensure you:
-* Prevent malicious applications from getting unwarranted access to data.
+* Prevent malicious applications from getting unwarranted access to data
-* Prevent existing applications from being compromised by bad actors.
+* Prevent applications from being compromised by bad actors
-* Gather insights that enable you to build and configure new applications more securely.
+* Gather insights that enable you to build and configure new applications more securely
If you're unfamiliar with how applications work in Azure Active Directory (Azure AD), see [Apps and service principals in Azure AD](../develop/app-objects-and-service-principals.md).
If you're unfamiliar with how applications work in Azure Active Directory (Azure
## What to look for
-As you monitor your application logs for security incidents, review the following to help differentiate normal activity from malicious activity. The following events may indicate security concerns and each are covered in the rest of the article.
+As you monitor your application logs for security incidents, review the following list to help differentiate normal activity from malicious activity. The following events might indicate security concerns. Each is covered in the article.
-* Any changes occurring outside of normal business processes and schedules.
+* Any changes occurring outside normal business processes and schedules
* Application credentials changes * Application permissions
- * Service principal assigned to an Azure AD or Azure RBAC role.
+ * Service principal assigned to an Azure AD or an Azure role-based access control (RBAC) role
- * Applications that are granted highly privileged permissions.
+ * Applications granted highly privileged permissions
- * Azure Key Vault changes.
+ * Azure Key Vault changes
- * End user granting applications consent.
+ * End user granting applications consent
- * Stopped end user consent based on level of risk.
+ * Stopped end-user consent based on level of risk
* Application configuration changes
- * Universal resource identifier (URI) changed or non-standard.
+ * Universal resource identifier (URI) changed or non-standard
- * Changes to application owners.
+ * Changes to application owners
- * Logout URLs modified.
+ * Log-out URLs modified
## Where to look
The log files you use for investigation and monitoring are:
* [Azure Key Vault logs](../../key-vault/general/logging.md)
-From the Azure portal, you can view the Azure AD Audit logs and download as comma-separated value (CSV) or JavaScript Object Notation (JSON) files. The Azure portal has several ways to integrate Azure AD logs with other tools that allow for greater automation of monitoring and alerting:
+From the Azure portal, you can view the Azure AD Audit logs and download as comma-separated value (CSV) or JavaScript Object Notation (JSON) files. The Azure portal has several ways to integrate Azure AD logs with other tools, which allow more automation of monitoring and alerting:
-* **[Microsoft Sentinel](../../sentinel/overview.md)** – enables intelligent security analytics at the enterprise level by providing security information and event management (SIEM) capabilities.
+* **[Microsoft Sentinel](../../sentinel/overview.md)** – enables intelligent security analytics at the enterprise level with security information and event management (SIEM) capabilities.
-* **[Azure Monitor](../../azure-monitor/overview.md)** – enables automated monitoring and alerting of various conditions. Can create or use workbooks to combine data from different sources.
+* **[Sigma rule templates](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure)** - Sigma is an evolving open standard for writing rules and templates that automated management tools can use to parse log files. Where there are Sigma templates for our recommended search criteria, we've added a link to the Sigma repo. The Sigma templates aren't written, tested, and managed by Microsoft. Rather, the repo and templates are created and collected by the worldwide IT security community.
-* **[Azure Event Hubs](../../event-hubs/event-hubs-about.md) integrated with a SIEM**- [Azure AD logs can be integrated to other SIEMs](../reports-monitoring/tutorial-azure-monitor-stream-logs-to-event-hub.md) such as Splunk, ArcSight, QRadar, and Sumo Logic via the Azure Event Hub integration.
+* **[Azure Monitor](../../azure-monitor/overview.md)** – automated monitoring and alerting of various conditions. Can create or use workbooks to combine data from different sources.
-* **[Microsoft Defender for Cloud Apps](/cloud-app-security/what-is-cloud-app-security)** – enables you to discover and manage apps, govern across apps and resources, and check your cloud apps' compliance.
+* **[Azure Event Hubs](../../event-hubs/event-hubs-about.md) integrated with a SIEM**- [Azure AD logs can be integrated to other SIEMs](../reports-monitoring/tutorial-azure-monitor-stream-logs-to-event-hub.md) such as Splunk, ArcSight, QRadar, and Sumo Logic via the Azure Event Hubs integration.
-* **[Securing workload identities with Identity Protection Preview](..//identity-protection/concept-workload-identity-risk.md)** - Used to detect risk on workload identities across sign-in behavior and offline indicators of compromise.
+* **[Microsoft Defender for Cloud Apps](/cloud-app-security/what-is-cloud-app-security)** – discover and manage apps, govern across apps and resources, and check your cloud apps' compliance.
-Much of what you will monitor and alert on are the effects of your Conditional Access policies. You can use the [Conditional Access insights and reporting workbook](../conditional-access/howto-conditional-access-insights-reporting.md) to examine the effects of one or more Conditional Access policies on your sign-ins, as well as the results of policies, including device state. This workbook enables you to view an impact summary, and identify the impact over a specific time period. You can also use the workbook to investigate the sign-ins of a specific user.
+* **[Securing workload identities with Identity Protection Preview](..//identity-protection/concept-workload-identity-risk.md)** - detects risk on workload identities across sign-in behavior and offline indicators of compromise.
- The remainder of this article describes what we recommend you monitor and alert on, and is organized by the type of threat. Where there are specific pre-built solutions we link to them or provide samples following the table. Otherwise, you can build alerts using the preceding tools.
+Much of what you monitor and alert on are the effects of your Conditional Access policies. You can use the [Conditional Access insights and reporting workbook](../conditional-access/howto-conditional-access-insights-reporting.md) to examine the effects of one or more Conditional Access policies on your sign-ins, and the results of policies, including device state. Use the workbook to view a summary, and identify the effects over a time period. You can use the workbook to investigate the sign-ins of a specific user.
-## Application credentials
+The remainder of this article is what we recommend you monitor and alert on. It's organized by the type of threat. Where there are pre-built solutions, we link to them or provide samples after the table. Otherwise, you can build alerts using the preceding tools.
-Many applications use credentials to authenticate in Azure AD. Any additional credentials added outside of expected processes could be a malicious actor using those credentials. We strongly recommend using X509 certificates issued by trusted authorities or Managed Identities instead of using client secrets. However, if you need to use client secrets, follow good hygiene practices to keep applications safe. Note, application and service principal updates are logged as two entries in the audit log.
+## Application credentials
-* Monitor applications to identify those with long credential expiration times.
+Many applications use credentials to authenticate in Azure AD. Any other credentials added outside expected processes could be a malicious actor using those credentials. We recommend using X509 certificates issued by trusted authorities or Managed Identities instead of using client secrets. However, if you need to use client secrets, follow good hygiene practices to keep applications safe. Note, application and service principal updates are logged as two entries in the audit log.
-* Replace long-lived credentials with credentials that have a short life span. Take steps to ensure that credentials don't get committed in code repositories and are stored securely.
+* Monitor applications to identify long credential expiration times.
+* Replace long-lived credentials with a short life span. Ensure credentials don't get committed in code repositories, and are stored securely.
| What to monitor| Risk Level| Where| Filter/sub-filter| Notes | | -|-|-|-|-|
-| Added credentials to existing applications| High| Azure AD Audit logs| Service-Core Directory, Category-ApplicationManagement <br>Activity: Update Application-Certificates and secrets management<br>-and-<br>Activity: Update Service principal/Update Application| Alert when credentials are:<li> added outside of normal business hours or workflows.<li> of types not used in your environment.<li> added to a non-SAML flow supporting service principal. |
-| Credentials with a lifetime longer than your policies allow.| Medium| Microsoft Graph| State and end date of Application Key credentials<br>-and-<br>Application password credentials| You can use MS Graph API to find the start and end date of credentials, and evaluate those with a longer than allowed lifetime. See PowerShell script following this table. |
+| Added credentials to existing applications| High| Azure AD Audit logs| Service-Core Directory, Category-ApplicationManagement <br>Activity: Update Application-Certificates and secrets management<br>-and-<br>Activity: Update Service principal/Update Application| Alert when credentials are: added outside of normal business hours or workflows, of types not used in your environment, or added to a non-SAML flow supporting service principal.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/AuditLogs/NewAppOrServicePrincipalCredential.yaml)<br><br>[Sigma rules template](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) |
+| Credentials with a lifetime longer than your policies allow.| Medium| Microsoft Graph| State and end date of Application Key credentials<br>-and-<br>Application password credentials| You can use MS Graph API to find the start and end date of credentials, and evaluate longer-than-allowed lifetimes. See PowerShell script following this table. |
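As a sketch of the lifetime check described in the second row, the following Python evaluates credential records offline. The record shape mirrors the `startDateTime`/`endDateTime` fields Microsoft Graph returns for `keyCredentials` and `passwordCredentials`; the 90-day maximum is an example policy value, not a Microsoft default.

```python
from datetime import datetime, timedelta

# Policy maximum is an assumed example value; adjust to your organization's policy.
MAX_LIFETIME = timedelta(days=90)

def long_lived_credentials(credentials, max_lifetime=MAX_LIFETIME):
    """Return credentials whose validity window exceeds the allowed lifetime.

    Each record mirrors the startDateTime/endDateTime fields that
    Microsoft Graph returns for keyCredentials and passwordCredentials.
    """
    flagged = []
    for cred in credentials:
        start = datetime.fromisoformat(cred["startDateTime"])
        end = datetime.fromisoformat(cred["endDateTime"])
        if end - start > max_lifetime:
            flagged.append(cred)
    return flagged

creds = [
    {"keyId": "a1", "startDateTime": "2022-01-01T00:00:00", "endDateTime": "2022-03-01T00:00:00"},
    {"keyId": "a2", "startDateTime": "2022-01-01T00:00:00", "endDateTime": "2024-01-01T00:00:00"},
]
print([c["keyId"] for c in long_lived_credentials(creds)])  # ['a2']
```

In practice you'd fetch the credential lists from Microsoft Graph first; the evaluation step stays the same.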
- The following pre-built monitoring and alerts are available.
+ The following pre-built monitoring and alerts are available:
-* Microsoft Sentinel – [Alert when new app or service principle credentials added](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/AuditLogs/NewAppOrServicePrincipalCredential.yaml)
+* Microsoft Sentinel – [Alert when new app or service principal credentials added](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/AuditLogs/NewAppOrServicePrincipalCredential.yaml)
* Azure Monitor – [Azure AD workbook to help you assess Solorigate risk - Microsoft Tech Community](https://techcommunity.microsoft.com/t5/azure-active-directory-identity/azure-ad-workbook-to-help-you-assess-solorigate-risk/ba-p/2010718)
## Application permissions
-Like an administrator account, applications can be assigned privileged roles. Apps can be assigned Azure AD roles, such as global administrator, or Azure RBAC roles such as subscription owner. Because they can run without a user present and as a background service, closely monitor anytime an application is granted a highly privileged role or permission.
+Like an administrator account, applications can be assigned privileged roles. Apps can be assigned Azure AD roles, such as Global Administrator, or Azure RBAC roles, such as Subscription Owner. Because apps can run without a user present and as a background service, closely monitor whenever an application is granted a highly privileged role or permission.
### Service principal assigned to a role

| What to monitor| Risk Level| Where| Filter/sub-filter| Notes |
|-|-|-|-|-|
-| App assigned to Azure RBAC role, or Azure AD Role| High to Medium| Azure AD Audit logs| Type: service principal<br>Activity: "Add member to role" or "Add eligible member to role"<br>-or-<br>"Add scoped member to role."| For highly privileged roles such as Global Administrator, risk is high. For lower privileged roles risk is medium. Alert anytime an application is assigned to an Azure role or Azure AD role outside of normal change management or configuration procedures. |
+| App assigned to Azure RBAC role, or Azure AD Role| High to Medium| Azure AD Audit logs| Type: service principal<br>Activity: "Add member to role" or "Add eligible member to role"<br>-or-<br>"Add scoped member to role."| For highly privileged roles such as Global Administrator, risk is high. For lower privileged roles risk is medium. Alert anytime an application is assigned to an Azure role or Azure AD role outside of normal change management or configuration procedures.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/AuditLogs/ServicePrincipalAssignedPrivilegedRole.yaml)<br><br>[Sigma rules template](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) |
### Application granted highly privileged permissions
-Applications should also follow the principal of least privilege. Investigate application permissions to ensure they're truly needed. You can create an [app consent grant report](https://aka.ms/getazureadpermissions) to help identify existing applications and highlight privileged permissions.
+Applications should follow the principle of least privilege. Investigate application permissions to ensure they're needed. You can create an [app consent grant report](https://aka.ms/getazureadpermissions) to help identify applications and highlight privileged permissions.
| What to monitor|Risk Level|Where| Filter/sub-filter| Notes|
|-|-|-|-|-|
-| App granted highly privileged permissions, such as permissions with "*.All" (Directory.ReadWrite.All) or wide ranging permissions (Mail.*)| High |Azure AD Audit logs| "Add app role assignment to service principal", <br>- where-<br> Target(s) identifies an API with sensitive data (such as Microsoft Graph) <br>-and-<br>AppRole.Value identifies a highly privileged application permission (app role).| Apps granted broad permissions such as "*.All" (Directory.ReadWrite.All) or wide ranging permissions (Mail.*) |
-| Administrator granting either application permissions (app roles) or highly privileged delegated permissions |High| Microsoft 365 portal| "Add app role assignment to service principal", <br>-where-<br>Target(s) identifies an API with sensitive data (such as Microsoft Graph)<br>"Add delegated permission grant", <br>-where-<br>Target(s) identifies an API with sensitive data (such as Microsoft Graph) <br>-and-<br>DelegatedPermissionGrant.Scope includes high-privilege permissions.| Alert when a global administrator, application administrator, or cloud application administrator consents to an application. Especially look for consent outside of normal activity and change procedures. |
-| Application is granted permissions for Microsoft Graph, Exchange, SharePoint, or Azure AD. |High| Azure AD Audit logs| "Add delegated permission grant" <br>-or-<br>"Add app role assignment to service principal", <br>-where-<br>Target(s) identifies an API with sensitive data (such as Microsoft Graph, Exchange Online, and so on)| Alert as in the preceding row. |
-| Application permissions (app roles) for other APIs are granted |Medium| Azure AD Audit logs| "Add app role assignment to service principal", <br>-where-<br>Target(s) identifies any other API.| Alert as in the preceding row. |
-| Highly privileged delegated permissions are granted on behalf of all users |High| Azure AD Audit logs| "Add delegated permission grant", where Target(s) identifies an API with sensitive data (such as Microsoft Graph), <br> DelegatedPermissionGrant.Scope includes high-privilege permissions, <br>-and-<br>DelegatedPermissionGrant.ConsentType is "AllPrincipals".| Alert as in the preceding row. |
+| App granted highly privileged permissions, such as permissions with "*.All" (Directory.ReadWrite.All) or wide ranging permissions (Mail.*)| High |Azure AD Audit logs| "Add app role assignment to service principal", <br>-where-<br> Target(s) identifies an API with sensitive data (such as Microsoft Graph) <br>-and-<br>AppRole.Value identifies a highly privileged application permission (app role).| Apps granted broad permissions such as "*.All" (Directory.ReadWrite.All) or wide ranging permissions (Mail.*)<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/AuditLogs/ServicePrincipalAssignedAppRoleWithSensitiveAccess.yaml)<br><br>[Sigma rules template](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) |
+| Administrator granting either application permissions (app roles) or highly privileged delegated permissions |High| Microsoft 365 portal| "Add app role assignment to service principal", <br>-where-<br>Target(s) identifies an API with sensitive data (such as Microsoft Graph)<br>"Add delegated permission grant", <br>-where-<br>Target(s) identifies an API with sensitive data (such as Microsoft Graph) <br>-and-<br>DelegatedPermissionGrant.Scope includes high-privilege permissions.| Alert when a global administrator, application administrator, or cloud application administrator consents to an application. Especially look for consent outside of normal activity and change procedures.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/AuditLogs/ServicePrincipalAssignedAppRoleWithSensitiveAccess.yaml)<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/AuditLogs/AzureADRoleManagementPermissionGrant.yaml)<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/AuditLogs/MailPermissionsAddedToApplication.yaml)<br><br>[Sigma rules template](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) |
+| Application is granted permissions for Microsoft Graph, Exchange, SharePoint, or Azure AD. |High| Azure AD Audit logs| "Add delegated permission grant" <br>-or-<br>"Add app role assignment to service principal", <br>-where-<br>Target(s) identifies an API with sensitive data (such as Microsoft Graph, Exchange Online, and so on)| Alert as in the preceding row.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/AuditLogs/ServicePrincipalAssignedAppRoleWithSensitiveAccess.yaml)<br><br>[Sigma rules template](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) |
+| Application permissions (app roles) for other APIs are granted |Medium| Azure AD Audit logs| "Add app role assignment to service principal", <br>-where-<br>Target(s) identifies any other API.| Alert as in the preceding row.<br>[Link to Sigma repo](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) |
+| Highly privileged delegated permissions are granted on behalf of all users |High| Azure AD Audit logs| "Add delegated permission grant", where Target(s) identifies an API with sensitive data (such as Microsoft Graph), <br> DelegatedPermissionGrant.Scope includes high-privilege permissions, <br>-and-<br>DelegatedPermissionGrant.ConsentType is "AllPrincipals".| Alert as in the preceding row.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/AuditLogs/ServicePrincipalAssignedAppRoleWithSensitiveAccess.yaml)<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/AuditLogs/AzureADRoleManagementPermissionGrant.yaml)<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/AuditLogs/SuspiciousOAuthApp_OfflineAccess.yaml)<br><br>[Sigma rules template](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) |
For more information on monitoring app permissions, see this tutorial: [Investigate and remediate risky OAuth apps](/cloud-app-security/investigate-risky-oauth).

### Azure Key Vault
-Azure Key Vault can be used to store your tenant's secrets. We recommend you pay particular attention to any changes to Key Vault configuration and activities.
+Use Azure Key Vault to store your tenant's secrets. We recommend you pay attention to any changes to Key Vault configuration and activities.
| What to monitor| Risk Level| Where| Filter/sub-filter| Notes |
|-|-|-|-|-|
-| How and when your Key Vaults are accessed and by whom| Medium| [Azure Key Vault logs](../../key-vault/general/logging.md?tabs=Vault)| Resource type: Key Vaults| Look for <li> any access to Key Vault outside of regular processes and hours. <li> any changes to Key Vault ACL. |
+| How and when your Key Vaults are accessed and by whom| Medium| [Azure Key Vault logs](../../key-vault/general/logging.md?tabs=Vault)| Resource type: Key Vaults| Look for: any access to Key Vault outside regular processes and hours, any changes to Key Vault ACL.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Hunting%20Queries/AzureDiagnostics/AzureKeyVaultAccessManipulation.yaml)<br><br>[Sigma rules template](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) |
-After setting up Azure Key Vault, be sure to [enable logging](../../key-vault/general/howto-logging.md?tabs=azure-cli), which shows [how and when your Key Vaults are accessed](../../key-vault/general/logging.md?tabs=Vault), and [configure alerts](../../key-vault/general/alert.md) on Key Vault to notify assigned users or distribution lists via email, phone call, text message, or [event grid](../../key-vault/general/event-grid-overview.md) notification if health is impacted. Additionally, setting up [monitoring](../../key-vault/general/alert.md) with Key Vault insights will give you a snapshot of Key Vault requests, performance, failures, and latency. [Log Analytics](../../azure-monitor/logs/log-analytics-overview.md) also has some [example queries](../../azure-monitor/logs/queries.md) for Azure Key Vault that can be accessed after selecting your Key Vault and then under "Monitoring" selecting "Logs".
+After you set up Azure Key Vault, [enable logging](../../key-vault/general/howto-logging.md?tabs=azure-cli). See [how and when your Key Vaults are accessed](../../key-vault/general/logging.md?tabs=Vault), and [configure alerts](../../key-vault/general/alert.md) on Key Vault to notify assigned users or distribution lists via email, phone, text, or [Event Grid](../../key-vault/general/event-grid-overview.md) notification if health is affected. In addition, setting up [monitoring](../../key-vault/general/alert.md) with Key Vault insights gives you a snapshot of Key Vault requests, performance, failures, and latency. [Log Analytics](../../azure-monitor/logs/log-analytics-overview.md) also has some [example queries](../../azure-monitor/logs/queries.md) for Azure Key Vault, which you can access by selecting your Key Vault and then, under **Monitoring**, selecting **Logs**.
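As an illustration of the "access outside regular processes and hours" check in the table above, this Python sketch filters exported Key Vault log entries by hour. The field names (`time`, `operationName`, `callerIpAddress`) follow the Key Vault diagnostic log schema; the business-hours window is an assumed example, not a recommendation.

```python
from datetime import datetime

# Business hours (07:00-19:00 UTC) are an assumed example window.
BUSINESS_START, BUSINESS_END = 7, 19

def out_of_hours_access(entries):
    """Flag Key Vault log entries recorded outside regular business hours."""
    flagged = []
    for entry in entries:
        hour = datetime.fromisoformat(entry["time"]).hour
        if not (BUSINESS_START <= hour < BUSINESS_END):
            flagged.append(entry)
    return flagged

logs = [
    {"time": "2022-09-14T10:15:00", "operationName": "SecretGet", "callerIpAddress": "203.0.113.5"},
    {"time": "2022-09-14T02:40:00", "operationName": "VaultPatch", "callerIpAddress": "198.51.100.9"},
]
print([e["operationName"] for e in out_of_hours_access(logs)])  # ['VaultPatch']
```

A production check would run as a scheduled query in Log Analytics or Microsoft Sentinel rather than offline, but the filtering logic is the same.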
### End-user consent

| What to monitor| Risk Level| Where| Filter/sub-filter| Notes |
|-|-|-|-|-|
-| End-user consent to application| Low| Azure AD Audit logs| Activity: Consent to application / ConsentContext.IsAdminConsent = false| Look for: <li>high profile or highly privileged accounts.<li> app requests high-risk permissions<li>apps with suspicious names, for example generic, misspelled, etc. |
+| End-user consent to application| Low| Azure AD Audit logs| Activity: Consent to application / ConsentContext.IsAdminConsent = false| Look for: high-profile or highly privileged accounts, apps requesting high-risk permissions, and apps with suspicious names (for example, generic or misspelled).<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Hunting%20Queries/AuditLogs/ConsentToApplicationDiscovery.yaml)<br><br>[Sigma rules template](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) |
-
-The act of consenting to an application is not in itself malicious. However, investigate new end-user consent grants looking for suspicious applications. You can [restrict user consent operations](../../security/fundamentals/steps-secure-identity.md).
+The act of consenting to an application isn't in itself malicious. However, investigate new end-user consent grants for suspicious applications. You can [restrict user consent operations](../../security/fundamentals/steps-secure-identity.md).
For more information on consent operations, see the following resources:
* [Incident response playbook - App consent grant investigation](/security/compass/incident-response-playbook-app-consent)
-### End user stopped due to risk-based consent
+### End user stopped due to risk-based consent
| What to monitor| Risk Level| Where| Filter/sub-filter| Notes |
|-|-|-|-|-|
-| End-user consent stopped due to risk-based consent| Medium| Azure AD Audit logs| Core Directory / ApplicationManagement / Consent to application<br> Failure status reason = Microsoft.online.Security.userConsent<br>BlockedForRiskyAppsExceptions| Monitor and analyze any time consent is stopped due to risk. Look for:<li>high profile or highly privileged accounts.<li> app requests high-risk permissions<li>apps with suspicious names, for example generic, misspelled, etc. |
+| End-user consent stopped due to risk-based consent| Medium| Azure AD Audit logs| Core Directory / ApplicationManagement / Consent to application<br> Failure status reason = Microsoft.online.Security.userConsent<br>BlockedForRiskyAppsExceptions| Monitor and analyze any time consent is stopped due to risk. Look for: high-profile or highly privileged accounts, apps requesting high-risk permissions, or apps with suspicious names (for example, generic or misspelled).<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/AuditLogs/End-userconsentstoppedduetorisk-basedconsent.yaml)<br><br>[Sigma rules template](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) |
-## Application Authentication Flows
-There are several flows defined in the OAuth 2.0 protocol. The recommended flow for an application depends on the type of application that is being built. In some cases, there is a choice of flows available to the application, and in this case, some authentication flows are recommended over others. Specifically, resource owner password credentials (ROPC) should be avoided if at all possible as this requires the user to expose their current password credentials to the application directly. The application then uses those credentials to authenticate the user against the identity provider. Most applications should use the auth code flow, or auth code flow with Proof Key for Code Exchange (PKCE), as this flow is highly recommended.
+## Application authentication flows
+There are several flows in the OAuth 2.0 protocol. The recommended flow for an application depends on the type of application being built. In some cases, there's a choice of flows available to the application. In that case, some authentication flows are recommended over others. Specifically, avoid resource owner password credentials (ROPC) because it requires the user to expose their current password credentials to the application. The application then uses the credentials to authenticate the user against the identity provider. Most applications should use the authorization code flow, preferably with Proof Key for Code Exchange (PKCE).
-The only scenario where ROPC is suggested is for automated testing of applications. See [Run automated integration tests](../develop/test-automate-integration-testing.md) for details.
+The only scenario where ROPC is suggested is for automated application testing. See [Run automated integration tests](../develop/test-automate-integration-testing.md) for details.
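For reference, the PKCE values used with the auth code flow can be generated as in this minimal Python sketch (the `S256` method from RFC 7636). It illustrates only the code-verifier/code-challenge step, not a complete sign-in implementation.

```python
import base64
import hashlib
import secrets

def make_pkce_pair():
    """Generate a PKCE code_verifier and S256 code_challenge (RFC 7636)."""
    # 32 random bytes -> 43-character base64url verifier (padding stripped).
    verifier = base64.urlsafe_b64encode(secrets.token_bytes(32)).rstrip(b"=").decode()
    digest = hashlib.sha256(verifier.encode("ascii")).digest()
    challenge = base64.urlsafe_b64encode(digest).rstrip(b"=").decode()
    return verifier, challenge

verifier, challenge = make_pkce_pair()
# Send code_challenge (with code_challenge_method=S256) on the authorize request;
# send code_verifier when redeeming the authorization code at the token endpoint.
print(len(verifier), len(challenge))  # 43 43
```

In practice, use a supported authentication library (such as MSAL), which performs this step for you.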
-
-Device code flow is another OAuth 2.0 protocol flow specifically for input constrained devices and is not used in all environments. If this type of flow is seen in the environment and not being used in an input constrained device scenario further investigation is warranted. This can be a misconfigured application or potentially something malicious.
+Device code flow is another OAuth 2.0 protocol flow for input-constrained devices, and isn't used in all environments. If device code flow appears in the environment and isn't being used in an input-constrained device scenario, investigate further: it can indicate a misconfigured application or potentially something malicious.
Monitor application authentication using the following information:

| What to monitor| Risk level| Where| Filter/sub-filter| Notes |
| - | - | - | - | - |
-| Applications that are using the ROPC authentication flow|Medium | Azure AD Sign-ins log|Status=Success<br><br>Authentication Protocol-ROPC| High level of trust is being placed in this application as the credentials can be cached or stored. Move if possible to a more secure authentication flow.This should only be used in automated testing of applications, if at all. For more information, see [Microsoft identity platform and OAuth 2.0 Resource Owner Password Credentials](../develop/v2-oauth-ropc.md)|
-|Applications that are using the Device code flow |Low to medium|Azure AD Sign-ins log|Status=Success<br><br>Authentication Protocol-Device Code|Device code flows are used for input constrained devices which may not be present in all environments. If successful device code flows are seen without an environment need for them they should be further investigated for validity. For more information, see [Microsoft identity platform and the OAuth 2.0 device authorization grant flow](../develop/v2-oauth2-device-code.md)|
+| Applications that are using the ROPC authentication flow|Medium | Azure AD Sign-ins log|Status=Success<br><br>Authentication Protocol-ROPC| High level of trust is being placed in this application as the credentials can be cached or stored. Move if possible to a more secure authentication flow. This should only be used in automated testing of applications, if at all. For more information, see [Microsoft identity platform and OAuth 2.0 Resource Owner Password Credentials](../develop/v2-oauth-ropc.md)<br><br>[Sigma rules template](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure)|
+|Applications using the Device code flow |Low to medium|Azure AD Sign-ins log|Status=Success<br><br>Authentication Protocol-Device Code|Device code flows are used for input-constrained devices, which may not be present in all environments. If successful device code flows appear without an environment need for them, investigate them for validity. For more information, see [Microsoft identity platform and the OAuth 2.0 device authorization grant flow](../develop/v2-oauth2-device-code.md)<br><br>[Sigma rules template](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure)|
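The two sign-in checks in this table amount to a simple filter over exported sign-in records, sketched below in Python. The `status` and `authenticationProtocol` field names and values are illustrative assumptions; match them to the schema of your actual sign-in log export.

```python
# Protocol names are illustrative; align them with the values your
# sign-in log export actually uses.
WATCHED_PROTOCOLS = {"ropc", "deviceCode"}

def risky_protocol_signins(signins):
    """Return successful sign-ins that used ROPC or device code flow."""
    return [
        s for s in signins
        if s.get("status") == "Success"
        and s.get("authenticationProtocol") in WATCHED_PROTOCOLS
    ]

signins = [
    {"id": "1", "status": "Success", "authenticationProtocol": "ropc"},
    {"id": "2", "status": "Success", "authenticationProtocol": "none"},
    {"id": "3", "status": "Failure", "authenticationProtocol": "deviceCode"},
]
print([s["id"] for s in risky_protocol_signins(signins)])  # ['1']
```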
+ ## Application configuration changes
-Monitor changes to any applicationΓÇÖs configuration. Specifically, configuration changes to the uniform resource identifier (URI), ownership, and logout URL.
+Monitor changes to application configuration. Specifically, configuration changes to the uniform resource identifier (URI), ownership, and log-out URL.
-### Dangling URI and Redirect URI changes
+### Dangling URI and Redirect URI changes
| What to monitor| Risk Level| Where| Filter/sub-filter| Notes |
|-|-|-|-|-|
-| Dangling URI| High| Azure AD Logs and Application Registration| Service-Core Directory, Category-ApplicationManagement<br>Activity: Update Application<br>Success – Property Name AppAddress| Look for dangling URIs, for example, that point to a domain name that no longer exists or one that you don't explicitly own. |
-| Redirect URI configuration changes| High| Azure AD logs| Service-Core Directory, Category-ApplicationManagement<br>Activity: Update Application<br>Success – Property Name AppAddress| Look for URIs not using HTTPS*, URIs with wildcards at the end or the domain of the URL, URIs that are NOT unique to the application, URIs that point to a domain you do not control. |
+| Dangling URI| High| Azure AD Logs and Application Registration| Service-Core Directory, Category-ApplicationManagement<br>Activity: Update Application<br>Success – Property Name AppAddress| For example, look for dangling URIs that point to a domain name that no longer exists or one that you don't explicitly own.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/AuditLogs/URLAddedtoApplicationfromUnknownDomain.yaml)<br><br>[Link to Sigma repo](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) |
+| Redirect URI configuration changes| High| Azure AD logs| Service-Core Directory, Category-ApplicationManagement<br>Activity: Update Application<br>Success – Property Name AppAddress| Look for URIs not using HTTPS*, URIs with wildcards at the end or the domain of the URL, URIs that are NOT unique to the application, URIs that point to a domain you don't control.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/AuditLogs/ApplicationRedirectURLUpdate.yaml)<br><br>[Sigma rules template](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) |
-Alert anytime these changes are detected.
+Alert when these changes are detected.
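A minimal Python sketch of the redirect URI checks above; the allowlist of owned domains is a hypothetical example, and real dangling-URI detection also requires verifying DNS ownership of each domain.

```python
from urllib.parse import urlparse

# Owned domains are an assumed example allowlist for illustration.
OWNED_DOMAINS = {"contoso.com"}

def risky_redirect_uris(uris, owned=OWNED_DOMAINS):
    """Flag redirect URIs that are not HTTPS, use wildcards, or point
    to a domain outside the allowlist (a possible dangling URI)."""
    flagged = []
    for uri in uris:
        parsed = urlparse(uri)
        host = parsed.hostname or ""
        if (parsed.scheme != "https"
                or "*" in uri
                or not any(host == d or host.endswith("." + d) for d in owned)):
            flagged.append(uri)
    return flagged

print(risky_redirect_uris([
    "https://app.contoso.com/callback",      # OK
    "http://app.contoso.com/callback",       # not HTTPS
    "https://*.contoso.com/callback",        # wildcard
    "https://old-app.example.net/callback",  # unowned domain
]))
```

The first URI passes all three checks; the other three are returned for review.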
### AppID URI added, modified, or removed

| What to monitor| Risk Level| Where| Filter/sub-filter| Notes |
|-|-|-|-|-|
-| Changes to AppID URI| High| Azure AD logs| Service-Core Directory, Category-ApplicationManagement<br>Activity: Update<br>Application<br>Activity: Update Service principal| Look for any AppID URI modifications, such as adding, modifying, or removing the URI. |
+| Changes to AppID URI| High| Azure AD logs| Service-Core Directory, Category-ApplicationManagement<br>Activity: Update<br>Application<br>Activity: Update Service principal| Look for any AppID URI modifications, such as adding, modifying, or removing the URI.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/AuditLogs/ApplicationIDURIChanged.yaml)<br><br>[Sigma rules template](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) |
+Alert when these changes are detected outside approved change management procedures.
-Alert any time these changes are detected outside of approved change management procedures.
-
-### New Owner
-
+### New owner
| What to monitor| Risk Level| Where| Filter/sub-filter| Notes |
|-|-|-|-|-|
-| Changes to application ownership| Medium| Azure AD logs| Service-Core Directory, Category-ApplicationManagement<br>Activity: Add owner to application| Look for any instance of a user being added as an application owner outside of normal change management activities. |
+| Changes to application ownership| Medium| Azure AD logs| Service-Core Directory, Category-ApplicationManagement<br>Activity: Add owner to application| Look for any instance of a user being added as an application owner outside of normal change management activities.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/AuditLogs/ChangestoApplicationOwnership.yaml)<br><br>[Sigma rules template](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) |
-### Logout URL modified or removed
+### Log-out URL modified or removed
| What to monitor| Risk Level| Where| Filter/sub-filter| Notes |
|-|-|-|-|-|
-| Changes to logout URL| Low| Azure AD logs| Service-Core Directory, Category-ApplicationManagement<br>Activity: Update Application<br>-and-<br>Activity: Update service principle| Look for any modifications to a sign out URL. Blank entries or entries to non-existent locations would stop a user from terminating a session. |
-
-## Additional Resources
+| Changes to log-out URL| Low| Azure AD logs| Service-Core Directory, Category-ApplicationManagement<br>Activity: Update Application<br>-and-<br>Activity: Update service principal| Look for any modifications to a sign-out URL. Blank entries or entries to non-existent locations would stop a user from terminating a session.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/AuditLogs/ChangestoApplicationLogoutURL.yaml) |
-The following are links to useful resources:
+## Resources
* GitHub Azure AD toolkit - [https://github.com/microsoft/AzureADToolkit](https://github.com/microsoft/AzureADToolkit)
* OAuth attack detection guidance - [Unusual addition of credentials to an OAuth app](/cloud-app-security/investigate-anomaly-alerts)
-Azure AD monitoring configuration information for SIEMs - [Partner tools with Azure Monitor integration](../..//azure-monitor/essentials/stream-monitoring-data-event-hubs.md)
+* Azure AD monitoring configuration information for SIEMs - [Partner tools with Azure Monitor integration](../../azure-monitor/essentials/stream-monitoring-data-event-hubs.md)
- ## Next steps
-
-See these security operations guide articles:
+## Next steps
[Azure AD security operations overview](security-operations-introduction.md)

[Security operations for user accounts](security-operations-user-accounts.md)
+[Security operations for consumer accounts](security-operations-consumer-accounts.md)
+ [Security operations for privileged accounts](security-operations-privileged-accounts.md)

[Security operations for Privileged Identity Management](security-operations-privileged-identity-management.md)
-[Security operations for applications](security-operations-applications.md)
- [Security operations for devices](security-operations-devices.md)
-
-[Security operations for infrastructure](security-operations-infrastructure.md)
+[Security operations for infrastructure](security-operations-infrastructure.md)
active-directory Security Operations Consumer Accounts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/security-operations-consumer-accounts.md
+
+ Title: Azure Active Directory security operations for consumer accounts
+description: Guidance to establish baselines and how to monitor and alert on potential security issues with consumer accounts.
+++++++ Last updated : 07/15/2021+++++
+# Azure Active Directory security operations for consumer accounts
+
+Activities associated with consumer identities are another critical area for your organization to protect and monitor. This article is for Azure AD B2C tenants and provides guidance for monitoring consumer account activities. The activities are organized by:
+
+* Consumer account activities
+* Privileged account activities
+* Application activities
+* Infrastructure activities
+
+If you have not yet read the [Azure Active Directory (Azure AD) security operations overview](security-operations-introduction.md), we recommend you do so before proceeding.
+
+## Define a baseline
+
+To discover anomalous behavior, you first must define what normal and expected behavior is. Defining expected behavior for your organization helps you determine when unexpected behavior occurs. The definition also helps to reduce the noise level of false positives when monitoring and alerting.
+
+Once you define what you expect, you perform baseline monitoring to validate your expectations. With that information, you can monitor the logs for anything that falls outside of tolerances you define.
+
+Use the Azure AD Audit Logs, Azure AD Sign-in Logs, and directory attributes as your data sources for accounts created outside of normal processes. The following are suggestions to help you think about and define what normal is for your organization.
+
+* **Consumer account creation** – evaluate the following:
+
+ * Strategy and principles for tools and processes used for creating and managing consumer accounts. For example, are there standard attributes or formats that are applied to consumer account attributes?
+
+ * Approved sources for account creation. For example, onboarding custom policies, customer provisioning or migration tool.
+
+ * Alert strategy for accounts created outside of approved sources. Is there a controlled list of organizations your organization collaborates with?
+
+ * Strategy and alert parameters for accounts created, modified, or disabled by an account that isn't an approved consumer account administrator.
+
+ * Monitoring and alert strategy for consumer accounts missing standard attributes, such as customer number or not following organizational naming conventions.
+
+ * Strategy, principles, and process for account deletion and retention.
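The attribute and naming-convention baselines above can be sketched as a simple conformance filter. In this Python sketch, the `customerNumber` attribute and the `CUST-` numbering convention are hypothetical examples, not Azure AD B2C defaults.

```python
import re

# Example convention: a customerNumber like CUST-123456 is an assumption.
CUSTOMER_NUMBER_PATTERN = re.compile(r"^CUST-\d{6}$")
REQUIRED_ATTRIBUTES = ("displayName", "customerNumber")

def nonconforming_accounts(accounts):
    """Flag consumer accounts missing standard attributes or breaking
    the naming convention, for review against your baseline."""
    flagged = []
    for account in accounts:
        missing = [a for a in REQUIRED_ATTRIBUTES if not account.get(a)]
        bad_number = (not missing
                      and not CUSTOMER_NUMBER_PATTERN.match(account["customerNumber"]))
        if missing or bad_number:
            flagged.append(account["id"])
    return flagged

accounts = [
    {"id": "u1", "displayName": "Avery", "customerNumber": "CUST-000123"},
    {"id": "u2", "displayName": "Blake", "customerNumber": "123"},
    {"id": "u3", "displayName": "Casey"},  # missing customerNumber
]
print(nonconforming_accounts(accounts))  # ['u2', 'u3']
```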
+
+## Where to look
+
+The log files you use for investigation and monitoring are:
+
+* [Azure AD Audit logs](../reports-monitoring/concept-audit-logs.md)
+
+* [Sign-in logs](../reports-monitoring/concept-all-sign-ins.md)
+
+* [Risky Users log](../identity-protection/howto-identity-protection-investigate-risk.md)
+
+* [UserRiskEvents log](../identity-protection/howto-identity-protection-investigate-risk.md)
+
+From the Azure portal, you can view the Azure AD Audit logs and download them as comma-separated value (CSV) or JavaScript Object Notation (JSON) files. The Azure portal has several ways to integrate Azure AD logs with other tools that allow for greater automation of monitoring and alerting:
+
+* **[Microsoft Sentinel](../../sentinel/overview.md)** – enables intelligent security analytics at the enterprise level by providing security information and event management (SIEM) capabilities.
+
+* **[Sigma rule templates](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure)** - Sigma is an evolving open standard for writing rules and templates that automated management tools can use to parse log files. Where Sigma templates exist for our recommended search criteria, we've added a link to the Sigma repo. The Sigma templates aren't written, tested, and managed by Microsoft. Rather, the repo and templates are created and collected by the worldwide IT security community.
+
+* **[Azure Monitor](../../azure-monitor/overview.md)** – enables automated monitoring and alerting of various conditions. Can create or use workbooks to combine data from different sources.
+
+* **[Azure Event Hubs](../../event-hubs/event-hubs-about.md) integrated with a SIEM** – [Azure AD logs can be integrated with other SIEMs](../reports-monitoring/tutorial-azure-monitor-stream-logs-to-event-hub.md) such as Splunk, ArcSight, QRadar, and Sumo Logic via the Azure Event Hubs integration.
+
+* **[Microsoft Defender for Cloud Apps](/cloud-app-security/what-is-cloud-app-security)** – enables you to discover and manage apps, govern across apps and resources, and check your cloud apps' compliance.
+
+* **[Securing workload identities with Identity Protection Preview](..//identity-protection/concept-workload-identity-risk.md)** - Used to detect risk on workload identities across sign-in behavior and offline indicators of compromise.
+
+ The remainder of this article describes what we recommend you monitor and alert on, organized by the type of threat. Where specific pre-built solutions exist, we link to them or provide samples following the table. Otherwise, you can build alerts using the preceding tools.
+
+## Consumer accounts
+
+| What to monitor | Risk Level | Where | Filter / subfilter | Notes |
+| - | - | - | - | - |
+| Large number of account creations or deletions | High | Azure AD Audit logs | Activity: Add user<br>Status = success<br>Initiated by (actor) = CPIM Service<br>-and-<br>Activity: Delete user<br>Status = success<br>Initiated by (actor) = CPIM Service | Define a baseline threshold, and then monitor and adjust to suit your organizational behaviors and limit false alerts from being generated. |
+| Accounts created and deleted by non-approved users or processes. | Medium | Azure AD Audit logs | Initiated by (actor) – USER PRINCIPAL NAME<br>-and-<br>Activity: Add user<br>Status = success<br>Initiated by (actor) != CPIM Service<br>and-or<br>Activity: Delete user<br>Status = success<br>Initiated by (actor) != CPIM Service | If the actors are non-approved users, configure to send an alert. |
+| Accounts assigned to a privileged role. | High | Azure AD Audit logs | Activity: Add user<br>Status = success<br>Initiated by (actor) == CPIM Service<br>-and-<br>Activity: Add member to role<br>Status = success | If the account is assigned to an Azure AD role, Azure role, or privileged group membership, alert and prioritize the investigation. |
+| Failed sign-in attempts. | Medium - if Isolated incident<br>High - if many accounts are experiencing the same pattern | Azure AD Sign-ins log | Status = failed<br>-and-<br>Sign-in error code 50126 - Error validating credentials due to invalid username or password.<br>-and-<br>Application == "CPIM PowerShell Client"<br>-or-<br>Application == "ProxyIdentityExperienceFramework" | Define a baseline threshold, and then monitor and adjust to suit your organizational behaviors and limit false alerts from being generated. |
+| Smart lock-out events. | Medium - if Isolated incident<br>High - if many accounts are experiencing the same pattern or a VIP | Azure AD Sign-ins log | Status = failed<br>-and-<br>Sign-in error code = 50053 – IdsLocked<br>-and-<br>Application == "CPIM PowerShell Client"<br>-or-<br>Application == "ProxyIdentityExperienceFramework" | Define a baseline threshold, and then monitor and adjust to suit your organizational behaviors and limit false alerts from being generated. |
+| Failed authentications from countries you don't operate out of. | Medium | Azure AD Sign-ins log | Status = failed<br>-and-<br>Location = \<unapproved location><br>-and-<br>Application == "CPIM PowerShell Client"<br>-or-<br>Application == "ProxyIdentityExperienceFramework" | Monitor entries not equal to the city names you provide. |
+| Increased failed authentications of any type. | Medium | Azure AD Sign-ins log | Status = failed<br>-and-<br>Application == "CPIM PowerShell Client"<br>-or-<br>Application == "ProxyIdentityExperienceFramework" | If you don't have a set threshold, monitor and alert if failures increase by 10% or greater. |
+| Account disabled/blocked for sign-ins | Low | Azure AD Sign-ins log | Status = Failure<br>-and-<br>error code = 50057, The user account is disabled. | This could indicate someone is trying to gain access to an account after they have left an organization. Although the account is blocked, it's important to log and alert on this activity. |
+| Measurable increase of successful sign-ins. | Low | Azure AD Sign-ins log | Status = Success<br>-and-<br>Application == "CPIM PowerShell Client"<br>-or-<br>Application == "ProxyIdentityExperienceFramework" | If you don't have a set threshold, monitor and alert if successful authentications increase by 10% or greater. |
+
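+As one example of the first row above, a Log Analytics query could alert when creation or deletion volume exceeds your baseline. This is a sketch rather than a tested rule: the one-hour window and the threshold of 500 are placeholders to adjust for your tenant.
+
+```kusto
+// Alert when the CPIM Service creates or deletes more accounts than expected in an hour.
+AuditLogs
+| where OperationName in ("Add user", "Delete user")
+| where Result == "success"
+| where tostring(InitiatedBy.app.displayName) == "CPIM Service"
+| summarize Operations = count() by OperationName, bin(TimeGenerated, 1h)
+| where Operations > 500 // placeholder threshold; tune to your baseline
+```
+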
+## Privileged accounts
+
+| What to monitor | Risk Level | Where | Filter/sub-filter | Notes |
+| - | - | - | - | - |
+| Sign-in failure, bad password threshold | High | Azure AD Sign-ins log | Status = Failure<br>-and-<br>error code = 50126 | Define a baseline threshold and then monitor and adjust to suit your organizational behaviors and limit false alerts from being generated. |
+| Failure because of Conditional Access requirement | High | Azure AD Sign-ins log | Status = Failure<br>-and-<br>error code = 53003<br>-and-<br>Failure reason = Blocked by Conditional Access | This event can be an indication an attacker is trying to get into the account. |
+| Interrupt | High, medium | Azure AD Sign-ins log | Status = Failure<br>-and-<br>error code = 53003<br>-and-<br>Failure reason = Blocked by Conditional Access | This event can be an indication an attacker has the password for the account but can't pass the MFA challenge. |
+| Account lockout | High | Azure AD Sign-ins log | Status = Failure<br>-and-<br>error code = 50053 | Define a baseline threshold, and then monitor and adjust to suit your organizational behaviors and limit false alerts from being generated. |
+| Account disabled or blocked for sign-ins | Low | Azure AD Sign-ins log | Status = Failure<br>-and-<br>Target = User UPN<br>-and-<br>error code = 50057 | This event could indicate someone is trying to gain access to an account after they've left the organization. Although the account is blocked, it's still important to log and alert on this activity. |
+| MFA fraud alert or block | High | Azure AD Sign-ins log/Azure Log Analytics | Sign-ins>Authentication details<br> Result details = MFA denied, fraud code entered | Privileged user has indicated they haven't instigated the MFA prompt, which could indicate an attacker has the password for the account. |
+| MFA fraud alert or block | High | Azure AD Sign-ins log/Azure Log Analytics | Activity type = Fraud reported - User is blocked for MFA or fraud reported - No action taken (based on fraud report tenant-level settings) | Privileged user indicated no instigation of the MFA prompt. This can indicate an attacker has the account password. |
+| Privileged account sign-ins outside of expected controls | High | Azure AD Sign-ins log | Status = Failure<br>UserPrincipalName = \<Admin account> <br> Location = \<unapproved location> <br> IP address = \<unapproved IP><br>Device info = \<unapproved Browser, Operating System> | Monitor and alert on any entries that you've defined as unapproved. |
+| Outside of normal sign-in times | High | Azure AD Sign-ins log | Status = Success<br>-and-<br>Location =<br>-and-<br>Time = Outside of working hours | Monitor and alert if sign-ins occur outside of expected times. It's important to find the normal working pattern for each privileged account and to alert if there are unplanned changes outside of normal working times. Sign-ins outside of normal working hours could indicate compromise or possible insider threats. |
+| Password change | High | Azure AD Audit logs | Activity actor = Admin/self-service<br>-and-<br>Target = User<br>-and-<br>Status = Success or failure | Alert on any admin account password changes, especially for global admins, user admins, subscription admins, and emergency access accounts. Write a query targeted at all privileged accounts. |
+| Changes to authentication methods | High | Azure AD Audit logs | Activity: Create identity provider<br>Category: ResourceManagement<br>Target: User Principal Name | This change could be an indication of an attacker adding an auth method to the account so they can have continued access. |
+| Identity Provider updated by non-approved actors | High | Azure AD Audit logs | Activity: Update identity provider<br>Category: ResourceManagement<br>Target: User Principal Name | This change could be an indication of an attacker adding an auth method to the account so they can have continued access. |
+| Identity Provider deleted by non-approved actors | High | Azure AD Audit logs | Activity: Delete identity provider<br>Category: ResourceManagement<br>Target: User Principal Name | This change could be an indication of an attacker removing an auth method from the account. |
+
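+The bad-password row above can be expressed as a Log Analytics sketch. The `PrivilegedAccounts` list and the threshold of 10 failures per hour are assumptions; substitute your own admin UPNs (or a Microsoft Sentinel watchlist) and a tuned threshold.
+
+```kusto
+// Repeated bad-password failures (error 50126) against known privileged accounts.
+let PrivilegedAccounts = dynamic(["admin@contoso.com", "breakglass@contoso.com"]); // placeholder UPNs
+SigninLogs
+| where ResultType == "50126" // error validating credentials
+| where UserPrincipalName in (PrivilegedAccounts)
+| summarize Failures = count() by UserPrincipalName, bin(TimeGenerated, 1h)
+| where Failures > 10 // placeholder threshold
+```
+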
+## Applications
+
+| What to monitor | Risk Level | Where | Filter/sub-filter | Notes |
+| - | - | - | - | - |
+| Added credentials to existing applications | High | Azure AD Audit logs | Service-Core Directory, Category-ApplicationManagement<br>Activity: Update Application-Certificates and secrets management<br>-and-<br>Activity: Update Service principal/Update Application | Alert when credentials are: added outside of normal business hours or workflows, of types not used in your environment, or added to a non-SAML flow supporting service principal. |
+| App assigned to an Azure role-based access control (RBAC) role, or Azure AD Role | High to medium | Azure AD Audit logs | Type: service principal<br>Activity: "Add member to role"<br>or<br>"Add eligible member to role"<br>-or-<br>"Add scoped member to role." | |
+| App granted highly privileged permissions, such as permissions with ".All" (Directory.ReadWrite.All) or wide-ranging permissions (Mail.) | High | Azure AD Audit logs | N/A | Apps granted broad permissions such as ".All" (Directory.ReadWrite.All) or wide-ranging permissions (Mail.) |
+| Administrator granting either application permissions (app roles) or highly privileged delegated permissions | High | Microsoft 365 portal | "Add app role assignment to service principal"<br>-where-<br>Target(s) identifies an API with sensitive data (such as Microsoft Graph)<br>"Add delegated permission grant"<br>-where-<br>Target(s) identifies an API with sensitive data (such as Microsoft Graph)<br>-and-<br>DelegatedPermissionGrant.Scope includes high-privilege permissions. | Alert when a global administrator, application administrator, or cloud application administrator consents to an application. Especially look for consent outside of normal activity and change procedures. |
+| Application is granted permissions for Microsoft Graph, Exchange, SharePoint, or Azure AD. | High | Azure AD Audit logs | "Add delegated permission grant"<br>-or-<br>"Add app role assignment to service principal"<br>-where-<br>Target(s) identifies an API with sensitive data (such as Microsoft Graph, Exchange Online, and so on) | Alert as in the preceding row. |
+| Highly privileged delegated permissions are granted on behalf of all users | High | Azure AD Audit logs | "Add delegated permission grant"<br>where<br>Target(s) identifies an API with sensitive data (such as Microsoft Graph)<br>DelegatedPermissionGrant.Scope includes high-privilege permissions<br>-and-<br>DelegatedPermissionGrant.ConsentType is "AllPrincipals". | Alert as in the preceding row. |
+| Applications that are using the ROPC authentication flow | Medium | Azure AD Sign-ins log | Status=Success<br>Authentication Protocol-ROPC | A high level of trust is being placed in this application, as the credentials can be cached or stored. Move if possible to a more secure authentication flow. This should only be used in automated testing of applications, if at all. |
+| Dangling URI | High | Azure AD Logs and Application Registration | Service-Core Directory<br>Category-ApplicationManagement<br>Activity: Update Application<br>Success – Property Name AppAddress | For example, look for dangling URIs that point to a domain name that no longer exists or one you don't own. |
+| Redirect URI configuration changes | High | Azure AD logs | Service-Core Directory<br>Category-ApplicationManagement<br>Activity: Update Application<br>Success – Property Name AppAddress | Look for URIs not using HTTPS*, URIs with wildcards at the end or the domain of the URL, URIs that are **not** unique to the application, URIs that point to a domain you don't control. |
+| Changes to AppID URI | High | Azure AD logs | Service-Core Directory<br>Category-ApplicationManagement<br>Activity: Update Application<br>Activity: Update Service principal | Look for any AppID URI modifications, such as adding, modifying, or removing the URI. |
+| Changes to application ownership | Medium | Azure AD logs | Service-Core Directory<br>Category-ApplicationManagement<br>Activity: Add owner to application | Look for any instance of a user being added as an application owner outside of normal change management activities. |
+| Changes to log-out URL | Low | Azure AD logs | Service-Core Directory<br>Category-ApplicationManagement<br>Activity: Update Application<br>-and-<br>Activity: Update service principal | Look for any modifications to a sign-out URL. Blank entries or entries to non-existent locations would stop a user from terminating a session. |
+
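+For the first row of the table above, a query like the following could surface credential changes on applications made by anyone outside an approved list. The `ApprovedActors` list is a hypothetical placeholder for your change-management identities, and the `has` match on the operation name is a loose filter you may want to tighten.
+
+```kusto
+// Certificate/secret changes on applications by actors outside the approved list.
+let ApprovedActors = dynamic(["appadmin@contoso.com"]); // placeholder
+AuditLogs
+| where OperationName has "Certificates and secrets management"
+| where tostring(InitiatedBy.user.userPrincipalName) !in (ApprovedActors)
+| project TimeGenerated, Application = tostring(TargetResources[0].displayName),
+    Actor = tostring(InitiatedBy.user.userPrincipalName)
+```
+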
+## Infrastructure
+
+| What to monitor | Risk Level | Where | Filter/sub-filter | Notes |
+| - | - | - | - | - |
+| New Conditional Access Policy created by non-approved actors | High | Azure AD Audit logs | Activity: Add conditional access policy<br>Category: Policy<br>Initiated by (actor): User Principal Name | Monitor and alert on Conditional Access changes. Is Initiated by (actor): approved to make changes to Conditional Access? |
+| Conditional Access Policy removed by non-approved actors | Medium | Azure AD Audit logs | Activity: Delete conditional access policy<br>Category: Policy<br>Initiated by (actor): User Principal Name | Monitor and alert on Conditional Access changes. Is Initiated by (actor): approved to make changes to Conditional Access? |
+| Conditional Access Policy updated by non-approved actors | High | Azure AD Audit logs | Activity: Update conditional access policy<br>Category: Policy<br>Initiated by (actor): User Principal Name | Monitor and alert on Conditional Access changes. Is Initiated by (actor): approved to make changes to Conditional Access?<br>Review Modified Properties and compare "old" vs "new" value |
+| B2C Custom policy created by non-approved actors | High | Azure AD Audit logs| Activity: Create custom policy<br>Category: ResourceManagement<br>Target: User Principal Name | Monitor and alert on Custom Policies changes. Is Initiated by (actor): approved to make changes to Custom Policies? |
+| B2C Custom policy updated by non-approved actors | High | Azure AD Audit logs| Activity: Get custom policies<br>Category: ResourceManagement<br>Target: User Principal Name | Monitor and alert on Custom Policies changes. Is Initiated by (actor): approved to make changes to Custom Policies? |
+| B2C Custom policy deleted by non-approved actors | Medium |Azure AD Audit logs | Activity: Delete custom policy<br>Category: ResourceManagement<br>Target: User Principal Name | Monitor and alert on Custom Policies changes. Is Initiated by (actor): approved to make changes to Custom Policies? |
+| User Flow created by non-approved actors | High |Azure AD Audit logs | Activity: Create user flow<br>Category: ResourceManagement<br>Target: User Principal Name | Monitor and alert on User Flow changes. Is Initiated by (actor): approved to make changes to User Flows? |
+| User Flow updated by non-approved actors | High | Azure AD Audit logs| Activity: Update user flow<br>Category: ResourceManagement<br>Target: User Principal Name | Monitor and alert on User Flow changes. Is Initiated by (actor): approved to make changes to User Flows? |
+| User Flow deleted by non-approved actors | Medium | Azure AD Audit logs| Activity: Delete user flow<br>Category: ResourceManagement<br>Target: User Principal Name | Monitor and alert on User Flow changes. Is Initiated by (actor): approved to make changes to User Flows? |
+| API Connectors created by non-approved actors | Medium | Azure AD Audit log| Activity: Create API connector<br>Category: ResourceManagement<br>Target: User Principal Name | Monitor and alert on API Connector changes. Is Initiated by (actor): approved to make changes to API Connectors? |
+| API Connectors updated by non-approved actors | Medium | Azure AD Audit logs| Activity: Update API connector<br>Category: ResourceManagement<br>Target: User Principal Name | Monitor and alert on API Connector changes. Is Initiated by (actor): approved to make changes to API Connectors? |
+| API Connectors deleted by non-approved actors | Medium | Azure AD Audit log | Activity: Delete API connector<br>Category: ResourceManagement<br>Target: User Principal Name | Monitor and alert on API Connector changes. Is Initiated by (actor): approved to make changes to API Connectors? |
+| Identity Provider created by non-approved actors | High |Azure AD Audit log | Activity: Create identity provider<br>Category: ResourceManagement<br>Target: User Principal Name | Monitor and alert on Identity Provider changes. Is Initiated by (actor): approved to make changes to Identity Provider configuration? |
+| Identity Provider updated by non-approved actors | High | Azure AD Audit log| Activity: Update identity provider<br>Category: ResourceManagement<br>Target: User Principal Name | Monitor and alert on Identity Provider changes. Is Initiated by (actor): approved to make changes to Identity Provider configuration? |
+| Identity Provider deleted by non-approved actors | Medium | Azure AD Audit log | Activity: Delete identity provider<br>Category: ResourceManagement<br>Target: User Principal Name | Monitor and alert on Identity Provider changes. Is Initiated by (actor): approved to make changes to Identity Provider configuration? |
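+
+The Conditional Access rows above can be combined into a single sketch that flags policy changes by non-approved actors. `ApprovedActors` is a placeholder for the identities permitted to change Conditional Access policy in your tenant.
+
+```kusto
+// Conditional Access policy changes made outside the approved-actor list.
+let ApprovedActors = dynamic(["secops@contoso.com"]); // placeholder
+AuditLogs
+| where OperationName in ("Add conditional access policy", "Update conditional access policy", "Delete conditional access policy")
+| where tostring(InitiatedBy.user.userPrincipalName) !in (ApprovedActors)
+| project TimeGenerated, OperationName, Actor = tostring(InitiatedBy.user.userPrincipalName)
+```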
+
+## Next steps
+
+See these security operations guide articles:
+
+[Azure AD security operations overview](security-operations-introduction.md)
+
+[Security operations for user accounts](security-operations-user-accounts.md)
+
+[Security operations for privileged accounts](security-operations-privileged-accounts.md)
+
+[Security operations for Privileged Identity Management](security-operations-privileged-identity-management.md)
+
+[Security operations for applications](security-operations-applications.md)
+
+[Security operations for devices](security-operations-devices.md)
+
+[Security operations for infrastructure](security-operations-infrastructure.md)
active-directory Security Operations Devices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/security-operations-devices.md
Previously updated : 08/19/2022 Last updated : 09/06/2022
# Azure Active Directory security operations for devices
-Devices aren't commonly targeted in identity-based attacks, but *can* be used to satisfy and trick security controls, or to impersonate users. Devices can have one of four relationships with Azure AD:
+Devices aren't commonly targeted in identity-based attacks, but *can* be used to satisfy and trick security controls, or to impersonate users. Devices can have one of four relationships with Azure AD:
* Unregistered
Devices aren't commonly targeted in identity-based attacks, but *can* be used to
* [Azure AD joined](../devices/concept-azure-ad-join.md)
-* [Hybrid Azure AD joined](../devices/concept-azure-ad-join-hybrid.md)
-ΓÇÄ
+* [Hybrid Azure AD joined](../devices/concept-azure-ad-join-hybrid.md)
Registered and joined devices are issued a [Primary Refresh Token (PRT),](../devices/concept-primary-refresh-token.md) which can be used as a primary authentication artifact, and in some cases as a multifactor authentication artifact. Attackers may try to register their own devices, use PRTs on legitimate devices to access business data, steal PRT-based tokens from legitimate user devices, or find misconfigurations in device-based controls in Azure Active Directory. With Hybrid Azure AD joined devices, the join process is initiated and controlled by administrators, reducing the available attack methods.
To reduce the risk of bad actors attacking your infrastructure through devices,
## Where to look
-The log files you use for investigation and monitoring are:
+The log files you use for investigation and monitoring are:
* [Azure AD Audit logs](../reports-monitoring/concept-audit-logs.md) * [Sign-in logs](../reports-monitoring/concept-all-sign-ins.md)
-* [Microsoft 365 Audit logs](/microsoft-365/compliance/auditing-solutions-overview)
+* [Microsoft 365 Audit logs](/microsoft-365/compliance/auditing-solutions-overview)
* [Azure Key Vault logs](../..//key-vault/general/logging.md?tabs=Vault) From the Azure portal, you can view the Azure AD Audit logs and download as comma-separated value (CSV) or JavaScript Object Notation (JSON) files. The Azure portal has several ways to integrate Azure AD logs with other tools that allow for greater automation of monitoring and alerting:
-* **[Microsoft Sentinel](../../sentinel/overview.md)** – enables intelligent security analytics at the enterprise level by providing security information and event management (SIEM) capabilities.
+* **[Microsoft Sentinel](../../sentinel/overview.md)** – enables intelligent security analytics at the enterprise level by providing security information and event management (SIEM) capabilities.
+
+* **[Sigma rule templates](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure)** - Sigma is an evolving open standard for writing rules and templates that automated management tools can use to parse log files. Where Sigma templates exist for our recommended search criteria, we've added a link to the Sigma repo. The Sigma templates aren't written, tested, and managed by Microsoft. Rather, the repo and templates are created and collected by the worldwide IT security community.
+* **[Azure Monitor](../..//azure-monitor/overview.md)** – enables automated monitoring and alerting of various conditions. Can create or use workbooks to combine data from different sources.
-* **[Azure Event Hubs](../../event-hubs/event-hubs-about.md) -integrated with a SIEM**- [Azure AD logs can be integrated to other SIEMs](../reports-monitoring/tutorial-azure-monitor-stream-logs-to-event-hub.md) such as Splunk, ArcSight, QRadar, and Sumo Logic via the Azure Event Hub integration.
+* **[Azure Event Hubs](../../event-hubs/event-hubs-about.md) -integrated with a SIEM**- [Azure AD logs can be integrated to other SIEMs](../reports-monitoring/tutorial-azure-monitor-stream-logs-to-event-hub.md) such as Splunk, ArcSight, QRadar, and Sumo Logic via the Azure Event Hubs integration.
-* **[Microsoft Defender for Cloud Apps](/cloud-app-security/what-is-cloud-app-security)** – enables you to discover and manage apps, govern across apps and resources, and check your cloud apps' compliance.
+* **[Microsoft Defender for Cloud Apps](/cloud-app-security/what-is-cloud-app-security)** – enables you to discover and manage apps, govern across apps and resources, and check your cloud apps' compliance.
* **[Securing workload identities with Identity Protection Preview](..//identity-protection/concept-workload-identity-risk.md)** - Used to detect risk on workload identities across sign-in behavior and offline indicators of compromise.
-Much of what you'll monitor and alert on are the effects of your Conditional Access policies. You can use the [Conditional Access insights and reporting workbook](../conditional-access/howto-conditional-access-insights-reporting.md) to examine the effects of one or more Conditional Access policies on your sign-ins, and the results of policies including device state. This workbook enables you to view an impact summary, and identify the impact over a specific time period. You can also use the workbook to investigate the sign-ins of a specific user.
+Much of what you'll monitor and alert on are the effects of your Conditional Access policies. You can use the [Conditional Access insights and reporting workbook](../conditional-access/howto-conditional-access-insights-reporting.md) to examine the effects of one or more Conditional Access policies on your sign-ins, and the results of policies including device state. This workbook enables you to view a summary, and identify the effects over a specific time period. You can also use the workbook to investigate the sign-ins of a specific user.
- The rest of this article describes what we recommend you monitor and alert on, and is organized by the type of threat. Where there are specific pre-built solutions we link to them or provide samples following the table. Otherwise, you can build alerts using the preceding tools.
+ The rest of this article describes what we recommend you monitor and alert on, and is organized by the type of threat. Where there are specific pre-built solutions we link to them or provide samples following the table. Otherwise, you can build alerts using the preceding tools.
- ## Device registrations and joins outside policy
+## Device registrations and joins outside policy
-Azure AD registered and Azure AD joined devices possess primary refresh tokens (PRTs), which are the equivalent of a single authentication factor. These devices can at times contain strong authentication claims. For more information on when PRTs contain strong authentication claims, see [When does a PRT get an MFA claim](../devices/concept-primary-refresh-token.md)? To keep bad actors from registering or joining devices, require multifactor authentication (MFA) to register or join devices. Then monitor for any devices registered or joined without MFA. YouΓÇÖll also need to watch for changes to MFA settings and policies, and device compliance policies.
+Azure AD registered and Azure AD joined devices possess primary refresh tokens (PRTs), which are the equivalent of a single authentication factor. These devices can at times contain strong authentication claims. For more information on when PRTs contain strong authentication claims, see [When does a PRT get an MFA claim](../devices/concept-primary-refresh-token.md)? To keep bad actors from registering or joining devices, require multi-factor authentication (MFA) to register or join devices. Then monitor for any devices registered or joined without MFA. You'll also need to watch for changes to MFA settings and policies, and device compliance policies.
| What to monitor| Risk Level| Where| Filter/sub-filter| Notes |
| - |- |- |- |- |
-| Device registration or join completed without MFA| Medium| Sign-in logs| Activity: successful authentication to Device Registration Service. <br>And<br>No MFA required| Alert when: <br>Any device registered or joined without MFA<br>[Azure Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Hunting%20Queries/SigninLogs/SuspiciousSignintoPrivilegedAccount.yaml) |
-| Changes to the Device Registration MFA toggle in Azure AD| High| Audit log| Activity: Set device registration policies| Look for: <br>The toggle being set to off. There isn't audit log entry. Schedule periodic checks. |
-| Changes to Conditional Access policies requiring domain joined or compliant device.| High| Audit log| Changes to CA policies<br>| Alert when: <br><li> Change to any policy requiring domain joined or compliant.<li>Changes to trusted locations.<li> Accounts or devices added to MFA policy exceptions. |
-
+| Device registration or join completed without MFA| Medium| Sign-in logs| Activity: successful authentication to Device Registration Service. <br>And<br>No MFA required| Alert when: Any device registered or joined without MFA<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Hunting%20Queries/SigninLogs/SuspiciousSignintoPrivilegedAccount.yaml)<br>[Sigma rules template](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) |
+| Changes to the Device Registration MFA toggle in Azure AD| High| Audit log| Activity: Set device registration policies| Look for: The toggle being set to off. There isn't audit log entry. Schedule periodic checks.<br>[Sigma rules template](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) |
+| Changes to Conditional Access policies requiring domain joined or compliant device.| High| Audit log| Changes to CA policies<br>| Alert when: Change to any policy requiring domain joined or compliant, changes to trusted locations, or accounts or devices added to MFA policy exceptions. |
You can create an alert that notifies appropriate administrators when a device is registered or joined without MFA by using Microsoft Sentinel.
-```
+~~~
Sign-in logs
-| where ResourceDisplayName == "Device Registration Service"
+| where ResourceDisplayName == "Device Registration Service"
-| where conditionalAccessStatus == "success"
+| where conditionalAccessStatus == "success"
-| where AuthenticationRequirement <> "multiFactorAuthentication"
-```
+| where AuthenticationRequirement <> "multiFactorAuthentication"
+~~~
You can also use [Microsoft Intune to set and monitor device compliance policies](/mem/intune/protect/device-compliance-get-started).
-## Non-compliant device sign in
+## Non-compliant device sign-in
-It might not be possible to block access to all cloud and software-as-a-service applications with Conditional Access policies requiring compliant devices.
+It might not be possible to block access to all cloud and software-as-a-service applications with Conditional Access policies requiring compliant devices.
-[Mobile device management](/windows/client-management/mdm/) (MDM) helps you keep Windows 10 devices compliant. With Windows version 1809, we released a [security baseline](/windows/client-management/mdm/) of policies. Azure Active Directory can [integrate with MDM](/windows/client-management/mdm/azure-active-directory-integration-with-mdm) to enforce device compliance with corporate policies, and can report a device's compliance status.
+[Mobile device management](/windows/client-management/mdm/) (MDM) helps you keep Windows 10 devices compliant. With Windows version 1809, we released a [security baseline](/windows/client-management/mdm/) of policies. Azure Active Directory can [integrate with MDM](/windows/client-management/mdm/azure-active-directory-integration-with-mdm) to enforce device compliance with corporate policies, and can report a device's compliance status.
| What to monitor| Risk Level| Where| Filter/sub-filter| Notes |
| - |- |- |- |- |
-| Sign-ins by non-compliant devices| High| Sign-in logs| DeviceDetail.isCompliant == false| If requiring sign-in from compliant devices, alert when:<br><li> any sign in by non-compliant devices.<li> any access without MFA or a trusted location.<p>If working toward requiring devices, monitor for suspicious sign-ins.<br>[Azure Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Hunting%20Queries/SigninLogs/SuspiciousSignintoPrivilegedAccount.yaml) |
-| Sign-ins by unknown devices| Low| Sign-in logs| <li>DeviceDetail is empty<li>Single factor authentication<li>From a non-trusted location| Look for: <br><li>any access from out of compliance devices.<li>any access without MFA or trusted location |
-
+| Sign-ins by non-compliant devices| High| Sign-in logs| DeviceDetail.isCompliant == false| If requiring sign-in from compliant devices, alert when: any sign in by non-compliant devices, or any access without MFA or a trusted location.<p>If working toward requiring devices, monitor for suspicious sign-ins.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Hunting%20Queries/SigninLogs/SuccessfulSigninFromNon-CompliantDevice.yaml)<br><br>[Sigma rules template](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) |
+| Sign-ins by unknown devices| Low| Sign-in logs| DeviceDetail is empty, single factor authentication, or from a non-trusted location| Look for: any access from out of compliance devices, any access without MFA or trusted location<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/SigninLogs/AnomolousSingleFactorSignin.yaml)<br><br>[Sigma rules template](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) |
### Use LogAnalytics to query

**Sign-ins by non-compliant devices**

```
SigninLogs
| where conditionalAccessStatus == "success"
```

**Sign-ins by unknown devices**

```
SigninLogs
| where NetworkLocationDetails == "[]"
```
+## Stale devices
-Stale devices include devices that haven't signed in for a specified time period. Devices can become stale when a user gets a new device or loses a device, or when an Azure AD joined device is wiped or reprovisioned. Devices may also remain registered or joined when the user is no longer associated with the tenant. Stale devices should be removed so that their primary refresh tokens (PRTs) cannot be used.
+Stale devices include devices that haven't signed in for a specified time period. Devices can become stale when a user gets a new device or loses a device, or when an Azure AD joined device is wiped or reprovisioned. Devices might also remain registered or joined when the user is no longer associated with the tenant. Stale devices should be removed so the primary refresh tokens (PRTs) cannot be used.
| What to monitor| Risk Level| Where| Filter/sub-filter| Notes |
| - |- |- |- |- |
Attackers who have compromised a user's device may retrieve the [BitLocker](/w
| What to monitor| Risk Level| Where| Filter/sub-filter| Notes |
| - |- |- |- |- |
-| Key retrieval| Medium| Audit logs| OperationName == "Read BitLocker key"| Look for <br><li>key retrieval`<li> other anomalous behavior by users retrieving keys. |
-
+| Key retrieval| Medium| Audit logs| OperationName == "Read BitLocker key"| Look for: key retrieval, other anomalous behavior by users retrieving keys.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Hunting%20Queries/AuditLogs/BitLockerKeyRetrieval.yaml)<br><br>[Sigma rules template](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) |
In Log Analytics, create a query such as:
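Based on the filter in the table above, a minimal sketch might be:

```kusto
AuditLogs
// Filter from the preceding table; BitLocker key reads are recorded in the audit log
| where OperationName == "Read BitLocker key"
| project TimeGenerated, InitiatedBy, TargetResources
```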
Global administrators and cloud Device Administrators automatically get local ad
| What to monitor| Risk Level| Where| Filter/sub-filter| Notes |
| - |- |- |- |- |
-| Users added to global or device admin roles| High| Audit logs| Activity type = Add member to role.| Look for:<li> new users added to these Azure AD roles.<li> Subsequent anomalous behavior by machines or users. |
-
+| Users added to global or device admin roles| High| Audit logs| Activity type = Add member to role.| Look for: new users added to these Azure AD roles, subsequent anomalous behavior by machines or users.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/4ad195f4fe6fdbc66fb8469120381e8277ebed81/Detections/AuditLogs/UserAddedtoAdminRole.yaml)<br><br>[Sigma rules template](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) |
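A sketch in KQL for the row above; the role display names in the filter are assumptions, so adjust them to the exact names used in your tenant:

```kusto
AuditLogs
| where OperationName == "Add member to role"
// Assumed role display names; verify against your tenant's roles
| where TargetResources has "Global Administrator"
    or TargetResources has "Azure AD Joined Device Local Administrator"
| project TimeGenerated, InitiatedBy, TargetResources
```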
## Non-Azure AD sign-ins to virtual machines
Azure AD sign-in for LINUX allows organizations to sign in to their Azure LINUX
| What to monitor| Risk Level| Where| Filter/sub-filter| Notes |
| - |- |- |- |- |
-| Non-Azure AD account signing in, especially over SSH| High| Local authentication logs| Ubuntu: <br>ΓÇÄmonitor /var/log/auth.log for SSH use<br>RedHat: <br>monitor /var/log/sssd/ for SSH use| Look for:<li> entries [where non-Azure AD accounts are successfully connecting to VMs.](../devices/howto-vm-sign-in-azure-ad-linux.md) <li>See following example. |
-
+| Non-Azure AD account signing in, especially over SSH| High| Local authentication logs| Ubuntu: <br>monitor /var/log/auth.log for SSH use<br>RedHat: <br>monitor /var/log/sssd/ for SSH use| Look for: entries [where non-Azure AD accounts are successfully connecting to VMs](../devices/howto-vm-sign-in-azure-ad-linux.md). See following example. |
Ubuntu example:
May 9 23:49:43 ubuntu1804 sshd[3909]: pam_unix(sshd:session): session opened for user localusertest01 by (uid=0).
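If the VM's authentication logs are forwarded to a Log Analytics workspace, a sketch over the standard Syslog table (the match string comes from the example entry above; treating names without a UPN-style `@` suffix as local accounts is an assumption):

```kusto
Syslog
| where Facility in ("auth", "authpriv")
| where SyslogMessage has "session opened for user"
// Assumption: Azure AD accounts sign in with a UPN-style name; local accounts don't
| where SyslogMessage !has "@"
| project TimeGenerated, Computer, SyslogMessage
```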
-You can set policy for LINUX VM sign-ins, and detect and flag Linux VMs that have non-approved local accounts added. To learn more, see using [Azure Policy to ensure standards and assess compliance](../devices/howto-vm-sign-in-azure-ad-linux.md).
+You can set policy for LINUX VM sign-ins, and detect and flag Linux VMs that have non-approved local accounts added. To learn more, see using [Azure Policy to ensure standards and assess compliance](../devices/howto-vm-sign-in-azure-ad-linux.md).
### Azure AD sign-ins for Windows Server
Azure AD sign-in for Windows allows your organization to sign in to your Azure W
| What to monitor| Risk Level| Where| Filter/sub-filter| Notes |
| - |- |- |- |- |
-| Non-Azure AD account signing in, especially over RDP| High| Windows Server event logs| Interactive Login to Windows VM| Event 528, logon type 10 (RemoteInteractive).<br>Shows when a user signs in over Terminal Services or Remote Desktop. |
-
+| Non-Azure AD account sign-in, especially over RDP| High| Windows Server event logs| Interactive Login to Windows VM| Event 528, logon type 10 (RemoteInteractive).<br>Shows when a user signs in over Terminal Services or Remote Desktop. |
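If Windows security events are collected into a Log Analytics workspace, a sketch over the SecurityEvent table; note that current Windows versions record the legacy event 528 as event 4624, and logon type 10 is RemoteInteractive:

```kusto
SecurityEvent
// 4624 is the current equivalent of legacy event 528; logon type 10 = RemoteInteractive (RDP)
| where EventID == 4624 and LogonType == 10
| project TimeGenerated, Computer, Account, IpAddress
```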
-## Next Steps
-
-See these additional security operations guide articles:
+## Next steps
[Azure AD security operations overview](security-operations-introduction.md)
[Security operations for user accounts](security-operations-user-accounts.md)
+[Security operations for consumer accounts](security-operations-consumer-accounts.md)
+[Security operations for privileged accounts](security-operations-privileged-accounts.md)
+[Security operations for Privileged Identity Management](security-operations-privileged-identity-management.md)
+[Security operations for applications](security-operations-applications.md)
-[Security operations for devices](security-operations-devices.md)
-
-
[Security operations for infrastructure](security-operations-infrastructure.md)
active-directory Security Operations Infrastructure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/security-operations-infrastructure.md
Previously updated : 08/19/2022 Last updated : 09/06/2022
Infrastructure has many components where vulnerabilities can occur if not proper
* Hybrid Authentication components, including Federation Servers
-* Policies
+* Policies
* Subscriptions
-Monitoring and alerting the components of your authentication infrastructure is critical. Any compromise can lead to a full compromise of the whole environment. Many enterprises that use Azure AD operate in a hybrid authentication environment. This means both cloud and on-premises components should be included in your monitoring and alerting strategy. Having a hybrid authentication environment also introduces another attack vector to your environment.
+Monitoring and alerting the components of your authentication infrastructure is critical. Any compromise can lead to a full compromise of the whole environment. Many enterprises that use Azure AD operate in a hybrid authentication environment. Cloud and on-premises components should be included in your monitoring and alerting strategy. Having a hybrid authentication environment also introduces another attack vector to your environment.
-We recommend all the components be considered Control Plane / Tier 0 assets, as well as the accounts used to manage them. Refer to [Securing privileged assets](/security/compass/overview) (SPA) for guidance on designing and implementing your environment. This guidance includes recommendations for each of the hybrid authentication components that could potentially be used for an Azure AD tenant.
+We recommend that all these components, and the accounts used to manage them, be considered Control Plane / Tier 0 assets. Refer to [Securing privileged assets](/security/compass/overview) (SPA) for guidance on designing and implementing your environment. This guidance includes recommendations for each of the hybrid authentication components that could potentially be used for an Azure AD tenant.
A first step in being able to detect unexpected events and potential attacks is to establish a baseline. For all on-premises components listed in this article, see [Privileged access deployment](/security/compass/privileged-access-deployment), which is part of the Securing privileged assets (SPA) guide.

## Where to look
-The log files you use for investigation and monitoring are:
+The log files you use for investigation and monitoring are:
* [Azure AD Audit logs](../reports-monitoring/concept-audit-logs.md)
* [Sign-in logs](../reports-monitoring/concept-all-sign-ins.md)
-* [Microsoft 365 Audit logs](/microsoft-365/compliance/auditing-solutions-overview)
+* [Microsoft 365 Audit logs](/microsoft-365/compliance/auditing-solutions-overview)
* [Azure Key Vault logs](../../key-vault/general/logging.md?tabs=Vault)
-From the Azure portal you can view the Azure AD Audit logs and download as comma separated value (CSV) or JavaScript Object Notation (JSON) files. The Azure portal has several ways to integrate Azure AD logs with other tools that allow for greater automation of monitoring and alerting:
+From the Azure portal, you can view the Azure AD Audit logs and download as comma separated value (CSV) or JavaScript Object Notation (JSON) files. The Azure portal has several ways to integrate Azure AD logs with other tools that allow for greater automation of monitoring and alerting:
-* [Microsoft Sentinel](../../sentinel/overview.md) ΓÇô enables intelligent security analytics at the enterprise level by providing security information and event management (SIEM) capabilities.
+* **[Microsoft Sentinel](../../sentinel/overview.md)** ΓÇô Enables intelligent security analytics at the enterprise level by providing security information and event management (SIEM) capabilities.
-* [Azure Monitor](../../azure-monitor/overview.md) ΓÇô enables automated monitoring and alerting of various conditions. Can create or use workbooks to combine data from different sources.
+* **[Sigma rule templates](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure)** - Sigma is an evolving open standard for writing rules and templates that automated management tools can use to parse log files. Where Sigma templates exist for our recommended search criteria, we've added a link to the Sigma repo. The Sigma templates aren't written, tested, and managed by Microsoft. Rather, the repo and templates are created and collected by the worldwide IT security community.
-* [Azure Event Hubs](../../event-hubs/event-hubs-about.md) integrated with a SIEM- [Azure AD logs can be integrated to other SIEMs](../reports-monitoring/tutorial-azure-monitor-stream-logs-to-event-hub.md) such as Splunk, ArcSight, QRadar and Sumo Logic via the Azure Event Hub integration.
+* **[Azure Monitor](../../azure-monitor/overview.md)** ΓÇô Enables automated monitoring and alerting of various conditions. Can create or use workbooks to combine data from different sources.
-* [Microsoft Defender for Cloud Apps](/cloud-app-security/what-is-cloud-app-security) ΓÇô enables you to discover and manage apps, govern across apps and resources, and check your cloud appsΓÇÖ compliance.
+* **[Azure Event Hubs](../../event-hubs/event-hubs-about.md)** integrated with a SIEM - [Azure AD logs can be integrated to other SIEMs](../reports-monitoring/tutorial-azure-monitor-stream-logs-to-event-hub.md) such as Splunk, ArcSight, QRadar and Sumo Logic via the Azure Event Hubs integration.
+
+* **[Microsoft Defender for Cloud Apps](/cloud-app-security/what-is-cloud-app-security)** ΓÇô Enables you to discover and manage apps, govern across apps and resources, and check your cloud appsΓÇÖ compliance.
* **[Securing workload identities with Identity Protection Preview](..//identity-protection/concept-workload-identity-risk.md)** - Used to detect risk on workload identities across sign-in behavior and offline indicators of compromise.
-The remainder of this article describes what you should monitor and alert on and is organized by the type of threat. Where there are specific pre-built solutions, you will find links to them following the table. Otherwise, you can build alerts using the preceding tools.
+The remainder of this article describes what to monitor and alert on. It is organized by the type of threat. Where there are pre-built solutions, you'll find links to them, after the table. Otherwise, you can build alerts using the preceding tools.
## Authentication infrastructure
In hybrid environments that contain both on-premises and cloud-based resources a
* [Securing privileged access overview](/security/compass/overview) ΓÇô This article provides an overview of current techniques using Zero Trust techniques to create and maintain secure privileged access.
-* [Microsoft Defender for Identity monitored domain activities](/defender-for-identity/monitored-activities) - This article provides a comprehensive list of activities to monitor and set alerts for.
+* [Microsoft Defender for Identity monitored domain activities](/defender-for-identity/monitored-activities) - This article provides a comprehensive list of activities to monitor and set alerts for.
* [Microsoft Defender for Identity security alert tutorial](/defender-for-identity/understanding-security-alerts) - This article provides guidance on creating and implementing a security alert strategy.

The following are links to specific articles that focus on monitoring and alerting your authentication infrastructure:
-* [Understand and use Lateral Movement Paths with Microsoft Defender for Identity](/defender-for-identity/use-case-lateral-movement-path) - This article describes detection techniques you can use to help identify when non-sensitive accounts are used to gain access to sensitive accounts throughout your network.
+* [Understand and use Lateral Movement Paths with Microsoft Defender for Identity](/defender-for-identity/use-case-lateral-movement-path) - Detection techniques to help identify when non-sensitive accounts are used to gain access to sensitive network accounts.
-* [Working with security alerts in Microsoft Defender for Identity](/defender-for-identity/working-with-suspicious-activities) - This article describes how to review and manage alerts once they are logged.
+* [Working with security alerts in Microsoft Defender for Identity](/defender-for-identity/working-with-suspicious-activities) - This article describes how to review and manage alerts after they're logged.
The following are specific things to look for:

| What to monitor| Risk level| Where| Notes |
| - | - | - | - |
-| Extranet lockout trends| High| Azure AD Connect Health| Use information at [Monitor AD FS using Azure AD Connect Health](../hybrid/how-to-connect-health-adfs.md) for tools and techniques to help detect extranet lockout trends. |
+| Extranet lockout trends| High| Azure AD Connect Health| See [Monitor AD FS using Azure AD Connect Health](../hybrid/how-to-connect-health-adfs.md) for tools and techniques to help detect extranet lockout trends. |
| Failed sign-ins|High | Connect Health Portal| Export or download the Risky IP report and follow the guidance at [Risky IP report (public preview)](../hybrid/how-to-connect-health-adfs-risky-ip.md) for next steps. |
-| In privacy compliant| Low| Azure AD Connect Health| Configure Azure AD Connect Health to be disable data collections and monitoring using the [User privacy and Azure AD Connect Health](../hybrid/reference-connect-health-user-privacy.md) article. |
+| In privacy compliant| Low| Azure AD Connect Health| Configure Azure AD Connect Health to disable data collection and monitoring using the [User privacy and Azure AD Connect Health](../hybrid/reference-connect-health-user-privacy.md) article. |
| Potential brute force attack on LDAP| Medium| Microsoft Defender for Identity| Use sensor to help detect potential brute force attacks against LDAP. |
| Account enumeration reconnaissance| Medium| Microsoft Defender for Identity| Use sensor to help detect account enumeration reconnaissance. |
| General correlation between Azure AD and Azure AD FS|Medium | Microsoft Defender for Identity| Use capabilities to correlate activities between your Azure AD and Azure AD FS environments. |
+### Pass-through authentication monitoring
-
-
-### Pass-through authentication monitoring
-
-Azure Active Directory (Azure AD) Pass-through Authentication signs users in by validating their passwords directly against on-premises Active Directory.
+Azure Active Directory (Azure AD) Pass-through Authentication signs users in by validating their passwords directly against on-premises Active Directory.
The following are specific things to look for:

| What to monitor| Risk level| Where| Filter/sub-filter| Notes |
| - | - | - | - | - |
-| Azure AD pass-through authentication errors|Medium | Application and ΓÇÄService Logs\Microsoft\AΓÇÄzureAdConnecΓÇÄt\AuthenticatioΓÇÄnAgent\Admin| AADSTS80001 ΓÇô Unable to connect to Active Directory| Ensure that agent servers are members of the same AD forest as the users whose passwords need to be validated and they can connect to Active Directory. |
-| Azure AD pass-through authentication errors| Medium| Application and ΓÇÄService Logs\Microsoft\AΓÇÄzureAdConnecΓÇÄt\AuthenticatioΓÇÄnAgent\Admin| AADSTS8002 - A timeout occurred connecting to Active Directory| Check to ensure that Active Directory is available and is responding to requests from the agents. |
-| Azure AD pass-through authentication errors|Medium | Application and ΓÇÄService Logs\Microsoft\AΓÇÄzureAdConnecΓÇÄt\AuthenticatioΓÇÄnAgent\Admin| AADSTS80004 - The username passed to the agent was not valid| Ensure the user is attempting to sign in with the right username. |
-| Azure AD pass-through authentication errors|Medium | Application and ΓÇÄService Logs\Microsoft\AΓÇÄzureAdConnecΓÇÄt\AuthenticatioΓÇÄnAgent\Admin| AADSTS80005 - Validation encountered unpredictable WebException| A transient error. Retry the request. If it continues to fail, contact Microsoft support. |
-| Azure AD pass-through authentication errors| Medium| Application and ΓÇÄService Logs\Microsoft\AΓÇÄzureAdConnecΓÇÄt\AuthenticatioΓÇÄnAgent\Admin| AADSTS80007 - An error occurred communicating with Active Directory| Check the agent logs for more information and verify that Active Directory is operating as expected. |
-| Azure AD pass-through authentication errors|High | Win32 LogonUserA function API| Logon events 4624(s): An account was successfully logged on<br>- correlate with ΓÇô<br>4625(F): An account failed to log on| Use with the suspected usernames on the domain controller that is authenticating requests. Guidance at [LogonUserA function (winbase.h)](/windows/win32/api/winbase/nf-winbase-logonusera) |
-| Azure AD pass-through authentication errors| Medium| PowerShell script of domain controller| see query following table. | Use the information at [Azure AD Connect: Troubleshoot Pass-through Authentication](../hybrid/tshoot-connect-pass-through-authentication.md)for additional guidance. |
+| Azure AD pass-through authentication errors|Medium | Application and Service Logs\Microsoft\AzureAdConnect\AuthenticationAgent\Admin| AADSTS80001 – Unable to connect to Active Directory| Ensure that agent servers are members of the same AD forest as the users whose passwords need to be validated and they can connect to Active Directory. |
+| Azure AD pass-through authentication errors| Medium| Application and Service Logs\Microsoft\AzureAdConnect\AuthenticationAgent\Admin| AADSTS8002 - A timeout occurred connecting to Active Directory| Check to ensure that Active Directory is available and is responding to requests from the agents. |
+| Azure AD pass-through authentication errors|Medium | Application and Service Logs\Microsoft\AzureAdConnect\AuthenticationAgent\Admin| AADSTS80004 - The username passed to the agent was not valid| Ensure the user is attempting to sign in with the right username. |
+| Azure AD pass-through authentication errors|Medium | Application and Service Logs\Microsoft\AzureAdConnect\AuthenticationAgent\Admin| AADSTS80005 - Validation encountered unpredictable WebException| A transient error. Retry the request. If it continues to fail, contact Microsoft support. |
+| Azure AD pass-through authentication errors| Medium| Application and Service Logs\Microsoft\AzureAdConnect\AuthenticationAgent\Admin| AADSTS80007 - An error occurred communicating with Active Directory| Check the agent logs for more information and verify that Active Directory is operating as expected. |
+| Azure AD pass-through authentication errors|High | Win32 LogonUserA function API| Logon events 4624(S): An account was successfully logged on<br>correlate with<br>4625(F): An account failed to log on| Use with the suspected usernames on the domain controller that is authenticating requests. Guidance at [LogonUserA function (winbase.h)](/windows/win32/api/winbase/nf-winbase-logonusera) |
+| Azure AD pass-through authentication errors| Medium| PowerShell script of domain controller| See the query after the table. | Use the information at [Azure AD Connect: Troubleshoot Pass-through Authentication](../hybrid/tshoot-connect-pass-through-authentication.md) for guidance. |
```Kusto
The following are specific things to look for:
</QueryList>
```
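If the agent's admin event log is collected into a workspace, an alternative sketch over the Event table (the EventLog name match and the AADSTS prefix filter are assumptions based on the log paths and error codes in the table above):

```kusto
Event
// Assumed log name, based on the path in the preceding table
| where EventLog has "AzureAdConnect"
| where RenderedDescription has "AADSTS80"
| project TimeGenerated, Computer, EventLog, RenderedDescription
```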
+## Monitoring for creation of new Azure AD tenants
+
+Organizations might need to monitor for and alert on the creation of new Azure AD tenants when the action is initiated by identities from their organizational tenant. Monitoring for this scenario provides visibility on how many tenants are being created and could be accessed by end users.
+
+| What to monitor| Risk level| Where| Filter/sub-filter| Notes |
+| - | - | - | - | - |
+| Creation of a new Azure AD tenant, using an identity from your tenant. | Medium | Azure AD Audit logs | Category: Directory Management<br><br>Activity: Create Company | Target(s) shows the created TenantID |
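A sketch matching the row above; the Category value may appear without a space in exported logs, so verify the exact strings in your environment:

```kusto
AuditLogs
// "Create Company" is the activity that records creation of a new tenant
| where Category == "DirectoryManagement"
| where OperationName == "Create Company"
| project TimeGenerated, InitiatedBy, TargetResources
```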
+ ### AppProxy Connector
-Azure AD and Azure AD Application Proxy give remote users a single sign-on (SSO) experience. Users securely connect to on-premises apps without a virtual private network (VPN) or dual-homed servers and firewall rules. If your Azure AD Application Proxy connector server is compromised, attackers could alter the SSO experience or change access to published applications.
+Azure AD and Azure AD Application Proxy give remote users a single sign-on (SSO) experience. Users securely connect to on-premises apps without a virtual private network (VPN) or dual-homed servers and firewall rules. If your Azure AD Application Proxy connector server is compromised, attackers could alter the SSO experience or change access to published applications.
-To configuring monitoring for Application Proxy, see [Troubleshoot Application Proxy problems and error messages](../app-proxy/application-proxy-troubleshoot.md). The data file that logs information can be found in Applications and Services Logs\Microsoft\AadApplicationProxy\Connector\Admin. For a complete reference guide to audit activity, see [Azure AD audit activity reference](../reports-monitoring/reference-audit-activities.md). Specific things to monitor:
+To configure monitoring for Application Proxy, see [Troubleshoot Application Proxy problems and error messages](../app-proxy/application-proxy-troubleshoot.md). The data file that logs information can be found in Applications and Services Logs\Microsoft\AadApplicationProxy\Connector\Admin. For a complete reference guide to audit activity, see [Azure AD audit activity reference](../reports-monitoring/reference-audit-activities.md). Specific things to monitor:
| What to monitor| Risk level| Where| Filter/sub-filter| Notes |
| - | - | - | - | - |
| Kerberos errors| Medium | Various tools| Medium | Kerberos authentication error guidance under Kerberos errors on [Troubleshoot Application Proxy problems and error messages](../app-proxy/application-proxy-troubleshoot.md). |
| DC security issues| High| DC Security Audit logs| Event ID 4742(S): A computer account was changed<br>-and-<br>Flag – Trusted for Delegation<br>-or-<br>Flag – Trusted to Authenticate for Delegation| Investigate any flag change. |
-| Pass-the-ticket like attacks| High| | | Follow guidance in:<li>[Security principal reconnaissance (LDAP) (external ID 2038)](/defender-for-identity/reconnaissance-alerts)<li>[Tutorial: Compromised credential alerts](/defender-for-identity/compromised-credentials-alerts)<li> [Understand and use Lateral Movement Paths with Microsoft Defender for Identity](/defender-for-identity/use-case-lateral-movement-path)<li> [Understanding entity profiles](/defender-for-identity/entity-profiles) |
-
+| Pass-the-ticket like attacks| High| | | Follow guidance in:<br>[Security principal reconnaissance (LDAP) (external ID 2038)](/defender-for-identity/reconnaissance-alerts)<br>[Tutorial: Compromised credential alerts](/defender-for-identity/compromised-credentials-alerts)<br>[Understand and use Lateral Movement Paths with Microsoft Defender for Identity](/defender-for-identity/use-case-lateral-movement-path)<br>[Understanding entity profiles](/defender-for-identity/entity-profiles) |
### Legacy authentication settings
-For multifactor authentication (MFA) to be effective, you also need to block legacy authentication. You then need to monitor your environment and alert on any use of legacy authentication. This is because legacy authentication protocols like POP, SMTP, IMAP, and MAPI canΓÇÖt enforce MFA. This makes these protocols preferred entry points for attackers of your organization. For more information on tools that you can use to block legacy authentication, see [New tools to block legacy authentication in your organization](https://techcommunity.microsoft.com/t5/azure-active-directory-identity/new-tools-to-block-legacy-authentication-in-your-organization/ba-p/1225302).
+For multifactor authentication (MFA) to be effective, you also need to block legacy authentication. You then need to monitor your environment and alert on any use of legacy authentication. Legacy authentication protocols like POP, SMTP, IMAP, and MAPI canΓÇÖt enforce MFA. This makes these protocols the preferred entry points for attackers. For more information on tools that you can use to block legacy authentication, see [New tools to block legacy authentication in your organization](https://techcommunity.microsoft.com/t5/azure-active-directory-identity/new-tools-to-block-legacy-authentication-in-your-organization/ba-p/1225302).
Legacy authentication is captured in the Azure AD Sign-ins log as part of the detail of the event. You can use the Azure Monitor workbook to help with identifying legacy authentication usage. For more information, see [Sign-ins using legacy authentication](../reports-monitoring/howto-use-azure-monitor-workbooks.md), which is part of [How to use Azure Monitor Workbooks for Azure Active Directory reports](../reports-monitoring/howto-use-azure-monitor-workbooks.md). You can also use the Insecure protocols workbook for Microsoft Sentinel. For more information, see [Microsoft Sentinel Insecure Protocols Workbook Implementation Guide](https://techcommunity.microsoft.com/t5/azure-sentinel/azure-sentinel-insecure-protocols-workbook-implementation-guide/ba-p/1197564).

Specific activities to monitor include:

| What to monitor| Risk level| Where| Filter/sub-filter| Notes |
| - | - | - | - | - |
-| Legacy authentications|High | Azure AD Sign-ins log| ClientApp : POP<br>ClientApp : IMAP<br>ClientApp : MAPI<br>ClientApp: SMTP<br>ClientApp : ActiveSync go to EXO<br>Other Clients = SharePoint and EWS| In federated domain environments, failed authentications are not recorded so will not appear in the log. |
-
+| Legacy authentications|High | Azure AD Sign-ins log| ClientApp : POP<br>ClientApp : IMAP<br>ClientApp : MAPI<br>ClientApp: SMTP<br>ClientApp : ActiveSync go to EXO<br>Other Clients = SharePoint and EWS| In federated domain environments, failed authentications aren't recorded and don't appear in the log. |
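A sketch over the sign-in logs using the client apps from the table; the exact ClientAppUsed strings are assumptions and should be verified against your logs:

```kusto
SigninLogs
// Assumed legacy client names; verify against the ClientAppUsed values in your tenant
| where ClientAppUsed in ("POP3", "IMAP4", "SMTP", "MAPI Over HTTP", "Exchange ActiveSync", "Other clients")
| project TimeGenerated, UserPrincipalName, ClientAppUsed, AppDisplayName
```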
## Azure AD Connect
Azure AD Connect provides a centralized location that enables account and attrib
* [Password hash synchronization](../hybrid/whatis-phs.md) - A sign-in method that synchronizes a hash of a user's on-premises AD password with Azure AD.
-* [Synchronization](../hybrid/how-to-connect-sync-whatis.md) - Responsible for creating users, groups, and other objects. As well as, making sure identity information for your on-premises users and groups is matching the cloud. This synchronization also includes password hashes.
+* [Synchronization](../hybrid/how-to-connect-sync-whatis.md) - Responsible for creating users, groups, and other objects, and for making sure identity information for your on-premises users and groups matches the cloud. This synchronization also includes password hashes.
* [Health Monitoring](../hybrid/whatis-azure-ad-connect.md) - Azure AD Connect Health can provide robust monitoring and provide a central location in the Azure portal to view this activity.
-Synchronizing identity between your on-premises environment and you cloud environment introduces a new attack surface for your on-premises and cloud-based environment. We recommend:
+Synchronizing identity between your on-premises environment and your cloud environment introduces a new attack surface for your on-premises and cloud-based environment. We recommend:
-* You treat your Azure AD Connect primary and staging servers as Tier 0 Systems in your control plane.
+* You treat your Azure AD Connect primary and staging servers as Tier 0 Systems in your control plane.
* You follow a standard set of policies that govern each type of account and its usage in your environment.
-* You install Azure AD Connect and Connect Health. These primarily provide operational data for the environment.
+* You install Azure AD Connect and Connect Health. These primarily provide operational data for the environment.
Logging of Azure AD Connect operations occurs in different ways:
Azure AD uses Microsoft SQL Server Data Engine or SQL to store Azure AD Connect
| mms_server_configuration| SQL service audit records| See [SQL Server Audit Records](/sql/relational-databases/security/auditing/sql-server-audit-records) |
| mms_synchronization_rule| SQL service audit records| See [SQL Server Audit Records](/sql/relational-databases/security/auditing/sql-server-audit-records) |

For information on what and how to monitor configuration information, refer to:

* For SQL server, see [SQL Server Audit Records](/sql/relational-databases/security/auditing/sql-server-audit-records).
-* For Microsoft Sentinel, see [Connect to Windows servers to collect security events](/sql/relational-databases/security/auditing/sql-server-audit-records).
+* For Microsoft Sentinel, see [Connect to Windows servers to collect security events](/sql/relational-databases/security/auditing/sql-server-audit-records).
* For information on configuring and using Azure AD Connect, see [What is Azure AD Connect?](../hybrid/whatis-azure-ad-connect.md)

### Monitoring and troubleshooting synchronization
- One function of Azure AD Connect is to synchronize hash synchronization between a userΓÇÖs on-premises password and Azure AD. If passwords are not synchronizing as expected, the synchronization might affect a subset of users or all users. Use the following to help verify proper operation or troubleshoot issues:
+ One function of Azure AD Connect is to synchronize the hash of a user's on-premises password to Azure AD. If passwords aren't synchronizing as expected, the synchronization might affect a subset of users or all users. Use the following to help verify proper operation or troubleshoot issues:
-* Information for checking and troubleshooting hash synchronization, see [Troubleshoot password hash synchronization with Azure AD Connect sync](../hybrid/tshoot-connect-password-hash-synchronization.md).
+* For information on checking and troubleshooting hash synchronization, see [Troubleshoot password hash synchronization with Azure AD Connect sync](../hybrid/tshoot-connect-password-hash-synchronization.md).
-* Modifications to the connector spaces, see [Troubleshoot Azure AD Connect objects and attributes](/troubleshoot/azure/active-directory/troubleshoot-aad-connect-objects-attributes).
+* For modifications to the connector spaces, see [Troubleshoot Azure AD Connect objects and attributes](/troubleshoot/azure/active-directory/troubleshoot-aad-connect-objects-attributes).
**Important resources on monitoring**
For information on what and how to monitor configuration information refer to:
| - | - |
| Hash synchronization validation| See [Troubleshoot password hash synchronization with Azure AD Connect sync](../hybrid/tshoot-connect-password-hash-synchronization.md) |
| Modifications to the connector spaces| See [Troubleshoot Azure AD Connect objects and attributes](/troubleshoot/azure/active-directory/troubleshoot-aad-connect-objects-attributes) |
-| Modifications to the rules you configured| Specifically, monitor filtering changes, domain and OU changes, attribute changes, and group-based changes |
+| Modifications to rules you configured| Monitor changes to filtering, domains and OUs, attributes, and group-based changes |
| SQL and MSDE changes | Changes to logging parameters and addition of custom functions |
-**Monitor the following**:
+**Monitor the following**:
| What to monitor| Risk level| Where| Filter/sub-filter| Notes |
| - | - | - | - | - |
| Scheduler changes| High | PowerShell| Set-ADSyncScheduler| Look for modifications to the schedule |
| Changes to scheduled tasks| High | Azure AD Audit logs| Activity = 4699(S): A scheduled task was deleted<br>-or-<br>Activity = 4701(S): A scheduled task was disabled<br>-or-<br>Activity = 4702(S): A scheduled task was updated| Monitor all |
--
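As a sketch, the scheduled-task events in the table above could be surfaced in a Log Analytics workspace with a Kusto query like the following. This assumes Windows Security events from your Azure AD Connect servers are collected into the standard `SecurityEvent` table; the server name is a placeholder:

```kusto
// Hypothetical sketch: scheduled-task tampering on Azure AD Connect servers.
// Assumes Windows Security events are collected into the SecurityEvent table.
SecurityEvent
| where Computer in ("AADConnect01")                // placeholder: your Azure AD Connect servers
| where EventID in (4698, 4699, 4700, 4701, 4702)  // task created/deleted/enabled/disabled/updated
| project TimeGenerated, Computer, EventID, Activity, SubjectUserName
| order by TimeGenerated desc
```

Tune the event ID list and server list to match the tasks and tiers you treat as Tier 0.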
-* For more information on logging PowerShell script operations, refer to [Enabling Script Block Logging](/powershell/module/microsoft.powershell.core/about/about_logging_windows), which is part of the PowerShell reference documentation.
+* For more information on logging PowerShell script operations, see [Enabling Script Block Logging](/powershell/module/microsoft.powershell.core/about/about_logging_windows), which is part of the PowerShell reference documentation.
* For more information on configuring PowerShell logging for analysis by Splunk, refer to [Get Data into Splunk User Behavior Analytics](https://docs.splunk.com/Documentation/UBA/5.0.4.1/GetDataIn/AddPowerShell).

### Monitoring seamless single sign-on
-Azure Active Directory (Azure AD) Seamless Single Sign-On (Seamless SSO) automatically signs in users when they are on their corporate desktops that are connected to your corporate network. Seamless SSO provides your users with easy access to your cloud-based applications without needing any additional on-premises components. SSO uses the pass-through authentication and password hash synchronization capabilities provided by Azure AD Connect.
+Azure Active Directory (Azure AD) Seamless Single Sign-On (Seamless SSO) automatically signs in users when they're on corporate desktops connected to your corporate network. Seamless SSO provides your users with easy access to your cloud-based applications without other on-premises components. SSO uses the pass-through authentication and password hash synchronization capabilities provided by Azure AD Connect.
Monitoring single sign-on and Kerberos activity can help you detect general credential theft attack patterns. Monitor using the following information:
Monitoring single sign-on and Kerberos activity can help you detect general cred
</QueryList>
```
+
## Password protection policies
-If you deploy Azure AD Password Protection, monitoring and reporting are essential tasks. The following links provide details to help you understand various monitoring techniques, including where each service logs information and how to report on the use of Azure AD Password Protection.
+If you deploy Azure AD Password Protection, monitoring and reporting are essential tasks. The following links provide details to help you understand various monitoring techniques, including where each service logs information and how to report on the use of Azure AD Password Protection.
-The domain controller (DC) agent and proxy services both log event log messages. All PowerShell cmdlets described below are only available on the proxy server (see the AzureADPasswordProtection PowerShell module). The DC agent software does not install a PowerShell module.
+The domain controller (DC) agent and proxy services both log event log messages. All PowerShell cmdlets described below are only available on the proxy server (see the AzureADPasswordProtection PowerShell module). The DC agent software doesn't install a PowerShell module.
Detailed information for planning and implementing on-premises password protection is available at [Plan and deploy on-premises Azure Active Directory Password Protection](../authentication/howto-password-ban-bad-on-premises-deploy.md). For monitoring details, see [Monitor on-premises Azure AD Password Protection](../authentication/howto-password-ban-bad-on-premises-monitor.md). On each domain controller, the DC agent service software writes the results of each individual password validation operation (and other status) to the following local event log:
The DC agent Admin log is the primary source of information for how the software
* Azure AD Audit Log, Category Application Proxy
-Complete reference for Azure AD audit activities is available at [Azure Active Directory (Azure AD) audit activity reference](../reports-monitoring/reference-audit-activities.md).
+Complete reference for Azure AD audit activities is available at [Azure Active Directory (Azure AD) audit activity reference](../reports-monitoring/reference-audit-activities.md).
## Conditional Access
-In Azure AD, you can protect access to your resources by configuring Conditional Access policies. As an IT administrator, you want to ensure that your Conditional Access policies work as expected to ensure that your resources are properly protected. Monitoring and alerting on changes to the Conditional Access service is critical to ensure that polices defined by your organization for access to data are enforced correctly. Azure AD logs when changes are made to Conditional Access and also provides workbooks to ensure your policies are providing the expected coverage.
+In Azure AD, you can protect access to your resources by configuring Conditional Access policies. As an IT administrator, you want to ensure your Conditional Access policies work as expected so that your resources are properly protected. Monitoring and alerting on changes to the Conditional Access service helps ensure that the policies defined by your organization for access to data are enforced. Azure AD logs changes made to Conditional Access and also provides workbooks to verify that your policies provide the expected coverage.
**Workbook Links**
Monitor changes to Conditional Access policies using the following information:
| What to monitor| Risk level| Where| Filter/sub-filter| Notes | | - | - | - | - | - |
-| New Conditional Access Policy created by non-approved actors|Medium | Azure AD Audit logs|Activity: Add conditional access policy<br><br>Category: Policy<br><br>Initiated by (actor): User Principal Name | Monitor and alert on Conditional Access changes. Is Initiated by (actor): approved to make changes to Conditional Access?|
-|Conditional Access Policy removed by non-approved actors|Medium|Azure AD Audit logs|Activity: Delete conditional access policy<br><br>Category: Policy<br><br>Initiated by (actor): User Principal Name|Monitor and alert on Conditional Access changes. Is Initiated by (actor): approved to make changes to Conditional Access?|
-|Conditional Access Policy updated by non-approved actors|Medium|Azure AD Audit logs|Activity: Update conditional access policy<br><br>Category: Policy<br><br>Initiated by (actor): User Principal Name|Monitor and alert on Conditional Access changes. Is Initiated by (actor): approved to make changes to Conditional Access?<br><br>Review Modified Properties and compare ΓÇ£oldΓÇ¥ vs ΓÇ£newΓÇ¥ value|
-|Removal of a user from a group used to scope critical Conditional Access policies|Medium|Azure AD Audit logs|Activity: Remove member from group<br><br>Category: GroupManagement<br><br>Target: User Principal Name|Montior and Alert for groups used to scope critical Conditional Access Policies.<br><br>"Target" is the user that has been removed.|
-|Addition of a user to a group used to scope critical Conditional Access policies|Low|Azure AD Audit logs|Activity: Add member to group<br><br>Category: GroupManagement<br><br>Target: User Principal Name|Montior and Alert for groups used to scope critical Conditional Access Policies.<br><br>"Target" is the user that has been added.|
+| New Conditional Access Policy created by non-approved actors|Medium | Azure AD Audit logs|Activity: Add conditional access policy<br><br>Category: Policy<br><br>Initiated by (actor): User Principal Name | Monitor and alert on Conditional Access changes. Is Initiated by (actor): approved to make changes to Conditional Access?<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/AuditLogs/ConditionalAccessPolicyModifiedbyNewUser.yaml)<br><br>[Sigma rules template](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure)|
+|Conditional Access Policy removed by non-approved actors|Medium|Azure AD Audit logs|Activity: Delete conditional access policy<br><br>Category: Policy<br><br>Initiated by (actor): User Principal Name|Monitor and alert on Conditional Access changes. Is Initiated by (actor): approved to make changes to Conditional Access?<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/AuditLogs/ConditionalAccessPolicyModifiedbyNewUser.yaml)<br><br>[Sigma rules template](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) |
+|Conditional Access Policy updated by non-approved actors|Medium|Azure AD Audit logs|Activity: Update conditional access policy<br><br>Category: Policy<br><br>Initiated by (actor): User Principal Name|Monitor and alert on Conditional Access changes. Is Initiated by (actor): approved to make changes to Conditional Access?<br><br>Review Modified Properties and compare "old" vs "new" values<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/AuditLogs/ConditionalAccessPolicyModifiedbyNewUser.yaml)<br><br>[Sigma rules template](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) |
+|Removal of a user from a group used to scope critical Conditional Access policies|Medium|Azure AD Audit logs|Activity: Remove member from group<br><br>Category: GroupManagement<br><br>Target: User Principal Name|Monitor and alert for groups used to scope critical Conditional Access policies.<br><br>"Target" is the user that has been removed.<br><br>[Sigma rules template](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure)|
+|Addition of a user to a group used to scope critical Conditional Access policies|Low|Azure AD Audit logs|Activity: Add member to group<br><br>Category: GroupManagement<br><br>Target: User Principal Name|Monitor and alert for groups used to scope critical Conditional Access policies.<br><br>"Target" is the user that has been added.<br><br>[Sigma rules template](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure)|
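The audit-log searches in the table above could be expressed, for example, as a Kusto query against the `AuditLogs` table in a Log Analytics workspace. This is a hedged sketch: the approved-actor list is a placeholder you'd maintain yourself:

```kusto
// Hypothetical sketch: Conditional Access policy changes by non-approved actors.
let approvedActors = dynamic(["admin1@contoso.com", "admin2@contoso.com"]); // placeholder list
AuditLogs
| where OperationName in ("Add conditional access policy",
                          "Update conditional access policy",
                          "Delete conditional access policy")
| extend Actor = tostring(InitiatedBy.user.userPrincipalName)
| where Actor !in (approvedActors)
| project TimeGenerated, OperationName, Actor, TargetResources
```

Pair a query like this with an Azure Monitor alert rule, or use the linked Microsoft Sentinel and Sigma templates as a starting point.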
## Next steps
-
-See these additional security operations guide articles:
- [Azure AD security operations overview](security-operations-introduction.md) [Security operations for user accounts](security-operations-user-accounts.md)
+[Security operations for consumer accounts](security-operations-consumer-accounts.md)
+ [Security operations for privileged accounts](security-operations-privileged-accounts.md) [Security operations for Privileged Identity Management](security-operations-privileged-identity-management.md)
See these additional security operations guide articles:
[Security operations for applications](security-operations-applications.md) [Security operations for devices](security-operations-devices.md)
-
-[Security operations for infrastructure](security-operations-infrastructure.md)
active-directory Security Operations Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/security-operations-introduction.md
Previously updated : 08/24/2022 Last updated : 09/06/2022 - it-pro
As you audit your current security operations or establish security operations f
### Audience
-The Azure AD SecOps Guide is intended for enterprise IT identity and security operations teams and managed service providers that need to counter threats through better identity security configuration and monitoring profiles. This guide is especially relevant for IT administrators and identity architects advising Security Operations Center (SOC) defensive and penetration testing teams to improve and maintain their identity security posture.
+The Azure AD SecOps Guide is intended for enterprise IT identity and security operations teams and managed service providers that need to counter threats through better identity security configuration and monitoring profiles. This guide is especially relevant for IT administrators and identity architects advising Security Operations Center (SOC) defensive and penetration testing teams to improve and maintain their identity security posture.
### Scope
The log files you use for investigation and monitoring are:
From the Azure portal, you can view the Azure AD Audit logs. Download logs as comma separated value (CSV) or JavaScript Object Notation (JSON) files. The Azure portal has several ways to integrate Azure AD logs with other tools that allow for greater automation of monitoring and alerting:
-* **[Microsoft Sentinel](../../sentinel/overview.md)**. Enables intelligent security analytics at the enterprise level by providing security information and event management (SIEM) capabilities.
+* **[Microsoft Sentinel](../../sentinel/overview.md)** - Enables intelligent security analytics at the enterprise level by providing security information and event management (SIEM) capabilities.
-* **[Azure Monitor](../../azure-monitor/overview.md)**. Enables automated monitoring and alerting of various conditions. Can create or use workbooks to combine data from different sources.
+* **[Sigma rule templates](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure)** - Sigma is an evolving open standard for writing rules and templates that automated management tools can use to parse log files. Where Sigma templates exist for our recommended search criteria, we have added a link to the Sigma repo. The Sigma templates are not written, tested, and managed by Microsoft. Rather, the repo and templates are created and collected by the worldwide IT security community.
+
+* **[Azure Monitor](../../azure-monitor/overview.md)** - Enables automated monitoring and alerting of various conditions. Can create or use workbooks to combine data from different sources.
* **[Azure Event Hubs](../../event-hubs/event-hubs-about.md)** integrated with a SIEM. Azure AD logs can be integrated to other SIEMs such as Splunk, ArcSight, QRadar and Sumo Logic via the Azure Event Hubs integration. For more information, see [Stream Azure Active Directory logs to an Azure event hub](../reports-monitoring/tutorial-azure-monitor-stream-logs-to-event-hub.md).
-* **[Microsoft Defender for Cloud Apps](/cloud-app-security/what-is-cloud-app-security)**. Enables you to discover and manage apps, govern across apps and resources, and check the compliance of your cloud apps.
+* **[Microsoft Defender for Cloud Apps](/cloud-app-security/what-is-cloud-app-security)** - Enables you to discover and manage apps, govern across apps and resources, and check the compliance of your cloud apps.
-* **[Securing workload identities with Identity Protection Preview](../identity-protection/concept-workload-identity-risk.md)**. Used to detect risk on workload identities across sign-in behavior and offline indicators of compromise.
+* **[Securing workload identities with Identity Protection Preview](../identity-protection/concept-workload-identity-risk.md)** - Used to detect risk on workload identities across sign-in behavior and offline indicators of compromise.
Much of what you monitor and alert on is the effect of your Conditional Access policies. You can use the Conditional Access insights and reporting workbook to examine the effects of one or more Conditional Access policies on your sign-ins and the results of policies, including device state. This workbook enables you to view an impact summary, and identify the impact over a specific time period. You can also use the workbook to investigate the sign-ins of a specific user. For more information, see [Conditional Access insights and reporting](../conditional-access/howto-conditional-access-insights-reporting.md).
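For instance, a Kusto sketch over the `SigninLogs` table can summarize Conditional Access results per policy, similar to what the insights workbook surfaces. Column names assume the standard Azure AD sign-in log schema exported to Log Analytics:

```kusto
// Hypothetical sketch: Conditional Access policy outcomes across recent sign-ins.
SigninLogs
| where TimeGenerated > ago(7d)
| mv-expand Policy = ConditionalAccessPolicies        // one row per evaluated policy
| summarize SignIns = count()
    by PolicyName = tostring(Policy.displayName), Result = tostring(Policy.result)
| order by SignIns desc
```

A spike in `failure` or an unexpected rise in `notApplied` for a critical policy is a signal to review recent policy changes.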
If you don't plan to use Microsoft Defender for Identity, monitor your domain co
As part of an Azure hybrid environment, the following items should be baselined and included in your monitoring and alerting strategy.
-* **PTA Agent**. The pass-through authentication agent is used to enable pass-through authentication and is installed on-premises. See [Azure AD Pass-through Authentication agent: Version release history](../hybrid/reference-connect-pta-version-history.md) for information on verifying your agent version and next steps.
+* **PTA Agent** - The pass-through authentication agent is used to enable pass-through authentication and is installed on-premises. See [Azure AD Pass-through Authentication agent: Version release history](../hybrid/reference-connect-pta-version-history.md) for information on verifying your agent version and next steps.
-* **AD FS/WAP**. Azure Active Directory Federation Services (Azure AD FS) and Web Application Proxy (WAP) enable secure sharing of digital identity and entitlement rights across your security and enterprise boundaries. For information on security best practices, see [Best practices for securing Active Directory Federation Services](/windows-server/identity/ad-fs/deployment/best-practices-securing-ad-fs).
+* **AD FS/WAP** - Azure Active Directory Federation Services (Azure AD FS) and Web Application Proxy (WAP) enable secure sharing of digital identity and entitlement rights across your security and enterprise boundaries. For information on security best practices, see [Best practices for securing Active Directory Federation Services](/windows-server/identity/ad-fs/deployment/best-practices-securing-ad-fs).
-* **Azure AD Connect Health Agent**. The agent used to provide a communications link for Azure AD Connect Health. For information on installing the agent, see [Azure AD Connect Health agent installation](../hybrid/how-to-connect-health-agent-install.md).
+* **Azure AD Connect Health Agent** - The agent used to provide a communications link for Azure AD Connect Health. For information on installing the agent, see [Azure AD Connect Health agent installation](../hybrid/how-to-connect-health-agent-install.md).
-* **Azure AD Connect Sync Engine**. The on-premises component, also called the sync engine. For information on the feature, see [Azure AD Connect sync service features](../hybrid/how-to-connect-syncservice-features.md).
+* **Azure AD Connect Sync Engine** - The on-premises component, also called the sync engine. For information on the feature, see [Azure AD Connect sync service features](../hybrid/how-to-connect-syncservice-features.md).
-* **Password Protection DC agent**. Azure password protection DC agent is used to help with monitoring and reporting event log messages. For information, see [Enforce on-premises Azure AD Password Protection for Active Directory Domain Services](../authentication/concept-password-ban-bad-on-premises.md).
+* **Password Protection DC agent** - Azure password protection DC agent is used to help with monitoring and reporting event log messages. For information, see [Enforce on-premises Azure AD Password Protection for Active Directory Domain Services](../authentication/concept-password-ban-bad-on-premises.md).
-* **Password Filter DLL**. The password filter DLL of the DC Agent receives user password-validation requests from the operating system. The filter forwards them to the DC Agent service that's running locally on the DC. For information on using the DLL, see [Enforce on-premises Azure AD Password Protection for Active Directory Domain Services](../authentication/concept-password-ban-bad-on-premises.md).
+* **Password Filter DLL** - The password filter DLL of the DC Agent receives user password-validation requests from the operating system. The filter forwards them to the DC Agent service that's running locally on the DC. For information on using the DLL, see [Enforce on-premises Azure AD Password Protection for Active Directory Domain Services](../authentication/concept-password-ban-bad-on-premises.md).
-* **Password writeback Agent**. Password writeback is a feature enabled with [Azure AD Connect](../hybrid/whatis-hybrid-identity.md) that allows password changes in the cloud to be written back to an existing on-premises directory in real time. For more information on this feature, see [How does self-service password reset writeback work in Azure Active Directory](../authentication/concept-sspr-writeback.md).
+* **Password writeback Agent** - Password writeback is a feature enabled with [Azure AD Connect](../hybrid/whatis-hybrid-identity.md) that allows password changes in the cloud to be written back to an existing on-premises directory in real time. For more information on this feature, see [How does self-service password reset writeback work in Azure Active Directory](../authentication/concept-sspr-writeback.md).
-* **Azure AD Application Proxy Connector**. Lightweight agents that sit on-premises and facilitate the outbound connection to the Application Proxy service. For more information, see [Understand Azure ADF Application Proxy connectors](../app-proxy/application-proxy-connectors.md).
+* **Azure AD Application Proxy Connector** - Lightweight agents that sit on-premises and facilitate the outbound connection to the Application Proxy service. For more information, see [Understand Azure AD Application Proxy connectors](../app-proxy/application-proxy-connectors.md).
## Components of cloud-based authentication

As part of an Azure cloud-based environment, the following items should be baselined and included in your monitoring and alerting strategy.
-* **Azure AD Application Proxy**. This cloud service provides secure remote access to on-premises web applications. For more information, see [Remote access to on-premises applications through Azure AD Application Proxy](../app-proxy/application-proxy-connectors.md).
+* **Azure AD Application Proxy** - This cloud service provides secure remote access to on-premises web applications. For more information, see [Remote access to on-premises applications through Azure AD Application Proxy](../app-proxy/application-proxy-connectors.md).
-* **Azure AD Connect**. Services used for an Azure AD Connect solution. For more information, see [What is Azure AD Connect](../hybrid/whatis-azure-ad-connect.md).
+* **Azure AD Connect** - Services used for an Azure AD Connect solution. For more information, see [What is Azure AD Connect](../hybrid/whatis-azure-ad-connect.md).
-* **Azure AD Connect Health**. Service Health provides you with a customizable dashboard that tracks the health of your Azure services in the regions where you use them. For more information, see [Azure AD Connect Health](../hybrid/whatis-azure-ad-connect.md).
+* **Azure AD Connect Health** - Service Health provides you with a customizable dashboard that tracks the health of your Azure services in the regions where you use them. For more information, see [Azure AD Connect Health](../hybrid/whatis-azure-ad-connect.md).
-* **Azure AD multifactor authentication**. Multifactor authentication requires a user to provide more than one form of proof for authentication. This approach can provide a proactive first step to securing your environment. For more information, see [Azure AD multi-factor authentication](../authentication/concept-mfa-howitworks.md).
+* **Azure AD multifactor authentication** - Multifactor authentication requires a user to provide more than one form of proof for authentication. This approach can provide a proactive first step to securing your environment. For more information, see [Azure AD multi-factor authentication](../authentication/concept-mfa-howitworks.md).
-* **Dynamic groups**. Dynamic configuration of security group membership for Azure AD Administrators can set rules to populate groups that are created in Azure AD based on user attributes. For more information, see [Dynamic groups and Azure Active Directory B2B collaboration](../external-identities/use-dynamic-groups.md).
+* **Dynamic groups** - Dynamic configuration of security group membership for Azure AD Administrators can set rules to populate groups that are created in Azure AD based on user attributes. For more information, see [Dynamic groups and Azure Active Directory B2B collaboration](../external-identities/use-dynamic-groups.md).
-* **Conditional Access**. Conditional Access is the tool used by Azure Active Directory to bring signals together, to make decisions, and enforce organizational policies. Conditional Access is at the heart of the new identity driven control plane. For more information, see [What is Conditional Access](../conditional-access/overview.md).
+* **Conditional Access** - Conditional Access is the tool used by Azure Active Directory to bring signals together, to make decisions, and enforce organizational policies. Conditional Access is at the heart of the new identity driven control plane. For more information, see [What is Conditional Access](../conditional-access/overview.md).
-* **Identity Protection**. A tool that enables organizations to automate the detection and remediation of identity-based risks, investigate risks using data in the portal, and export risk detection data to your SIEM. For more information, see [What is Identity Protection](../identity-protection/overview-identity-protection.md).
+* **Identity Protection** - A tool that enables organizations to automate the detection and remediation of identity-based risks, investigate risks using data in the portal, and export risk detection data to your SIEM. For more information, see [What is Identity Protection](../identity-protection/overview-identity-protection.md).
-* **Group-based licensing**. Licenses can be assigned to groups rather than directly to users. Azure AD stores information about license assignment states for users.
+* **Group-based licensing** - Licenses can be assigned to groups rather than directly to users. Azure AD stores information about license assignment states for users.
-* **Provisioning Service**. Provisioning refers to creating user identities and roles in the cloud applications that users need access to. In addition to creating user identities, automatic provisioning includes the maintenance and removal of user identities as status or roles change. For more information, see [How Application Provisioning works in Azure Active Directory](../app-provisioning/how-provisioning-works.md).
+* **Provisioning Service** - Provisioning refers to creating user identities and roles in the cloud applications that users need access to. In addition to creating user identities, automatic provisioning includes the maintenance and removal of user identities as status or roles change. For more information, see [How Application Provisioning works in Azure Active Directory](../app-provisioning/how-provisioning-works.md).
-* **Graph API**. The Microsoft Graph API is a RESTful web API that enables you to access Microsoft Cloud service resources. After you register your app and get authentication tokens for a user or service, you can make requests to the Microsoft Graph API. For more information, see [Overview of Microsoft Graph](/graph/overview).
+* **Graph API** - The Microsoft Graph API is a RESTful web API that enables you to access Microsoft Cloud service resources. After you register your app and get authentication tokens for a user or service, you can make requests to the Microsoft Graph API. For more information, see [Overview of Microsoft Graph](/graph/overview).
-* **Domain Service**. Azure Active Directory Domain Services (AD DS) provides managed domain services such as domain join, group policy. For more information, see [What is Azure Active Directory Domain Services](../../active-directory-domain-services/overview.md).
+* **Domain Service** - Azure Active Directory Domain Services (AD DS) provides managed domain services such as domain join and Group Policy. For more information, see [What is Azure Active Directory Domain Services](../../active-directory-domain-services/overview.md).
-* **Azure Resource Manager**. Azure Resource Manager is the deployment and management service for Azure. It provides a management layer that enables you to create, update, and delete resources in your Azure account. For more information, see [What is Azure Resource Manager](../../azure-resource-manager/management/overview.md).
+* **Azure Resource Manager** - Azure Resource Manager is the deployment and management service for Azure. It provides a management layer that enables you to create, update, and delete resources in your Azure account. For more information, see [What is Azure Resource Manager](../../azure-resource-manager/management/overview.md).
-* **Managed identity**. Managed identities eliminate the need for developers to manage credentials. Managed identities provide an identity for applications to use when connecting to resources that support Azure AD authentication. For more information, see [What are managed identities for Azure resources](../managed-identities-azure-resources/overview.md).
+* **Managed identity** - Managed identities eliminate the need for developers to manage credentials. Managed identities provide an identity for applications to use when connecting to resources that support Azure AD authentication. For more information, see [What are managed identities for Azure resources](../managed-identities-azure-resources/overview.md).
-* **Privileged Identity Management**. PIM is a service in Azure AD that enables you to manage, control, and monitor access to important resources in your organization. For more information, see [What is Azure AD Privileged Identity Management](../privileged-identity-management/pim-configure.md).
+* **Privileged Identity Management** - PIM is a service in Azure AD that enables you to manage, control, and monitor access to important resources in your organization. For more information, see [What is Azure AD Privileged Identity Management](../privileged-identity-management/pim-configure.md).
-* **Access reviews**. Azure AD access reviews enable organizations to efficiently manage group memberships, access to enterprise applications, and role assignments. User's access can be reviewed regularly to make sure only the right people have continued access. For more information, see [What are Azure AD access reviews](../governance/access-reviews-overview.md).
+* **Access reviews** - Azure AD access reviews enable organizations to efficiently manage group memberships, access to enterprise applications, and role assignments. Users' access can be reviewed regularly to make sure only the right people have continued access. For more information, see [What are Azure AD access reviews](../governance/access-reviews-overview.md).
-* **Entitlement management**. Azure AD entitlement management is an [identity governance](../governance/identity-governance-overview.md) feature. Organizations can manage identity and access lifecycle at scale, by automating access request workflows, access assignments, reviews, and expiration. For more information, see [What is Azure AD entitlement management](../governance/entitlement-management-overview.md).
+* **Entitlement management** - Azure AD entitlement management is an [identity governance](../governance/identity-governance-overview.md) feature. Organizations can manage identity and access lifecycle at scale by automating access request workflows, access assignments, reviews, and expiration. For more information, see [What is Azure AD entitlement management](../governance/entitlement-management-overview.md).
-* **Activity logs**. The Activity log is an Azure [platform log](../../azure-monitor/essentials/platform-logs-overview.md) that provides insight into subscription-level events. This log includes such information as when a resource is modified or when a virtual machine is started. For more information, see [Azure Activity log](../../azure-monitor/essentials/activity-log.md).
+* **Activity logs** - The Activity log is an Azure [platform log](../../azure-monitor/essentials/platform-logs-overview.md) that provides insight into subscription-level events. This log includes such information as when a resource is modified or when a virtual machine is started. For more information, see [Azure Activity log](../../azure-monitor/essentials/activity-log.md).
-* **Self-service password reset service**. Azure AD self-service password reset (SSPR) gives users the ability to change or reset their password. The administrator or help desk isn't required. For more information, see [How it works: Azure AD self-service password reset](../authentication/concept-sspr-howitworks.md).
+* **Self-service password reset service** - Azure AD self-service password reset (SSPR) gives users the ability to change or reset their password. The administrator or help desk isn't required. For more information, see [How it works: Azure AD self-service password reset](../authentication/concept-sspr-howitworks.md).
-* **Device services**. Device identity management is the foundation for [device-based Conditional Access](../conditional-access/require-managed-devices.md). With device-based Conditional Access policies, you can ensure that access to resources in your environment is only possible with managed devices. For more information, see [What is a device identity](../devices/overview.md).
+* **Device services** - Device identity management is the foundation for [device-based Conditional Access](../conditional-access/require-managed-devices.md). With device-based Conditional Access policies, you can ensure that access to resources in your environment is only possible with managed devices. For more information, see [What is a device identity](../devices/overview.md).
-* **Self-service group management**. You can enable users to create and manage their own security groups or Microsoft 365 groups in Azure AD. The owner of the group can approve or deny membership requests and can delegate control of group membership. Self-service group management features aren't available for mail-enabled security groups or distribution lists. For more information, see [Set up self-service group management in Azure Active Directory](../enterprise-users/groups-self-service-management.md).
+* **Self-service group management** - You can enable users to create and manage their own security groups or Microsoft 365 groups in Azure AD. The owner of the group can approve or deny membership requests and can delegate control of group membership. Self-service group management features aren't available for mail-enabled security groups or distribution lists. For more information, see [Set up self-service group management in Azure Active Directory](../enterprise-users/groups-self-service-management.md).
-* **Risk detections**. Contains information about other risks triggered when a risk is detected and other pertinent information such as sign-in location and any details from Microsoft Defender for Cloud Apps.
+* **Risk detections** - Contains information about other risks triggered when a risk is detected and other pertinent information such as sign-in location and any details from Microsoft Defender for Cloud Apps.
## Next steps

See these security operations guide articles:
-* [Azure AD security operations overview](security-operations-introduction.md)
-* [Security operations for user accounts](security-operations-user-accounts.md)
-* [Security operations for privileged accounts](security-operations-privileged-accounts.md)
-* [Security operations for Privileged Identity Management](security-operations-privileged-identity-management.md)
-* [Security operations for applications](security-operations-applications.md)
-* [Security operations for devices](security-operations-devices.md)
-* [Security operations for infrastructure](security-operations-infrastructure.md)
+[Security operations for user accounts](security-operations-user-accounts.md)
+
+[Security operations for consumer accounts](security-operations-consumer-accounts.md)
+
+[Security operations for privileged accounts](security-operations-privileged-accounts.md)
+
+[Security operations for Privileged Identity Management](security-operations-privileged-identity-management.md)
+
+[Security operations for applications](security-operations-applications.md)
+
+[Security operations for devices](security-operations-devices.md)
+
+[Security operations for infrastructure](security-operations-infrastructure.md)
active-directory Security Operations Privileged Accounts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/security-operations-privileged-accounts.md
-Previously updated : 04/29/2022
+Last updated : 09/06/2022
You're entirely responsible for all layers of security for your on-premises IT environment.
The log files you use for investigation and monitoring are:

* [Azure AD Audit logs](../reports-monitoring/concept-audit-logs.md)
* [Microsoft 365 Audit logs](/microsoft-365/compliance/auditing-solutions-overview)
* [Azure Key Vault insights](../../key-vault/key-vault-insights-overview.md)

From the Azure portal, you can view the Azure AD Audit logs and download them as comma-separated value (CSV) or JavaScript Object Notation (JSON) files. The Azure portal has several ways to integrate Azure AD logs with other tools that allow for greater automation of monitoring and alerting:

* **[Microsoft Sentinel](../../sentinel/overview.md)**. Enables intelligent security analytics at the enterprise level by providing security information and event management (SIEM) capabilities.
+* **[Sigma rule templates](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure)** - Sigma is an evolving open standard for writing rules and templates that automated management tools can use to parse log files. Where Sigma templates exist for our recommended search criteria, we have added a link to the Sigma repo. The Sigma templates aren't written, tested, or managed by Microsoft. Rather, the repo and templates are created and collected by the worldwide IT security community.
* **[Azure Monitor](../../azure-monitor/overview.md)**. Enables automated monitoring and alerting of various conditions. Can create or use workbooks to combine data from different sources.
* **[Azure Event Hubs](../../event-hubs/event-hubs-about.md)** integrated with a SIEM. Enables Azure AD logs to be pushed to other SIEMs such as Splunk, ArcSight, QRadar, and Sumo Logic via the Azure Event Hubs integration. For more information, see [Stream Azure Active Directory logs to an Azure event hub](../reports-monitoring/tutorial-azure-monitor-stream-logs-to-event-hub.md).
* **[Microsoft Defender for Cloud Apps](/cloud-app-security/what-is-cloud-app-security)**. Enables you to discover and manage apps, govern across apps and resources, and check your cloud apps' compliance.
* **Microsoft Graph**. Enables you to export data and use Microsoft Graph to do more analysis. For more information, see [Microsoft Graph PowerShell SDK and Azure Active Directory Identity Protection](../identity-protection/howto-identity-protection-graph-api.md).
* **[Identity Protection](../identity-protection/overview-identity-protection.md)**. Generates three key reports you can use to help with your investigation:
* **Risky users**. Contains information about which users are at risk, details about detections, history of all risky sign-ins, and risk history.
+
* **Risky sign-ins**. Contains information about a sign-in that might indicate suspicious circumstances. For more information on investigating information from this report, see [Investigate risk](../identity-protection/howto-identity-protection-investigate-risk.md).
+
* **Risk detections**. Contains information about other risks triggered when a risk is detected and other pertinent information such as sign-in location and any details from Microsoft Defender for Cloud Apps.
* **[Securing workload identities with Identity Protection Preview](../identity-protection/concept-workload-identity-risk.md)**. Use to detect risk on workload identities across sign-in behavior and offline indicators of compromise.
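Logs downloaded as JSON from the portal can be triaged offline before a SIEM integration is in place. The following is a minimal sketch of that kind of triage; the record shape and field names (such as `category` and `activityDisplayName`) are simplified illustrations, not the exact export schema, so verify them against your own download:

```python
import json

def filter_audit_records(records, categories):
    """Return audit entries whose category is in the watch list."""
    return [r for r in records if r.get("category") in categories]

# Simplified stand-in for a portal JSON export (illustrative fields only).
raw = json.dumps([
    {"category": "RoleManagement", "activityDisplayName": "Add member to role"},
    {"category": "UserManagement", "activityDisplayName": "Update user"},
    {"category": "ApplicationManagement", "activityDisplayName": "Consent to application"},
])

records = json.loads(raw)
watched = filter_audit_records(records, {"RoleManagement", "ApplicationManagement"})
for r in watched:
    print(r["activityDisplayName"])
```

In practice, the same filtering is better expressed as a Log Analytics query or a Sentinel analytics rule; this sketch is only for ad hoc review of an exported file.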
You can monitor privileged account sign-in events in the Azure AD Sign-in logs.
| What to monitor | Risk level | Where | Filter/subfilter | Notes |
| - | - | - | - | - |
-| Sign-in failure, bad password threshold | High | Azure AD Sign-ins log | Status = Failure<br>-and-<br>error code = 50126 | Define a baseline threshold and then monitor and adjust to suit your organizational behaviors and limit false alerts from being generated.<br>[Azure Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/MultipleDataSources/PrivilegedAccountsSigninFailureSpikes.yaml) |
-| Failure because of Conditional Access requirement |High | Azure AD Sign-ins log | Status = Failure<br>-and-<br>error code = 53003<br>-and-<br>Failure reason = Blocked by Conditional Access | This event can be an indication an attacker is trying to get into the account.<br>[Azure Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/SigninLogs/UserAccounts-CABlockedSigninSpikes.yaml) |
+| Sign-in failure, bad password threshold | High | Azure AD Sign-ins log | Status = Failure<br>-and-<br>error code = 50126 | Define a baseline threshold and then monitor and adjust to suit your organizational behaviors and limit false alerts from being generated.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/MultipleDataSources/PrivilegedAccountsSigninFailureSpikes.yaml)<br><br>[Sigma rule templates](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) |
+| Failure because of Conditional Access requirement |High | Azure AD Sign-ins log | Status = Failure<br>-and-<br>error code = 53003<br>-and-<br>Failure reason = Blocked by Conditional Access | This event can be an indication an attacker is trying to get into the account.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/SigninLogs/UserAccounts-CABlockedSigninSpikes.yaml)<br><br>[Sigma rule templates](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) |
| Privileged accounts that don't follow naming policy| | Azure subscription | [List Azure role assignments using the Azure portal](../../role-based-access-control/role-assignments-list-portal.md)| List role assignments for subscriptions and alert where the sign-in name doesn't match your organization's format. An example is the use of ADM_ as a prefix. |
-| Interrupt | High, medium | Azure AD Sign-ins | Status = Interrupted<br>-and-<br>error code = 50074<br>-and-<br>Failure reason = Strong auth required<br>Status = Interrupted<br>-and-<br>Error code = 500121<br>Failure reason = Authentication failed during strong authentication request | This event can be an indication an attacker has the password for the account but can't pass the multifactor authentication challenge.<br>[Azure Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Hunting%20Queries/MultipleDataSources/AADPrivilegedAccountsFailedMFA.yaml) |
+| Interrupt | High, medium | Azure AD Sign-ins | Status = Interrupted<br>-and-<br>error code = 50074<br>-and-<br>Failure reason = Strong auth required<br>Status = Interrupted<br>-and-<br>Error code = 500121<br>Failure reason = Authentication failed during strong authentication request | This event can be an indication an attacker has the password for the account but can't pass the multi-factor authentication challenge.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Hunting%20Queries/MultipleDataSources/AADPrivilegedAccountsFailedMFA.yaml)<br><br>[Sigma rule templates](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) |
| Privileged accounts that don't follow naming policy| High | Azure AD directory | [List Azure AD role assignments](../roles/view-assignments.md)| List role assignments for Azure AD roles and alert where the UPN doesn't match your organization's format. An example is the use of ADM_ as a prefix. |
-| Discover privileged accounts not registered for multifactor authentication | High | Microsoft Graph API| Query for IsMFARegistered eq false for admin accounts. [List credentialUserRegistrationDetails - Microsoft Graph beta](/graph/api/reportroot-list-credentialuserregistrationdetails?view=graph-rest-beta&preserve-view=true&tabs=http) | Audit and investigate to determine if the event is intentional or an oversight. |
-| Account lockout | High | Azure AD Sign-ins log | Status = Failure<br>-and-<br>error code = 50053 | Define a baseline threshold, and then monitor and adjust to suit your organizational behaviors and limit false alerts from being generated.<br>[Azure Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Hunting%20Queries/MultipleDataSources/PrivilegedAccountsLockedOut.yaml) |
-| Account disabled or blocked for sign-ins | Low | Azure AD Sign-ins log | Status = Failure<br>-and-<br>Target = User UPN<br>-and-<br>error code = 50057 | This event could indicate someone is trying to gain access to an account after they've left the organization. Although the account is blocked, it's still important to log and alert on this activity.<br>[Azure Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Hunting%20Queries/SigninLogs/UserAccounts-BlockedAccounts.yaml) |
-| MFA fraud alert or block | High | Azure AD Sign-ins log/Azure Log Analytics | Sign-ins>Authentication details Result details = MFA denied, fraud code entered | Privileged user has indicated they haven't instigated the multi-factor authentication prompt, which could indicate an attacker has the password for the account.<br>[Azure Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/SigninLogs/MFARejectedbyUser.yaml) |
-| MFA fraud alert or block | High | Azure AD Audit log log/Azure Log Analytics | Activity type = Fraud reported - User is blocked for MFA or fraud reported - No action taken (based on tenant-level settings for fraud report) | Privileged user has indicated they haven't instigated the multi-factor authentication prompt, which could indicate an attacker has the password for the account.<br>[Azure Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/SigninLogs/MFARejectedbyUser.yaml) |
-| Privileged account sign-ins outside of expected controls | | Azure AD Sign-ins log | Status = Failure<br>UserPricipalName = \<Admin account\><br>Location = \<unapproved location\><br>IP address = \<unapproved IP\><br>Device info = \<unapproved Browser, Operating System\> | Monitor and alert on any entries that you've defined as unapproved.<br>[Azure Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Hunting%20Queries/SigninLogs/SuspiciousSignintoPrivilegedAccount.yaml) |
-| Outside of normal sign-in times | High | Azure AD Sign-ins log | Status = Success<br>-and-<br>Location =<br>-and-<br>Time = Outside of working hours | Monitor and alert if sign-ins occur outside of expected times. It's important to find the normal working pattern for each privileged account and to alert if there are unplanned changes outside of normal working times. Sign-ins outside of normal working hours could indicate compromise or possible insider threats.<br>[Azure Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Hunting%20Queries/MultipleDataSources/AnomolousSignInsBasedonTime.yaml) |
+| Discover privileged accounts not registered for multi-factor authentication | High | Microsoft Graph API| Query for IsMFARegistered eq false for admin accounts. [List credentialUserRegistrationDetails - Microsoft Graph beta](/graph/api/reportroot-list-credentialuserregistrationdetails?view=graph-rest-beta&preserve-view=true&tabs=http) | Audit and investigate to determine if the event is intentional or an oversight. |
+| Account lockout | High | Azure AD Sign-ins log | Status = Failure<br>-and-<br>error code = 50053 | Define a baseline threshold, and then monitor and adjust to suit your organizational behaviors and limit false alerts from being generated.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Hunting%20Queries/MultipleDataSources/PrivilegedAccountsLockedOut.yaml)<br><br>[Sigma rule templates](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) |
+| Account disabled or blocked for sign-ins | Low | Azure AD Sign-ins log | Status = Failure<br>-and-<br>Target = User UPN<br>-and-<br>error code = 50057 | This event could indicate someone is trying to gain access to an account after they've left the organization. Although the account is blocked, it's still important to log and alert on this activity.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Hunting%20Queries/SigninLogs/UserAccounts-BlockedAccounts.yaml)<br><br>[Sigma rule templates](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) |
+| MFA fraud alert or block | High | Azure AD Sign-ins log/Azure Log Analytics | Sign-ins>Authentication details Result details = MFA denied, fraud code entered | Privileged user has indicated they haven't instigated the multi-factor authentication prompt, which could indicate an attacker has the password for the account.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/SigninLogs/MFARejectedbyUser.yaml)<br><br>[Sigma rule templates](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) |
+| MFA fraud alert or block | High | Azure AD Audit log/Azure Log Analytics | Activity type = Fraud reported - User is blocked for MFA or fraud reported - No action taken (based on tenant-level settings for fraud report) | Privileged user has indicated they haven't instigated the multi-factor authentication prompt, which could indicate an attacker has the password for the account.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/SigninLogs/MFARejectedbyUser.yaml)<br><br>[Sigma rule templates](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) |
+| Privileged account sign-ins outside of expected controls | | Azure AD Sign-ins log | Status = Failure<br>UserPrincipalName = \<Admin account\><br>Location = \<unapproved location\><br>IP address = \<unapproved IP\><br>Device info = \<unapproved Browser, Operating System\> | Monitor and alert on any entries that you've defined as unapproved.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Hunting%20Queries/SigninLogs/SuspiciousSignintoPrivilegedAccount.yaml)<br><br>[Sigma rule templates](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) |
+| Outside of normal sign-in times | High | Azure AD Sign-ins log | Status = Success<br>-and-<br>Location =<br>-and-<br>Time = Outside of working hours | Monitor and alert if sign-ins occur outside of expected times. It's important to find the normal working pattern for each privileged account and to alert if there are unplanned changes outside of normal working times. Sign-ins outside of normal working hours could indicate compromise or possible insider threats.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Hunting%20Queries/MultipleDataSources/AnomolousSignInsBasedonTime.yaml)<br><br>[Sigma rule templates](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) |
| Identity protection risk | High | Identity Protection logs | Risk state = At risk<br>-and-<br>Risk level = Low, medium, high<br>-and-<br>Activity = Unfamiliar sign-in/TOR, and so on | This event indicates there's some abnormality detected with the sign-in for the account and should be alerted on. |
-| Password change | High | Azure AD Audit logs | Activity actor = Admin/self-service<br>-and-<br>Target = User<br>-and-<br>Status = Success or failure | Alert on any admin account password changes, especially for global admins, user admins, subscription admins, and emergency access accounts. Write a query targeted at all privileged accounts.<br>[Azure Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Hunting%20Queries/MultipleDataSources/PrivilegedAccountPasswordChanges.yaml) |
-| Change in legacy authentication protocol | High | Azure AD Sign-ins log | Client App = Other client, IMAP, POP3, MAPI, SMTP, and so on<br>-and-<br>Username = UPN<br>-and-<br>Application = Exchange (example) | Many attacks use legacy authentication, so if there's a change in auth protocol for the user, it could be an indication of an attack. |
-| New device or location | High | Azure AD Sign-ins log | Device info = Device ID<br>-and-<br>Browser<br>-and-<br>OS<br>-and-<br>Compliant/Managed<br>-and-<br>Target = User<br>-and-<br>Location | Most admin activity should be from [privileged access devices](/security/compass/privileged-access-devices), from a limited number of locations. For this reason, alert on new devices or locations.<br>[Azure Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Hunting%20Queries/SigninLogs/SuspiciousSignintoPrivilegedAccount.yaml) |
-| Audit alert setting is changed | High | Azure AD Audit logs | Service = PIM<br>-and-<br>Category = Role management<br>-and-<br>Activity = Disable PIM alert<br>-and-<br>Status = Success | Changes to a core alert should be alerted if unexpected. |
-| Administrators authenticating to other Azure AD tenants| Medium| Azure AD Sign-ins log| Status = success<br><br>Resource tenantID != Home Tenant ID| When scoped to Privileged Users, this monitor detects when an administrator has successfully authenticated to another Azure AD tenant with an identity in your organization's tenant. <br><br>Alert if Resource TenantID isn't equal to Home Tenant ID |
-|Admin User state changed from Guest to Member|Medium|Azure AD Audit logs|Activity: Update user<br><br>Category: UserManagement<br><br>UserType changed from Guest to Member|Monitor and alert on change of user type from Guest to Member.<br><br> Was this change expected?
-|Guest users invited to tenant by non-approved inviters|Medium|Azure AD Audit logs|Activity: Invite external user<br><br>Category: UserManagement<br><br>Initiated by (actor): User Principal Name|Monitor and alert on non-approved actors inviting external users.
+| Password change | High | Azure AD Audit logs | Activity actor = Admin/self-service<br>-and-<br>Target = User<br>-and-<br>Status = Success or failure | Alert on any admin account password changes, especially for global admins, user admins, subscription admins, and emergency access accounts. Write a query targeted at all privileged accounts.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Hunting%20Queries/MultipleDataSources/PrivilegedAccountPasswordChanges.yaml)<br><br>[Sigma rule templates](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) |
+| Change in legacy authentication protocol | High | Azure AD Sign-ins log | Client App = Other client, IMAP, POP3, MAPI, SMTP, and so on<br>-and-<br>Username = UPN<br>-and-<br>Application = Exchange (example) | Many attacks use legacy authentication, so if there's a change in auth protocol for the user, it could be an indication of an attack.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/17ead56ae30b1a8e46bb0f95a458bdeb2d30ba9b/Hunting%20Queries/SigninLogs/LegacyAuthAttempt.yaml)<br><br>[Sigma rule templates](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) |
+| New device or location | High | Azure AD Sign-ins log | Device info = Device ID<br>-and-<br>Browser<br>-and-<br>OS<br>-and-<br>Compliant/Managed<br>-and-<br>Target = User<br>-and-<br>Location | Most admin activity should be from [privileged access devices](/security/compass/privileged-access-devices), from a limited number of locations. For this reason, alert on new devices or locations.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Hunting%20Queries/SigninLogs/SuspiciousSignintoPrivilegedAccount.yaml)<br><br>[Sigma rule templates](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) |
+| Audit alert setting is changed | High | Azure AD Audit logs | Service = PIM<br>-and-<br>Category = Role management<br>-and-<br>Activity = Disable PIM alert<br>-and-<br>Status = Success | Changes to a core alert should be alerted if unexpected.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/SecurityAlert/DetectPIMAlertDisablingActivity.yaml)<br><br>[Sigma rule templates](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) |
+| Administrators authenticating to other Azure AD tenants| Medium| Azure AD Sign-ins log| Status = success<br><br>Resource tenantID != Home Tenant ID| When scoped to Privileged Users, this monitor detects when an administrator has successfully authenticated to another Azure AD tenant with an identity in your organization's tenant. <br><br>Alert if Resource TenantID isn't equal to Home Tenant ID<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Hunting%20Queries/SigninLogs/AdministratorsAuthenticatingtoAnotherAzureADTenant.yaml)<br><br>[Sigma rule templates](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) |
+|Admin User state changed from Guest to Member|Medium|Azure AD Audit logs|Activity: Update user<br><br>Category: UserManagement<br><br>UserType changed from Guest to Member|Monitor and alert on change of user type from Guest to Member.<br><br> Was this change expected?<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/AuditLogs/UserStatechangedfromGuesttoMember.yaml)<br><br>[Sigma rule templates](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) |
+|Guest users invited to tenant by non-approved inviters|Medium|Azure AD Audit logs|Activity: Invite external user<br><br>Category: UserManagement<br><br>Initiated by (actor): User Principal Name|Monitor and alert on non-approved actors inviting external users.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/AuditLogs/GuestUsersInvitedtoTenantbyNewInviters.yaml)<br><br>[Sigma rule templates](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) |
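The baseline-threshold pattern in the first rows of the table above (count error-code 50126 failures per account, alert past a tuned baseline) can be sketched as follows. This is not the logic of the linked Sentinel template; the record shape, field names, and threshold are illustrative assumptions:

```python
from collections import Counter

BAD_PASSWORD_CODE = 50126  # AADSTS50126: invalid username or password

def accounts_over_threshold(signins, baseline):
    """Return accounts whose bad-password failures exceed the baseline."""
    failures = Counter(
        s["userPrincipalName"]
        for s in signins
        if s.get("status") == "Failure" and s.get("errorCode") == BAD_PASSWORD_CODE
    )
    return {upn for upn, count in failures.items() if count > baseline}

# Simplified stand-ins for exported sign-in records (illustrative fields only).
signins = [
    {"userPrincipalName": "adm_alice@contoso.com", "status": "Failure", "errorCode": 50126},
    {"userPrincipalName": "adm_alice@contoso.com", "status": "Failure", "errorCode": 50126},
    {"userPrincipalName": "adm_alice@contoso.com", "status": "Failure", "errorCode": 50126},
    {"userPrincipalName": "adm_bob@contoso.com", "status": "Failure", "errorCode": 53003},
]

print(accounts_over_threshold(signins, baseline=2))  # flags adm_alice only
```

Start with a generous baseline, observe the false-positive rate for a few weeks, and tighten it to suit your organization's sign-in patterns.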
## Changes by privileged accounts
Privileged accounts that have been assigned permissions in Azure AD Domain Services should also be monitored. To audit privileged activity in Azure AD Domain Services, see:
* [Enable security audits for Azure Active Directory Domain Services](../../active-directory-domain-services/security-audit-events.md)
* [Audit Sensitive Privilege Use](/windows/security/threat-protection/auditing/audit-sensitive-privilege-use)
-| What to monitor | Risk level | Where | Filter/subfilter | Notes |
-||||-|-|
-| Attempted and completed changes | High | Azure AD Audit logs | Date and time<br>-and-<br>Service<br>-and-<br>Category and name of the activity (what)<br>-and-<br>Status = Success or failure<br>-and-<br>Target<br>-and-<br>Initiator or actor (who) | Any unplanned changes should be alerted on immediately. These logs should be retained to help with any investigation. Any tenant-level changes should be investigated immediately (link out to Infra doc) that would lower the security posture of your tenant. An example is excluding accounts from multifactor authentication or Conditional Access. Alert on any additions or changes to applications. See [Azure Active Directory security operations guide for Applications](security-operations-applications.md). |
-| **EXAMPLE**<br>Attempted or completed change to high-value apps or services | High | Audit log | Service<br>-and-<br>Category and name of the activity | <li>Date and time <li>Service <li>Category and name of the activity <li>Status = Success or failure <li>Target <li>Initiator or actor (who) |
-| Privileged changes in Azure AD Domain Services | High | Azure AD Domain Services | Look for event [4673](/windows/security/threat-protection/auditing/event-4673) | [Enable security audits for Azure Active Directory Domain Services](../../active-directory-domain-services/security-audit-events.md)<br>For a list of all privileged events, see [Audit Sensitive Privilege use](/windows/security/threat-protection/auditing/audit-sensitive-privilege-use). |
+| What to monitor | Risk level | Where | Filter/subfilter | Notes |
+| - | - | - | - | - |
+| Attempted and completed changes | High | Azure AD Audit logs | Date and time<br>-and-<br>Service<br>-and-<br>Category and name of the activity (what)<br>-and-<br>Status = Success or failure<br>-and-<br>Target<br>-and-<br>Initiator or actor (who) | Any unplanned changes should be alerted on immediately. These logs should be retained to help with any investigation. Any tenant-level changes that would lower the security posture of your tenant should be investigated immediately. An example is excluding accounts from multi-factor authentication or Conditional Access. Alert on any additions or changes to applications. See [Azure Active Directory security operations guide for Applications](security-operations-applications.md). |
+| **Example**<br>Attempted or completed change to high-value apps or services | High | Audit log | Service<br>-and-<br>Category and name of the activity | Date and time, Service, Category and name of the activity, Status = Success or failure, Target, Initiator or actor (who) |
+| Privileged changes in Azure AD Domain Services | High | Azure AD Domain Services | Look for event [4673](/windows/security/threat-protection/auditing/event-4673) | [Enable security audits for Azure Active Directory Domain Services](../../active-directory-domain-services/security-audit-events.md)<br>For a list of all privileged events, see [Audit Sensitive Privilege use](/windows/security/threat-protection/auditing/audit-sensitive-privilege-use). |
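Because these audit entries can be downloaded from the Azure portal as JSON, a quick offline triage pass is easy to script. The sketch below is illustrative only: field names such as `activityDisplayName`, `category`, and `result` are assumptions based on the common directoryAudit export shape, so verify them against your own export.

```python
# Sketch: triage an exported Azure AD audit log for records worth alerting on.
# Field names (activityDisplayName, category, result) are assumptions based on
# the common directoryAudit export shape; check them against your own export.
import json

WATCHED_CATEGORIES = {"RoleManagement", "Policy"}  # illustrative high-risk categories

def flag_records(records, watched=WATCHED_CATEGORIES):
    """Return records in a watched category, plus any failed operation."""
    return [
        r for r in records
        if r.get("category") in watched or r.get("result") == "failure"
    ]

raw = """[
  {"activityDisplayName": "Add member to role", "category": "RoleManagement", "result": "success"},
  {"activityDisplayName": "Update user", "category": "UserManagement", "result": "success"},
  {"activityDisplayName": "Update policy", "category": "Policy", "result": "failure"}
]"""
flagged = flag_records(json.loads(raw))
print([r["activityDisplayName"] for r in flagged])  # ['Add member to role', 'Update policy']
```

A production pipeline would feed the same filter from Azure Monitor or a SIEM rather than a manual download; the filter logic stays the same.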
## Changes to privileged accounts
Investigate changes to privileged accounts' authentication rules and privileges,
| What to monitor| Risk level| Where| Filter/subfilter| Notes |
| - | - | - | - | - |
+| Privileged account creation| Medium| Azure AD Audit logs| Service = Core Directory<br>-and-<br>Category = User management<br>-and-<br>Activity type = Add user<br>-correlate with-<br>Category type = Role management<br>-and-<br>Activity type = Add member to role<br>-and-<br>Modified properties = Role.DisplayName| Monitor creation of any privileged accounts. Look for correlation that's of a short time span between creation and deletion of accounts.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/AuditLogs/UserAssignedPrivilegedRole.yaml)<br><br>[Sigma rule templates](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) |
+| Changes to authentication methods| High| Azure AD Audit logs| Service = Authentication Method<br>-and-<br>Activity type = User registered security information<br>-and-<br>Category = User management| This change could be an indication of an attacker adding an auth method to the account so they can have continued access.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/MultipleDataSources/AuthenticationMethodsChangedforPrivilegedAccount.yaml)<br><br>[Sigma rule templates](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) |
+| Alert on changes to privileged account permissions| High| Azure AD Audit logs| Category = Role management<br>-and-<br>Activity type = Add eligible member (permanent)<br>-or-<br>Activity type = Add eligible member (eligible)<br>-and-<br>Status = Success or failure<br>-and-<br>Modified properties = Role.DisplayName| This alert is especially for accounts being assigned roles that aren't known or are outside of their normal responsibilities.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/AuditLogs/PrivilegedAccountPermissionsChanged.yaml)<br><br>[Sigma rule templates](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) |
+| Unused privileged accounts| Medium| Azure AD Access Reviews| | Perform a monthly review for inactive privileged user accounts.<br>[Sigma rule templates](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) |
| Accounts exempt from Conditional Access| High| Azure Monitor Logs<br>-or-<br>Access Reviews| Conditional Access = Insights and reporting| Any account exempt from Conditional Access is most likely bypassing security controls and is more vulnerable to compromise. Break-glass accounts are exempt. See information on how to monitor break-glass accounts later in this article.|
+| Addition of a Temporary Access Pass to a privileged account| High| Azure AD Audit logs| Activity: Admin registered security info<br><br>Status Reason: Admin registered temporary access pass method for user<br><br>Category: UserManagement<br><br>Initiated by (actor): User Principal Name<br><br>Target: User Principal Name|Monitor and alert on a Temporary Access Pass being created for a privileged user.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/tree/master/Detections/AuditLogs/AdditionofaTemporaryAccessPasstoaPrivilegedAccount.yaml)<br><br>[Sigma rule templates](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) |
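The "Privileged account creation" row above recommends correlating account creation with a role assignment that follows within a short time span. A minimal offline sketch of that correlation, assuming a hypothetical event shape (`activity`, `target`, `time`) that you would map from your exported log fields:

```python
# Sketch: correlate "Add user" events with "Add member to role" events for the
# same target inside a short window, per the guidance in the table above.
# The event shape is illustrative; adapt it to your audit-log export.
from datetime import datetime, timedelta

def newly_created_privileged(events, window=timedelta(hours=1)):
    """Return targets that were created and then assigned a role within `window`."""
    created = {}
    for e in events:
        if e["activity"] == "Add user":
            created[e["target"]] = e["time"]
    hits = []
    for e in events:
        if e["activity"] == "Add member to role":
            t0 = created.get(e["target"])
            if t0 is not None and timedelta(0) <= e["time"] - t0 <= window:
                hits.append(e["target"])
    return hits

events = [
    {"activity": "Add user", "target": "svc-admin@contoso.com", "time": datetime(2022, 9, 6, 10, 0)},
    {"activity": "Add member to role", "target": "svc-admin@contoso.com", "time": datetime(2022, 9, 6, 10, 20)},
    {"activity": "Add member to role", "target": "old-user@contoso.com", "time": datetime(2022, 9, 6, 11, 0)},
]
print(newly_created_privileged(events))  # ['svc-admin@contoso.com']
```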
For more information on how to monitor for exceptions to Conditional Access policies, see [Conditional Access insights and reporting](../conditional-access/howto-conditional-access-insights-reporting.md).
You can monitor privileged account changes by using Azure AD Audit logs and Azur
| What to monitor| Risk level| Where| Filter/subfilter| Notes |
| - | - | - | - | - |
+| Added to eligible privileged role| High| Azure AD Audit Logs| Service = PIM<br>-and-<br>Category = Role management<br>-and-<br>Activity type = Add member to role completed (eligible)<br>-and-<br>Status = Success or failure<br>-and-<br>Modified properties = Role.DisplayName| Any account eligible for a role is now being given privileged access. If the assignment is unexpected or into a role that isn't the responsibility of the account holder, investigate.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/AuditLogs/UserAssignedPrivilegedRole.yaml)<br><br>[Sigma rule templates](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) |
+| Roles assigned out of PIM| High| Azure AD Audit Logs| Service = PIM<br>-and-<br>Category = Role management<br>-and-<br>Activity type = Add member to role (permanent)<br>-and-<br>Status = Success or failure<br>-and-<br>Modified properties = Role.DisplayName| These roles should be closely monitored and alerted. Users shouldn't be assigned roles outside of PIM where possible.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/AuditLogs/PrivlegedRoleAssignedOutsidePIM.yaml)<br><br>[Sigma rule templates](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) |
+| Elevations| Medium| Azure AD Audit Logs| Service = PIM<br>-and-<br>Category = Role management<br>-and-<br>Activity type = Add member to role completed (PIM activation)<br>-and-<br>Status = Success or failure <br>-and-<br>Modified properties = Role.DisplayName| After a privileged account is elevated, it can now make changes that could affect the security of your tenant. All elevations should be logged and, if happening outside of the standard pattern for that user, should be alerted and investigated if not planned.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/tree/master/Detections/AuditLogs/AccountElevatedtoNewRole.yaml) |
+| Approvals and deny elevation| Low| Azure AD Audit Logs| Service = Access Review<br>-and-<br>Category = UserManagement<br>-and-<br>Activity type = Request approved or denied<br>-and-<br>Initiated actor = UPN| Monitor all elevations because it could give a clear indication of the timeline for an attack.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/AuditLogs/PIMElevationRequestRejected.yaml)<br><br>[Sigma rule templates](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) |
+| Changes to PIM settings| High| Azure AD Audit Logs| Service = PIM<br>-and-<br>Category = Role management<br>-and-<br>Activity type = Update role setting in PIM<br>-and-<br>Status reason = MFA on activation disabled (example)| One of these actions could reduce the security of the PIM elevation and make it easier for attackers to acquire a privileged account.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/4ad195f4fe6fdbc66fb8469120381e8277ebed81/Detections/AuditLogs/ChangestoPIMSettings.yaml)<br><br>[Sigma rule templates](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) |
+| Elevation not occurring on SAW/PAW| High| Azure AD Sign In logs| Device ID <br>-and-<br>Browser<br>-and-<br>OS<br>-and-<br>Compliant/Managed<br>Correlate with:<br>Service = PIM<br>-and-<br>Category = Role management<br>-and-<br>Activity type = Add member to role completed (PIM activation)<br>-and-<br>Status = Success or failure<br>-and-<br>Modified properties = Role.DisplayName| If this change is configured, any attempt to elevate on a non-PAW/SAW device should be investigated immediately because it could indicate an attacker is trying to use the account.<br>[Sigma rule templates](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) |
| Elevation to manage all Azure subscriptions| High| Azure Monitor| Activity Log tab <br>Directory Activity tab <br> Operations Name = Assigns the caller to user access admin <br> -and- <br> Event category = Administrative <br> -and-<br>Status = Succeeded, start, fail<br>-and-<br>Event initiated by| This change should be investigated immediately if it isn't planned. This setting could allow an attacker access to Azure subscriptions in your environment. |

For more information about managing elevation, see [Elevate access to manage all Azure subscriptions and management groups](../../role-based-access-control/elevate-access-global-admin.md). For information on monitoring elevations by using information available in the Azure AD logs, see [Azure Activity log](../../azure-monitor/essentials/activity-log.md), which is part of the Azure Monitor documentation.
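The "Elevation to manage all Azure subscriptions" row keys on the operation name `Assigns the caller to user access admin` in the Azure Activity Log. A hedged sketch of scanning an exported Activity Log for it follows; the entry shape (`operationName`, `status`, `caller`) is an assumption to map onto your export's actual fields.

```python
# Sketch: flag Activity Log entries for the "Assigns the caller to user access
# admin" operation described in the table above. The entry shape shown here is
# an assumption; map it to the fields in your own Activity Log export.
def find_elevate_access(entries):
    """Return callers who successfully elevated access at the tenant root scope."""
    return [
        e["caller"]
        for e in entries
        if e.get("operationName") == "Assigns the caller to user access admin"
        and e.get("status") == "Succeeded"
    ]

entries = [
    {"operationName": "Assigns the caller to user access admin", "status": "Succeeded", "caller": "admin@contoso.com"},
    {"operationName": "Create role assignment", "status": "Succeeded", "caller": "dev@contoso.com"},
]
print(find_elevate_access(entries))  # ['admin@contoso.com']
```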
For information about configuring alerts for Azure roles, see [Configure securit
See these security operations guide articles:
+[Azure AD security operations overview](security-operations-introduction.md)
+
+[Security operations for user accounts](security-operations-user-accounts.md)
+
+[Security operations for consumer accounts](security-operations-consumer-accounts.md)
+
+[Security operations for Privileged Identity Management](security-operations-privileged-identity-management.md)
+
+[Security operations for applications](security-operations-applications.md)
+
+[Security operations for devices](security-operations-devices.md)
+
+[Security operations for infrastructure](security-operations-infrastructure.md)
active-directory Security Operations Privileged Identity Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/security-operations-privileged-identity-management.md
Title: Azure Active Directory security operations for Privileged Identity Management
+description: Establish baselines and use Azure AD Privileged Identity Management (PIM) to monitor and alert on issues with accounts governed by PIM.
Previously updated : 08/19/2022 Last updated : 09/06/2022
+# Azure Active Directory security operations for Privileged Identity Management
+The security of business assets depends on the integrity of the privileged accounts that administer your IT systems. Cyber-attackers use credential theft attacks to target admin accounts and other privileged access accounts to try gaining access to sensitive data.
+For cloud services, prevention and response are the joint responsibilities of the cloud service provider and the customer.
Traditionally, organizational security has focused on the entry and exit points of a network as the security perimeter. However, SaaS apps and personal devices have made this approach less effective. In Azure Active Directory (Azure AD), we replace the network security perimeter with authentication in your organization's identity layer. As users are assigned to privileged administrative roles, their access must be protected in on-premises, cloud, and hybrid environments.
+You're entirely responsible for all layers of security for your on-premises IT environment. When you use Azure cloud services, prevention and response are joint responsibilities of Microsoft as the cloud service provider and you as the customer.
* For more information on the shared responsibility model, see [Shared responsibility in the cloud](../../security/fundamentals/shared-responsibility.md).
* For more information on securing access for privileged users, see [Securing Privileged access for hybrid and cloud deployments in Azure AD](../roles/security-planning.md).
+* For a wide range of videos, how-to guides, and content of key concepts for privileged identity, visit [Privileged Identity Management documentation](../privileged-identity-management/index.yml).
Privileged Identity Management (PIM) is an Azure AD service that enables you to manage, control, and monitor access to important resources in your organization. These resources include resources in Azure AD, Azure, and other Microsoft Online Services such as Microsoft 365 or Microsoft Intune. You can use PIM to help mitigate the following risks:
* Reduce the possibility of an unauthorized user inadvertently impacting sensitive resources.
+This article provides guidance on setting baselines and auditing sign-ins and usage of privileged accounts. Use the audit log sources to help maintain privileged account integrity.
## Where to look
+The log files you use for investigation and monitoring are:
* [Azure AD Audit logs](../reports-monitoring/concept-audit-logs.md)
* [Sign-in logs](../reports-monitoring/concept-all-sign-ins.md)
+* [Microsoft 365 Audit logs](/microsoft-365/compliance/auditing-solutions-overview)
* [Azure Key Vault logs](../../key-vault/general/logging.md?tabs=Vault)
+In the Azure portal, view the Azure AD Audit logs and download them as comma-separated value (CSV) or JavaScript Object Notation (JSON) files. The Azure portal has several ways to integrate Azure AD logs with other tools to automate monitoring and alerting:
+* [**Microsoft Sentinel**](../../sentinel/overview.md) – enables intelligent security analytics at the enterprise level by providing security information and event management (SIEM) capabilities.
+
+* **[Sigma rule templates](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure)** - Sigma is an evolving open standard for writing rules and templates that automated management tools can use to parse log files. Where Sigma templates exist for our recommended search criteria, we've added a link to the Sigma repo. The Sigma templates aren't written, tested, and managed by Microsoft. Rather, the repo and templates are created and collected by the worldwide IT security community.
* [**Azure Monitor**](../../azure-monitor/overview.md) – enables automated monitoring and alerting of various conditions. Can create or use workbooks to combine data from different sources.
+* [**Azure Event Hubs**](../../event-hubs/event-hubs-about.md) **integrated with a SIEM** – [Azure AD logs can be integrated with other SIEMs](../reports-monitoring/tutorial-azure-monitor-stream-logs-to-event-hub.md) such as Splunk, ArcSight, QRadar, and Sumo Logic via the Azure Event Hubs integration.
+* [**Microsoft Defender for Cloud Apps**](/cloud-app-security/what-is-cloud-app-security) – enables you to discover and manage apps, govern across apps and resources, and check your cloud apps' compliance.
* **[Securing workload identities with Identity Protection Preview](..//identity-protection/concept-workload-identity-risk.md)** - Used to detect risk on workload identities across sign-in behavior and offline indicators of compromise.
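For context on the Sigma rule templates mentioned in the list above: a Sigma rule is a small YAML document that names a log source and a detection condition. The rule below is a hypothetical illustration written for this article, not one of the repo's rules, and the `ActivityDisplayName` field name is an assumption that your log pipeline's field mapping must supply.

```yaml
title: Member Added To Privileged Azure AD Role (illustrative)
status: experimental
description: Example only - detects a member being added to an Azure AD role.
logsource:
  product: azure
  service: auditlogs
detection:
  selection:
    ActivityDisplayName: 'Add member to role'
  condition: selection
level: high
```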
+The rest of this article provides recommendations for setting a baseline to monitor and alert on, organized in a tier model. Links to pre-built solutions appear after each table. You can also build alerts using the preceding tools. The content is organized into the following areas:
* Baselines
+* Azure AD role assignment
* Azure AD role alert settings
The following are recommended baseline settings:
| What to monitor| Risk level| Recommendation| Roles| Notes |
| - |- |- |- |- |
+| Azure AD roles assignment| High| Require justification for activation. Require approval to activate. Set two-level approver process. On activation, require Azure AD Multi-Factor Authentication (MFA). Set maximum elevation duration to 8 hrs.| Privileged Role Administration, Global Administrator| A privileged role administrator can customize PIM in their Azure AD organization, including changing the experience for users activating an eligible role assignment. |
+| Azure Resource Role Configuration| High| Require justification for activation. Require approval to activate. Set two-level approver process. On activation, require Azure AD Multi-Factor Authentication. Set maximum elevation duration to 8 hrs.| Owner, Resource Administrator, User Access Administrator, Global Administrator, Security Administrator| Investigate immediately if not a planned change. This setting might enable attacker access to Azure subscriptions in your environment. |
## Azure AD roles assignment
+A privileged role administrator can customize PIM in their Azure AD organization, which includes changing the user experience of activating an eligible role assignment:
+* Prevent a bad actor from removing Azure AD Multi-Factor Authentication requirements to activate privileged access.
* Prevent malicious users from bypassing justification and approval when activating privileged access.

| What to monitor| Risk level| Where| Filter/sub-filter| Notes |
| - |- |- |- |- |
+| Alert on Add changes to privileged account permissions| High| Azure AD Audit logs| Category = Role Management<br>-and-<br>Activity Type = Add eligible member (permanent) <br>-and-<br>Activity Type = Add eligible member (eligible) <br>-and-<br>Status = Success/failure<br>-and-<br>Modified properties = Role.DisplayName| Monitor and always alert for any changes to privileged role administrator and global administrator. This can be an indication an attacker is trying to gain privilege to modify role assignment settings. If you don't have a defined threshold, alert on 4 in 60 minutes for users and 2 in 60 minutes for privileged accounts.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/AuditLogs/UserAddedtoAdminRole.yaml)<br><br>[Sigma rule templates](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) |
+| Alert on bulk deletion changes to privileged account permissions| High| Azure AD Audit logs| Category = Role Management<br>-and-<br>Activity Type = Remove eligible member (permanent) <br>-and-<br>Activity Type = Remove eligible member (eligible) <br>-and-<br>Status = Success/failure<br>-and-<br>Modified properties = Role.DisplayName| Investigate immediately if not a planned change. This setting could enable an attacker access to Azure subscriptions in your environment.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/AuditLogs/BulkChangestoPrivilegedAccountPermissions.yaml)<br><br>[Sigma rule templates](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) |
+| Changes to PIM settings| High| Azure AD Audit Log| Service = PIM<br>-and-<br>Category = Role Management<br>-and-<br>Activity Type = Update role setting in PIM<br>-and-<br>Status Reason = MFA on activation disabled (example)| Monitor and always alert for any changes to Privileged Role Administrator and Global Administrator. This can be an indication an attacker has access to modify role assignment settings. One of these actions could reduce the security of the PIM elevation and make it easier for attackers to acquire a privileged account.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/AuditLogs/ChangestoPIMSettings.yaml)<br><br>[Sigma rule templates](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) |
+| Approvals and deny elevation| High| Azure AD Audit Log| Service = Access Review<br>-and-<br>Category = UserManagement<br>-and-<br>Activity Type = Request Approved/Denied<br>-and-<br>Initiated actor = UPN| All elevations should be monitored. Log all elevations to give a clear indication of timeline for an attack.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/AuditLogs/PIMElevationRequestRejected.yaml)<br><br>[Sigma rule templates](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) |
+| Alert setting changes to disabled.| High| Azure AD Audit logs| Service =PIM<br>-and-<br>Category = Role Management<br>-and-<br>Activity Type = Disable PIM Alert<br>-and-<br>Status = Success /Failure| Always alert. Helps detect bad actor removing alerts associated with Azure AD Multi-Factor Authentication requirements to activate privileged access. Helps detect suspicious or unsafe activity.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/SecurityAlert/DetectPIMAlertDisablingActivity.yaml)<br><br>[Sigma rule templates](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) |
-For more information on identifying role setting changes in the Azure AD Audit log, see [View audit history for Azure AD roles in Privileged Identity Management](../privileged-identity-management/pim-how-to-use-audit-log.md).
+For more information on identifying role setting changes in the Azure AD Audit log, see [View audit history for Azure AD roles in Privileged Identity Management](../privileged-identity-management/pim-how-to-use-audit-log.md).
## Azure resource role assignment
-Monitoring Azure resource role assignments provides visibility into activity and activations for resources roles. These might be misused to create an attack surface to a resource. As you monitor for this type of activity, you are trying to detect:
+Monitoring Azure resource role assignments allows visibility into activity and activations for resource roles. These assignments might be misused to create an attack surface on a resource. As you monitor for this type of activity, you're trying to detect:
* Query role assignments at specific resources
Monitoring Azure resource role assignments provides visibility into activity and
| What to monitor| Risk level| Where| Filter/sub-filter| Notes | | - |- |- |- |- |
-| Audit Alert Resource Audit log for Privileged account activities| High| In PIM, under Azure Resources, Resource Audit| Action : Add eligible member to role in PIM completed (time bound) <br>-and-<br>Primary Target <br>-and-<br>Type User<br>-and-<br>Status = Succeeded<br>| Always alert. Helps detect bad actor adding eligible roles to manage all resources in Azure. |
-| Audit Alert Resource Audit for Disable Alert| Medium| In PIM, under Azure Resources, Resource Audit| Action : Disable Alert<br>-and-<br>Primary Target : Too many owners assigned to a resource<br>-and-<br>Status = Succeeded| Helps detect bad actor disabling alerts from Alerts pane which can bypass malicious activity being investigated |
-| Audit Alert Resource Audit for Disable Alert| Medium| In PIM, under Azure Resources, Resource Audit| Action : Disable Alert<br>-and-<br>Primary Target : Too many permanent owners assigned to a resource<br>-and-<br>Status = Succeeded| Prevent bad actor from disable alerts from Alerts pane which can bypass malicious activity being investigated |
-| Audit Alert Resource Audit for Disable Alert| Medium| In PIM, under Azure Resources, Resource Audit| Action : Disable Alert<br>-and-<br>Primary Target Duplicate role created<br>-and-<br>Status = Succeeded| Prevent bad actor from disable alerts from Alerts pane which can bypass malicious activity being investigated |
-
+| Audit Alert Resource Audit log for Privileged account activities| High| In PIM, under Azure Resources, Resource Audit| Action: Add eligible member to role in PIM completed (time bound) <br>-and-<br>Primary Target <br>-and-<br>Type User<br>-and-<br>Status = Succeeded<br>| Always alert. Helps detect a bad actor adding eligible roles to manage all resources in Azure. |
+| Audit Alert Resource Audit for Disable Alert| Medium| In PIM, under Azure Resources, Resource Audit| Action: Disable Alert<br>-and-<br>Primary Target: Too many owners assigned to a resource<br>-and-<br>Status = Succeeded| Helps detect a bad actor disabling alerts in the Alerts pane, which can let malicious activity evade investigation |
+| Audit Alert Resource Audit for Disable Alert| Medium| In PIM, under Azure Resources, Resource Audit| Action: Disable Alert<br>-and-<br>Primary Target: Too many permanent owners assigned to a resource<br>-and-<br>Status = Succeeded| Helps prevent a bad actor from disabling alerts in the Alerts pane, which can let malicious activity evade investigation |
+| Audit Alert Resource Audit for Disable Alert| Medium| In PIM, under Azure Resources, Resource Audit| Action: Disable Alert<br>-and-<br>Primary Target: Duplicate role created<br>-and-<br>Status = Succeeded| Helps prevent a bad actor from disabling alerts in the Alerts pane, which can let malicious activity evade investigation |
For more information on configuring alerts and auditing Azure resource roles, see:
For more information on configuring alerts and auditing Azure resource roles, se
## Access management for Azure resources and subscriptions
-Users or members of a group assigned to the Owner or User Access Administrator subscriptions roles, and Azure AD Global administrators that enabled subscription management in Azure AD have Resource administrator permissions by default. These administrators can assign roles, configure role settings, and review access using Privileged Identity Management (PIM) for Azure resources.
+Users or group members assigned the Owner or User Access Administrator subscription roles, and Azure AD Global Administrators who enabled subscription management in Azure AD, have Resource Administrator permissions by default. These administrators can assign roles, configure role settings, and review access by using Privileged Identity Management (PIM) for Azure resources.
-A user who has Resource administrator permissions can manage PIM for Resources. The risk this introduces that you must monitor for and mitigate, is that the capability can be used to allow bad actors to have privileged access to Azure subscription resources, such as virtual machines or storage accounts.
+A user who has Resource Administrator permissions can manage PIM for resources. Monitor for and mitigate this risk: the capability can be used to give bad actors privileged access to Azure subscription resources, such as virtual machines (VMs) or storage accounts.
| What to monitor| Risk level| Where| Filter/sub-filter| Notes | | - |- |- |- |- |
-| Elevations| High| Azure AD, under Manage, Properties| Periodically review setting.<br>Access management for Azure resources| Global administrators can elevate by enabling Access management for Azure resources.<br>Verify bad actors have not gained permissions to assign roles in all Azure subscriptions and management groups associated with Active Directory. |
+| Elevations| High| Azure AD, under Manage, Properties| Periodically review setting.<br>Access management for Azure resources| Global administrators can elevate by enabling Access management for Azure resources.<br>Verify bad actors haven't gained permissions to assign roles in all Azure subscriptions and management groups associated with Active Directory. |
-
-For more information see [Assign Azure resource roles in Privileged Identity Management](../privileged-identity-management/pim-resource-roles-assign-roles.md)
+For more information, see [Assign Azure resource roles in Privileged Identity Management](../privileged-identity-management/pim-resource-roles-assign-roles.md).
## Next steps
-See these security operations guide articles:
[Azure AD security operations overview](security-operations-introduction.md) [Security operations for user accounts](security-operations-user-accounts.md)
-[Security operations for privileged accounts](security-operations-privileged-accounts.md)
+[Security operations for consumer accounts](security-operations-consumer-accounts.md)
-[Security operations for Privileged Identity Management](security-operations-privileged-identity-management.md)
+[Security operations for privileged accounts](security-operations-privileged-accounts.md)
[Security operations for applications](security-operations-applications.md)
active-directory Security Operations User Accounts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/security-operations-user-accounts.md
Previously updated : 08/19/2022 Last updated : 09/06/2022
-# Azure Active Directory security operations for user accounts
+# Azure Active Directory security operations for user accounts
-User identity is one of the most important aspects of protecting your organization and data. This article provides guidance for monitoring account creation, deletion, and account usage. The first portion covers how to monitor for unusual account creation and deletion. The second portion covers how to monitor for unusual account usage.
+User identity is one of the most important aspects of protecting your organization and data. This article provides guidance for monitoring account creation, deletion, and account usage. The first portion covers how to monitor for unusual account creation and deletion. The second portion covers how to monitor for unusual account usage.
If you have not yet read the [Azure Active Directory (Azure AD) security operations overview](security-operations-introduction.md), we recommend you do so before proceeding.
This article covers general user accounts. For privileged accounts, see Security
## Define a baseline
-To discover anomalous behavior, you first must define what normal and expected behavior is. Defining what expected behavior for your organization is, helps you determine when unexpected behavior occurs. The definition also helps to reduce the noise level of false positives when monitoring and alerting.
+To discover anomalous behavior, you first must define what normal and expected behavior is. Defining expected behavior for your organization helps you determine when unexpected behavior occurs. The definition also helps reduce the noise level of false positives when monitoring and alerting.
-Once you define what you expect, you perform baseline monitoring to validate your expectations. With that information, you can monitor the logs for anything that falls outside of tolerances you define.
+Once you define what you expect, you perform baseline monitoring to validate your expectations. With that information, you can monitor the logs for anything that falls outside of tolerances you define.
Use the Azure AD Audit Logs, Azure AD Sign-in Logs, and directory attributes as your data sources for accounts created outside of normal processes. The following are suggestions to help you think about and define what normal is for your organization. * **User account creation** – evaluate the following:
- * Strategy and principles for tools and processes used for creating and managing user accounts. For example, are there standard attributes, formats that are applied to user account attributes.
+ * Strategy and principles for tools and processes used for creating and managing user accounts. For example, are there standard attributes and formats that are applied to user account attributes?
* Approved sources for account creation. For example, originating in Active Directory (AD), Azure Active Directory, or HR systems like Workday.
Use the Azure AD Audit Logs, Azure AD Sign-in Logs, and directory attributes as
* Provisioning of guest accounts and alert parameters for accounts created outside of entitlement management or other normal processes.
- * Strategy and alert parameters for accounts created, modified, or disabled by an account that is not an approved user administrator.
+ * Strategy and alert parameters for accounts created, modified, or disabled by an account that isn't an approved user administrator.
* Monitoring and alert strategy for accounts missing standard attributes, such as employee ID or not following organizational naming conventions.
Use the Azure AD Audit Logs, Azure AD Sign-in Logs, and directory attributes as
* The forests, domains, and organizational units (OUs) in scope for synchronization. Who are the approved administrators who can change these settings and how often is the scope checked?
- * The types of accounts that are synchronized. For example, user accounts and or service accounts.
+ * The types of accounts that are synchronized. For example, user accounts and/or service accounts.
* The process for creating privileged on-premises accounts and how the synchronization of this type of account is controlled.
For more information for securing and monitoring on-premises accounts, see [Prot
* The process to create and maintain a list of trusted individuals and/or processes expected to create and manage cloud user accounts.
- * The process to create and maintained an alert strategy for non-approved cloud-based accounts.
+ * The process to create and maintain an alert strategy for non-approved cloud-based accounts.
## Where to look
-The log files you use for investigation and monitoring are:
+The log files you use for investigation and monitoring are:
* [Azure AD Audit logs](../reports-monitoring/concept-audit-logs.md) * [Sign-in logs](../reports-monitoring/concept-all-sign-ins.md)
-* [Microsoft 365 Audit logs](/microsoft-365/compliance/auditing-solutions-overview)
+* [Microsoft 365 Audit logs](/microsoft-365/compliance/auditing-solutions-overview)
* [Azure Key Vault logs](../../key-vault/general/logging.md?tabs=Vault)
The log files you use for investigation and monitoring are:
* [UserRiskEvents log](../identity-protection/howto-identity-protection-investigate-risk.md)
-From the Azure portal you can view the Azure AD Audit logs and download as comma separated value (CSV) or JavaScript Object Notation (JSON) files. The Azure portal has several ways to integrate Azure AD logs with other tools that allow for greater automation of monitoring and alerting:
+From the Azure portal, you can view the Azure AD Audit logs and download as comma separated value (CSV) or JavaScript Object Notation (JSON) files. The Azure portal has several ways to integrate Azure AD logs with other tools that allow for greater automation of monitoring and alerting:
-* **[Microsoft Sentinel](../../sentinel/overview.md)** ΓÇô enables intelligent security analytics at the enterprise level by providing security information and event management (SIEM) capabilities.
+* **[Microsoft Sentinel](../../sentinel/overview.md)** – enables intelligent security analytics at the enterprise level by providing security information and event management (SIEM) capabilities.
+
+* **[Sigma rule templates](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure)** - Sigma is an evolving open standard for writing rules and templates that automated management tools can use to parse log files. Where Sigma templates exist for our recommended search criteria, we've added a link to the Sigma repo. The Sigma templates aren't written, tested, and managed by Microsoft. Rather, the repo and templates are created and collected by the worldwide IT security community.
* **[Azure Monitor](../../azure-monitor/overview.md)** – enables automated monitoring and alerting of various conditions. Can create or use workbooks to combine data from different sources.
-* **[Azure Event Hubs](../../event-hubs/event-hubs-about.md) integrated with a SIEM**- [Azure AD logs can be integrated to other SIEMs](../reports-monitoring/tutorial-azure-monitor-stream-logs-to-event-hub.md) such as Splunk, ArcSight, QRadar and Sumo Logic via the Azure Event Hub integration.
+* **[Azure Event Hubs](../../event-hubs/event-hubs-about.md)** integrated with a SIEM - [Azure AD logs can be integrated to other SIEMs](../reports-monitoring/tutorial-azure-monitor-stream-logs-to-event-hub.md) such as Splunk, ArcSight, QRadar and Sumo Logic via the Azure Event Hubs integration.
-* **[Microsoft Defender for Cloud Apps](/cloud-app-security/what-is-cloud-app-security)** ΓÇô enables you to discover and manage apps, govern across apps and resources, and check your cloud apps' compliance.
+* **[Microsoft Defender for Cloud Apps](/cloud-app-security/what-is-cloud-app-security)** – enables you to discover and manage apps, govern across apps and resources, and check your cloud apps' compliance.
* **[Securing workload identities with Identity Protection Preview](..//identity-protection/concept-workload-identity-risk.md)** - Used to detect risk on workload identities across sign-in behavior and offline indicators of compromise.
-Much of what you will monitor and alert on are the effects of your Conditional Access policies. You can use the [Conditional Access insights and reporting workbook](../conditional-access/howto-conditional-access-insights-reporting.md) to examine the effects of one or more Conditional Access policies on your sign-ins, as well as the results of policies, including device state. This workbook enables you to view an impact summary, and identify the impact over a specific time period. You can also use the workbook to investigate the sign-ins of a specific user.
+Much of what you will monitor and alert on are the effects of your Conditional Access policies. You can use the [Conditional Access insights and reporting workbook](../conditional-access/howto-conditional-access-insights-reporting.md) to examine the effects of one or more Conditional Access policies on your sign-ins, and the results of policies, including device state. This workbook enables you to view a summary, and identify the effects over a specific time period. You can also use the workbook to investigate the sign-ins of a specific user.
- The remainder of this article describes what we recommend you monitor and alert on, and is organized by the type of threat. Where there are specific pre-built solutions we link to them or provide samples following the table. Otherwise, you can build alerts using the preceding tools.
+ The remainder of this article describes what we recommend you monitor and alert on, organized by the type of threat. Where there are specific pre-built solutions, we link to them or provide samples following the table. Otherwise, you can build alerts using the preceding tools.
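As a small illustration of working with a downloaded JSON export before wiring up alerts, the sketch below filters audit records by activity and result. The field names follow the standard Azure AD audit record schema (`activityDisplayName`, `result`), but treat the exact shape of your export as something to verify against your own tenant:

```python
import json

def failed_user_adds(audit_json):
    """Filter an Azure AD audit-log JSON export down to 'Add user'
    records that did not succeed. Assumes records expose the standard
    activityDisplayName and result fields."""
    records = json.loads(audit_json)
    return [
        r for r in records
        if r.get("activityDisplayName") == "Add user"
        and r.get("result") != "success"
    ]
```

A filter like this is only a starting point; in practice, you would route the same logic through Azure Monitor or your SIEM rather than ad hoc scripts.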
## Account creation
Anomalous account creation can indicate a security issue. Short lived accounts,
Account creation and deletion outside of normal identity management processes should be monitored in Azure AD. Short-lived accounts are accounts created and deleted in a short time span. This type of account creation and quick deletion could mean a bad actor is trying to avoid detection by creating accounts, using them, and then deleting the account.
-Short-lived account patterns might indicate non-approved people or processes might have the right to create and delete accounts that fall outside of established processes and policies. This type of behavior removes visible markers from the directory.
+Short-lived account patterns might indicate that non-approved people or processes have the right to create and delete accounts outside of established processes and policies. This type of behavior removes visible markers from the directory.
-If the data trail for account creation and deletion is not discovered quickly, the information required to investigate an incident may no longer exist. For example, accounts might be deleted and then purged from the recycle bin. Audit logs are retained for 30 days. However, you can export your logs to Azure Monitor or a security information and event management (SIEM) solution for longer term retention.
+If the data trail for account creation and deletion is not discovered quickly, the information required to investigate an incident may no longer exist. For example, accounts might be deleted and then purged from the recycle bin. Audit logs are retained for 30 days. However, you can export your logs to Azure Monitor or a security information and event management (SIEM) solution for longer term retention.
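The short-lived account pattern can be approximated offline as a simple correlation over exported audit events. A minimal sketch, assuming hypothetical event dictionaries with `activity`, `upn`, `status`, and `time` keys rather than the exact export schema:

```python
from collections import defaultdict
from datetime import datetime, timedelta

def short_lived_accounts(events, window_hours=24):
    """Return UPNs that were added and then deleted within the window.

    `events` is a list of dicts with hypothetical keys: activity
    ("Add user" / "Delete user"), upn, status, and time (datetime).
    """
    by_upn = defaultdict(dict)
    for e in events:
        if e["status"] != "success":
            continue
        # Keep the timestamp of each successful activity per account.
        by_upn[e["upn"]][e["activity"]] = e["time"]

    flagged = []
    for upn, acts in by_upn.items():
        added = acts.get("Add user")
        deleted = acts.get("Delete user")
        if added and deleted and \
                timedelta(0) <= deleted - added <= timedelta(hours=window_hours):
            flagged.append(upn)
    return flagged
```

In production, the linked Microsoft Sentinel template performs this correlation directly against the audit log; the sketch only shows the shape of the logic.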
-| What to monitor | Risk Level | Where | Filter/sub-filter | Notes |
-||||--|-|
-| Account creation and deletion events within a close time frame. | High | Azure AD Audit logs | Activity: Add user<br>Status = success<br>-and-<br>Activity: Delete user<br>Status = success<br> | Search for user principal name (UPN) events. Look for accounts created and then deleted in under 24 hours.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/AuditLogs/AccountCreatedandDeletedinShortTimeframe.yaml) |
-| Accounts created and deleted by non-approved users or processes. | Medium | Azure AD Audit logs | Initiated by (actor) ΓÇô USER PRINCIPAL NAME<br>-and-<br>Activity: Add user<br>Status = success<br>and-or<br>Activity: Delete user<br>Status = success | If the actor are non-approved users, configure to send an alert. <br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/AuditLogs/AccountCreatedDeletedByNonApprovedUser.yaml) |
-| Accounts from non-approved sources. | Medium | Azure AD Audit logs | Activity: Add user<br>Status = success<br>Target(s) = USER PRINCIPAL NAME | If the entry is not from an approved domain or is a known blocked domain, configure to send an alert. |
-| Accounts assigned to a privileged role. | High | Azure AD Audit logs | Activity: Add user<br>Status = success<br>-and-<br>Activity: Delete user<br>Status = success<br>-and-<br>Activity: Add member to role<br>Status = success | If the account is assigned to an Azure AD role, Azure role, or privileged group membership, alert and prioritize the investigation.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/AuditLogs/UserAssignedPrivilegedRole.yaml) |
+|What to monitor|Risk Level|Where|Filter/sub-filter|Notes|
+||||||
+| Account creation and deletion events within a close time frame. | High | Azure AD Audit logs | Activity: Add user<br>Status = success<br>-and-<br>Activity: Delete user<br>Status = success<br> | Search for user principal name (UPN) events. Look for accounts created and then deleted in under 24 hours.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/AuditLogs/AccountCreatedandDeletedinShortTimeframe.yaml) |
+| Accounts created and deleted by non-approved users or processes. | Medium| Azure AD Audit logs | Initiated by (actor) – USER PRINCIPAL NAME<br>-and-<br>Activity: Add user<br>Status = success<br>and-or<br>Activity: Delete user<br>Status = success | If the actors are non-approved users, configure to send an alert. <br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/AuditLogs/AccountCreatedDeletedByNonApprovedUser.yaml) |
+| Accounts from non-approved sources. | Medium | Azure AD Audit logs | Activity: Add user<br>Status = success<br>Target(s) = USER PRINCIPAL NAME | If the entry isn't from an approved domain or is a known blocked domain, configure to send an alert.<br> [Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/MultipleDataSources/Accountcreatedfromnon-approvedsources.yaml) |
+| Accounts assigned to a privileged role.| High | Azure AD Audit logs | Activity: Add user<br>Status = success<br>-and-<br>Activity: Delete user<br>Status = success<br>-and-<br>Activity: Add member to role<br>Status = success | If the account is assigned to an Azure AD role, Azure role, or privileged group membership, alert and prioritize the investigation.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/AuditLogs/UserAssignedPrivilegedRole.yaml) |
-Both privileged and non-privileged accounts should be monitored and alerted. However, since privileged accounts have administrative permissions, they should have higher priority in your monitor, alert, and respond processes.
+Both privileged and non-privileged accounts should be monitored and alerted on. However, because privileged accounts have administrative permissions, they should have higher priority in your monitoring, alerting, and response processes.
### Accounts not following naming policies
-User accounts not following naming policies might have been created outside of organizational policies.
+User accounts not following naming policies might have been created outside of organizational policies.
A best practice is to have a naming policy for user objects. Having a naming policy makes management easier and helps provide consistency. The policy can also help discover when users have been created outside of approved processes. A bad actor might not be aware of your naming standards, which can make it easier to detect an account provisioned outside of your organizational processes.
Organizations tend to have specific formats and attributes that are used for cre
* User account UPN = Firstname.Lastname@contoso.com
-User accounts also frequently have an attribute that identifies a real user. For example, EMPID = XXXNNN. The following are suggestions to help you think about and define what normal is for your organization, as well as thing to consider when defining your baseline for log entries where accounts don't follow your organization's naming convention:
+Frequently, user accounts have an attribute that identifies a real user. For example, EMPID = XXXNNN. Use the following suggestions to help define what normal is for your organization and to define a baseline for log entries where accounts don't follow your naming convention:
-* Accounts that don't follow the naming convention. For example, `nnnnnnn@contoso.com` versus `firstname.lastname@contoso.com`.
+* Accounts that don't follow the naming convention. For example, `nnnnnnn@contoso.com` versus `firstname.lastname@contoso.com`.
-* Accounts that don't have the standard attributes populated or are not in the correct format. For example, not having a valid employee ID.
+* Accounts that don't have the standard attributes populated or aren't in the correct format. For example, not having a valid employee ID.
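Checks like these can be scripted against exported account data. The following is a minimal sketch, assuming a hypothetical `firstname.lastname@contoso.com` UPN format and an XXXNNN `EmployeeID` pattern; both regular expressions are placeholders for your own policy:

```python
import re

# Hypothetical organizational formats -- adjust to your own naming policy.
UPN_PATTERN = re.compile(r"^[a-z]+\.[a-z]+@contoso\.com$")  # firstname.lastname@contoso.com
EMPID_PATTERN = re.compile(r"^[A-Z]{3}\d{3}$")              # EMPID = XXXNNN

def naming_policy_violations(account):
    """Return a list of violations for an account dict with
    hypothetical `upn` and `employee_id` keys."""
    problems = []
    if not UPN_PATTERN.match(account.get("upn", "")):
        problems.append("UPN doesn't match firstname.lastname@contoso.com")
    if not EMPID_PATTERN.match(account.get("employee_id") or ""):
        problems.append("EmployeeID missing or not in XXXNNN format")
    return problems
```

Accounts that return a non-empty list are candidates for the low-risk alerts described in the table that follows.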
| What to monitor| Risk Level| Where| Filter/sub-filter| Notes | | - | - | - | - | - |
-| User accounts that do not have expected attributes defined.| Low| Azure AD Audit logs| Activity: Add user<br>Status = success| Look for accounts with your standard attributes either null or in the wrong format. For example, EmployeeID |
-| User accounts created using incorrect naming format.| Low| Azure AD Audit logs| Activity: Add user<br>Status = success| Look for accounts with a UPN that does not follow your naming policy. |
-| Privileged accounts that do not follow naming policy.| High| Azure Subscription| [List Azure role assignments using the Azure portal - Azure RBAC](../../role-based-access-control/role-assignments-list-portal.md)| List role assignments for subscriptions and alert where sign in name does not match your organizations format. For example, ADM_ as a prefix. |
-| Privileged accounts that do not follow naming policy.| High| Azure AD directory| [List Azure AD role assignments](../roles/view-assignments.md)| List roles assignments for Azure AD roles alert where UPN does not match your organizations format. For example, ADM_ as a prefix. |
--
+| User accounts that don't have expected attributes defined.| Low| Azure AD Audit logs| Activity: Add user<br>Status = success| Look for accounts with your standard attributes either null or in the wrong format. For example, EmployeeID <br> [Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/AuditLogs/Useraccountcreatedwithoutexpectedattributesdefined.yaml) |
+| User accounts created using incorrect naming format.| Low| Azure AD Audit logs| Activity: Add user<br>Status = success| Look for accounts with a UPN that doesn't follow your naming policy. <br> [Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/AuditLogs/UserAccountCreatedUsingIncorrectNamingFormat.yaml) |
+| Privileged accounts that don't follow naming policy.| High| Azure Subscription| [List Azure role assignments using the Azure portal - Azure RBAC](../../role-based-access-control/role-assignments-list-portal.md)| List role assignments for subscriptions and alert where the sign-in name doesn't match your organization's format. For example, ADM_ as a prefix. |
+| Privileged accounts that don't follow naming policy.| High| Azure AD directory| [List Azure AD role assignments](../roles/view-assignments.md)| List role assignments for Azure AD roles and alert where the UPN doesn't match your organization's format. For example, ADM_ as a prefix. |
For more information on parsing, see:
-* For Azure AD Audit logs - [Parse text data in Azure Monitor Logs](../../azure-monitor/logs/parse-text.md)
+* Azure AD Audit logs - [Parse text data in Azure Monitor Logs](../../azure-monitor/logs/parse-text.md)
-* For Azure Subscriptions - [List Azure role assignments using Azure PowerShell](../../role-based-access-control/role-assignments-list-powershell.md)
+* Azure Subscriptions - [List Azure role assignments using Azure PowerShell](../../role-based-access-control/role-assignments-list-powershell.md)
-* For Azure Active Directory - [List Azure AD role assignments](../roles/view-assignments.md)
+* Azure Active Directory - [List Azure AD role assignments](../roles/view-assignments.md)
### Accounts created outside normal processes Having standard processes to create users and privileged accounts is important so that you can securely control the lifecycle of identities. If users are provisioned and deprovisioned outside of established processes, it can introduce security risks. Operating outside of established processes can also introduce identity management problems. Potential risks include:
-* User and privileged accounts might not be governed to adhere to organizational policies. This can lead to a wider attack surface on accounts that are not managed correctly.
+* User and privileged accounts might not be governed to adhere to organizational policies. This gap can lead to a wider attack surface on accounts that aren't managed correctly.
-* It becomes harder to detect when bad actors create accounts for malicious purposes. By having valid accounts created outside of established procedures, it becomes harder to detect when accounts are created, or permissions modified for malicious purposes.
+* It becomes harder to detect when bad actors create accounts for malicious purposes. By having valid accounts created outside of established procedures, it becomes harder to detect when accounts are created, or permissions modified for malicious purposes.
We recommend that user and privileged accounts only be created following your organization's policies. For example, an account should be created with the correct naming standards, organizational information, and under the scope of the appropriate identity governance. Organizations should have rigorous controls for who has the rights to create, manage, and delete identities. Roles to create these accounts should be tightly managed and the rights only available after following an established workflow to approve and obtain these permissions. | What to monitor| Risk Level| Where| Filter/sub-filter| Notes | | - | - | - | - | - |
-| User accounts created or deleted by non-approved users or processes.| Medium| Azure AD Audit logs| Activity: Add user<br>Status = success<br>and-or-<br>Activity: Delete user<br>Status = success<br>-and-<br>Initiated by (actor) = USER PRINCIPAL NAME| Alert on accounts created by non-approved users or processes. Prioritize accounts created with heightened privileges.<br>[Azure Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/AuditLogs/AccountCreatedDeletedByNonApprovedUser.yaml) |
+| User accounts created or deleted by non-approved users or processes.| Medium| Azure AD Audit logs| Activity: Add user<br>Status = success<br>and-or-<br>Activity: Delete user<br>Status = success<br>-and-<br>Initiated by (actor) = USER PRINCIPAL NAME| Alert on accounts created by non-approved users or processes. Prioritize accounts created with heightened privileges.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/AuditLogs/AccountCreatedDeletedByNonApprovedUser.yaml) |
| User accounts created or deleted from non-approved sources.| Medium| Azure AD Audit logs| Activity: Add user<br>Status = success<br>-or-<br>Activity: Delete user<br>Status = success<br>-and-<br>Target(s) = USER PRINCIPAL NAME| Alert when the domain is non-approved or known blocked domain. |
+## Unusual sign-ins
-## Unusual sign ins
-
-Seeing failures for user authentication is normal. But seeing patterns or blocks of failures can be an indicator that something is happening with a user's Identity. For example, in the case of Password spray or Brute Force attacks, or when a user account is compromised. It is critical that you monitor and alert when patterns emerge. This helps ensure you can protect the user and your organization's data.
+Seeing failures for user authentication is normal. But seeing patterns or blocks of failures can indicate that something is happening with a user's identity: for example, during password spray or brute force attacks, or when a user account is compromised. It's critical that you monitor and alert when patterns emerge. This helps ensure you can protect the user and your organization's data.
-Success appears to say all is well. But it can mean that a bad actor has successfully accessed a service. Monitoring successful logins helps you detect user accounts that are gaining access but are not user accounts that should have access. User authentication successes are normal entries in Azure AD Sign-Ins logs. We recommend you monitor and alert to detect when patterns emerge. This helps ensure you can protect user accounts and your organization's data.
+Success appears to say all is well. But it can mean that a bad actor has successfully accessed a service. Monitoring successful logins helps you detect user accounts that are gaining access but aren't user accounts that should have access. User authentication successes are normal entries in Azure AD Sign-Ins logs. We recommend you monitor and alert to detect when patterns emerge. This helps ensure you can protect user accounts and your organization's data.
-
-As you design and operationalize a log monitoring and alerting strategy, consider the tools available to you through the Azure portal. Identity Protection enables you to automate the detection, protection, and remediation of identity-based risks. Identity protection uses intelligence-fed machine learning and heuristic systems to detect risk and assign a risk score for users and sign ins. Customers can configure policies based on a risk level for when to allow or deny access or allow the user to securely self-remediate from a risk. The following Identity Protection risk detections inform risk levels today:
+As you design and operationalize a log monitoring and alerting strategy, consider the tools available to you through the Azure portal. Identity Protection enables you to automate the detection, protection, and remediation of identity-based risks. Identity protection uses intelligence-fed machine learning and heuristic systems to detect risk and assign a risk score for users and sign-ins. Customers can configure policies based on a risk level for when to allow or deny access or allow the user to securely self-remediate from a risk. The following Identity Protection risk detections inform risk levels today:
| What to monitor | Risk Level | Where | Filter/sub-filter | Notes |
| - | - | - | - | - |
| Suspicious inbox forwarding sign-in risk detection| Varies| Azure AD Risk Detection logs| UX: Suspicious inbox forwarding<br><br>API: See [riskDetection resource type - Microsoft Graph](/graph/api/resources/riskdetection)| See [What is risk? Azure AD Identity Protection](../identity-protection/concept-identity-protection-risks.md) | | Azure AD threat intelligence sign-in risk detection| High| Azure AD Risk Detection logs| UX: Azure AD threat intelligence<br>API: See [riskDetection resource type - Microsoft Graph](/graph/api/resources/riskdetection)| See [What is risk? Azure AD Identity Protection](../identity-protection/concept-identity-protection-risks.md) |
-For more information, visit [What is Identity Protection](../identity-protection/overview-identity-protection.md).
-
+For more information, visit [What is Identity Protection](../identity-protection/overview-identity-protection.md).
### What to look for

Configure monitoring on the data within the Azure AD Sign-ins Logs to ensure that alerting occurs and adheres to your organization's security policies. Some examples of this are:
-* **Failed Authentications**: As humans we all get our passwords wrong from time to time. However, many failed authentications can indicate that a bad actor is trying to obtain access. Attacks differ in ferocity but can range from a few attempts per hour to a much higher rate. For example, Password Spray normally preys on easier passwords against many accounts, while Brute Force attempts many passwords against targeted accounts.
+* **Failed Authentications**: As humans, we all get our passwords wrong from time to time. However, many failed authentications can indicate that a bad actor is trying to obtain access. Attacks differ in ferocity but can range from a few attempts per hour to a much higher rate. For example, password spray normally preys on easier passwords against many accounts, while brute force attempts many passwords against targeted accounts.
-* **Interrupted Authentications**: An Interrupt in Azure AD represents an injection of an additional process to satisfy authentication, such as when enforcing a control in a CA policy. This is a normal event and can happen when applications are not configured correctly. But when you see many interrupts for a user account it could indicate something is happening with that account.
+* **Interrupted Authentications**: An Interrupt in Azure AD represents an injection of a process to satisfy authentication, such as when enforcing a control in a CA policy. This is a normal event and can happen when applications aren't configured correctly. But when you see many interrupts for a user account it could indicate something is happening with that account.
- * For example, if you filtered on a user in Sign-in logs and see a large volume of sign in status = Interrupted and Conditional Access = Failure. Digging deeper it may show in authentication details that the password is correct, but that strong authentication is required. This could mean the user is not completing multi-factor authentication (MFA) which could indicate the user's password is compromised and the bad actor is unable to fulfill MFA.
+ * For example, if you filter on a user in the Sign-in logs and see a large volume of sign-in status = Interrupted and Conditional Access = Failure, the authentication details may show that the password is correct but that strong authentication is required. This could mean the user isn't completing multi-factor authentication (MFA), which could indicate the user's password is compromised and the bad actor is unable to fulfill MFA.
-* **Smart lock out**: Azure AD provides a smart lockout service which introduces the concept of familiar and non-familiar locations to the authentication process. A user account visiting a familiar location might authenticate successfully while a bad actor unfamiliar with the same location is blocked after several attempts. Look for accounts that have been locked out and investigate further.
+* **Smart lockout**: Azure AD provides a smart lockout service, which introduces the concept of familiar and unfamiliar locations to the authentication process. A user account visiting a familiar location might authenticate successfully, while a bad actor unfamiliar with the same location is blocked after several attempts. Look for accounts that have been locked out and investigate further.
-* **IP Changes**: It is normal to see users originating from different IP addresses. However, Zero Trust states never trust and always verify. Seeing a large volume of IP addresses and failed sign ins can be an indicator of intrusion. Look for a pattern of many failed authentications taking place from multiple IP addresses. Note, virtual private network (VPN) connections can cause false positives. Regardless of the challenges, we recommend you monitor for IP address changes and if possible, use Azure AD Identity Protection to automatically detect and mitigate these risks.
+* **IP changes**: It's normal to see users originating from different IP addresses. However, Zero Trust states never trust and always verify. Seeing a large volume of IP addresses and failed sign-ins can be an indicator of intrusion. Look for a pattern of many failed authentications taking place from multiple IP addresses. Note that virtual private network (VPN) connections can cause false positives. Regardless of the challenges, we recommend you monitor for IP address changes and, if possible, use Azure AD Identity Protection to automatically detect and mitigate these risks.
-* **Locations**: Generally, you expect a user account to be in the same geographical location. You also expect sign ins from locations where you have employees or business relations. When the user account comes from a different international location in less time than it would take to travel there, it can indicate the user account is being abused. Note, VPNs can cause false positives, we recommend you monitor for user accounts signing in from geographically distant locations and if possible, use Azure AD Identity Protection to automatically detect and mitigate these risks.
+* **Locations**: Generally, you expect a user account to be in the same geographical location. You also expect sign-ins from locations where you have employees or business relations. When the user account signs in from a different international location in less time than it would take to travel there, it can indicate the user account is being abused. Because VPNs can cause false positives, we recommend you monitor for user accounts signing in from geographically distant locations and, if possible, use Azure AD Identity Protection to automatically detect and mitigate these risks.
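The location checks above can be expressed as a simple geographic-velocity ("impossible travel") test. The following is a minimal Python sketch, assuming sign-in events have already been exported with timestamps and coordinates; the `SignIn` fields and the 900 km/h threshold are illustrative assumptions, not the Azure AD log schema:

```python
# Hypothetical sketch: flag "impossible travel" between consecutive sign-ins.
# Field names and the speed threshold are illustrative assumptions.
from dataclasses import dataclass
from datetime import datetime
from math import radians, sin, cos, asin, sqrt

@dataclass
class SignIn:
    user: str
    time: datetime
    lat: float
    lon: float

def km_between(a: SignIn, b: SignIn) -> float:
    """Great-circle (haversine) distance, Earth radius ~6371 km."""
    dlat = radians(b.lat - a.lat)
    dlon = radians(b.lon - a.lon)
    h = sin(dlat / 2) ** 2 + cos(radians(a.lat)) * cos(radians(b.lat)) * sin(dlon / 2) ** 2
    return 2 * 6371 * asin(sqrt(h))

def impossible_travel(events: list[SignIn], max_kmh: float = 900.0) -> list[tuple[SignIn, SignIn]]:
    """Return consecutive sign-in pairs whose implied travel speed exceeds max_kmh."""
    flagged = []
    for prev, cur in zip(events, events[1:]):
        hours = (cur.time - prev.time).total_seconds() / 3600
        if hours > 0 and km_between(prev, cur) / hours > max_kmh:
            flagged.append((prev, cur))
    return flagged
```

In production you would rely on Azure AD Identity Protection's atypical-travel detection rather than hand-rolled checks; a sketch like this is only useful for reasoning about thresholds and false positives (for example, VPN egress points).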
-For this risk area we recommend you monitor both standard user accounts and privileged accounts but prioritize investigations of privileged accounts. Privileged accounts are the most important accounts in any Azure AD tenant. For specific guidance for privileged accounts, see Security operations ΓÇô privileged accounts.
+For this risk area, we recommend you monitor standard user accounts and privileged accounts but prioritize investigations of privileged accounts. Privileged accounts are the most important accounts in any Azure AD tenant. For specific guidance for privileged accounts, see Security operations – privileged accounts.
### How to detect
-You use Azure Identity Protection and the Azure AD sign-in logs to help discover threats indicated by unusual sign-in characteristics. Information about Identity Protection is available at [What is Identity Protection](../identity-protection/overview-identity-protection.md). You can also replicate the data to Azure Monitor or a SIEM for monitoring and alerting purposes. To define normal for your environment and to set a baseline, determine the following:
+You use Azure Identity Protection and the Azure AD sign-in logs to help discover threats indicated by unusual sign-in characteristics. Information about Identity Protection is available at [What is Identity Protection](../identity-protection/overview-identity-protection.md). You can also replicate the data to Azure Monitor or a SIEM for monitoring and alerting purposes. To define normal for your environment and to set a baseline, determine:
-* the parameters that you consider normal for your user base.
+* the parameters you consider normal for your user base.
* the average number of password attempts over time before the user calls the service desk or performs a self-service password reset.
* how many MFA attempts you want to allow before alerting, and whether the threshold differs for user accounts and privileged accounts.
-* if legacy authentication is enabled and your roadmap for discontinuing usage.
+* if legacy authentication is enabled and your roadmap for discontinuing usage.
* the known egress IP addresses for your organization.
* whether there are groups of users that remain stationary within a network location or country.
-* Identify any other indicators for unusual sign ins that are specific to your organization. For example days or times of the week or year that your organization does not operate.
+* any other indicators of unusual sign-ins specific to your organization. For example, days or times of the week or year when your organization doesn't operate.
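One simple way to turn a baseline like the one described above into an alert rule is a mean-plus-deviation threshold over exported daily counts. This is a hedged sketch; the three-sigma cutoff and function names are illustrative assumptions, not prescribed by this guidance:

```python
# Hypothetical sketch: derive an alert threshold for daily failed sign-ins
# from historical counts (mean + 3 standard deviations). The 3-sigma choice
# is an illustrative assumption; tune it to your environment.
from statistics import mean, stdev

def baseline_threshold(daily_failures: list[int], sigmas: float = 3.0) -> float:
    """Alert threshold: historical mean plus `sigmas` standard deviations."""
    return mean(daily_failures) + sigmas * stdev(daily_failures)

def exceeds_baseline(today: int, history: list[int]) -> bool:
    """True when today's failure count is above the derived threshold."""
    return today > baseline_threshold(history)
```

A query-based equivalent in your SIEM (for example, a Microsoft Sentinel analytics rule over `SigninLogs`) would typically perform the same aggregation server-side.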
-Once you have scoped what normal is for the types of accounts in your environment, consider the following to help determine which scenarios you want to monitor for and alert on, and to fine-tune your alerting.
+After you scope what normal is for the accounts in your environment, consider the following list to help determine scenarios you want to monitor and alert on, and to fine-tune your alerting.
* Do you need to monitor and alert if Identity Protection is configured?
Configure Identity Protection to help ensure protection is in place that supports your security baseline policies. For example, blocking users if risk = high. This risk level indicates with a high degree of confidence that a user account is compromised. For more information on setting up sign-in risk policies and user risk policies, visit [Identity Protection policies](../identity-protection/concept-identity-protection-policies.md). For more information on setting up Conditional Access, visit [Conditional Access: Sign-in risk-based Conditional Access](../conditional-access/howto-conditional-access-policy-risk.md).
-The following are listed in order of importance based on the impact and severity of the entries.
+The following are listed in order of importance based on the effect and severity of the entries.
### Monitoring external user sign-ins

| What to monitor| Risk Level| Where| Filter/sub-filter| Notes |
| - |- |- |- |- |
-| Users authenticating to other Azure AD tenants.| Low| Azure AD Sign-ins log| Status = success<br>Resource tenantID != Home Tenant ID| Detects when a user has sucessfully authenticated to another Azure AD tenant with an identity in your organization's tenant.<br>Alert if Resource TenantID is not equal to Home Tenant ID |
-|User state changed from Guest to Member|Medium|Azure AD Audit logs|Activity: Update user<br>Category: UserManagement<br>UserType changed from Guest to Member|Monitor and alert on change of user type from Guest to Member. Was this expected?
-|Guest users invited to tenant by non-approved inviters|Medium|Azure AD Audit logs|Activity: Invite external user<br>Category: UserManagement<br>Initiated by (actor): User Principal Name|Monitor and alert on non-approved actors inviting external users.
+| Users authenticating to other Azure AD tenants.| Low| Azure AD Sign-ins log| Status = success<br>Resource tenantID != Home Tenant ID| Detects when a user has successfully authenticated to another Azure AD tenant with an identity in your organization's tenant.<br>Alert if Resource TenantID isn't equal to Home Tenant ID <br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Hunting%20Queries/AuditLogs/UsersAuthenticatingtoOtherAzureADTenants.yaml) |
+|User state changed from Guest to Member|Medium|Azure AD Audit logs|Activity: Update user<br>Category: UserManagement<br>UserType changed from Guest to Member|Monitor and alert on change of user type from Guest to Member. Was this expected?<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/AuditLogs/UserStatechangedfromGuesttoMember.yaml)
+|Guest users invited to tenant by non-approved inviters|Medium|Azure AD Audit logs|Activity: Invite external user<br>Category: UserManagement<br>Initiated by (actor): User Principal Name|Monitor and alert on non-approved actors inviting external users.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/AuditLogs/GuestUsersInvitedtoTenantbyNewInviters.yaml)
+
+### Monitoring for failed unusual sign-ins
+
+| What to monitor| Risk Level| Where| Filter/sub-filter| Notes |
+| - |- |- |- |- |
-| Failed sign-in attempts.| Medium - if Isolated Incident<br>High - if a number of accounts are experiencing the same pattern or a VIP.| Azure AD Sign-ins log| Status = failed<br>-and-<br>Sign-in error code 50126 - <br>Error validating credentials due to invalid username or password.| Define a baseline threshold, and then monitor and adjust to suite your organizational behaviors and limit false alerts from being generated.<br>[Azure Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Hunting%20Queries/SigninLogs/SpikeInFailedSignInAttempts.yaml) |
-| Smart lock-out events.| Medium - if Isolated Incident<br>High - if a number of accounts are experiencing the same pattern or a VIP.| Azure AD Sign-ins log| Status = failed<br>-and-<br>Sign-in error code = 50053 ΓÇô IdsLocked| Define a baseline threshold, and then monitor and adjust to suite your organizational behaviors and limit false alerts from being generated.<br>[Azure Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Hunting%20Queries/SigninLogs/SmartLockouts.yaml) |
-| Interrupts| Medium - if Isolated Incident<br>High - if a number of accounts are experiencing the same pattern or a VIP.| Azure AD Sign-ins log| 500121, Authentication failed during strong authentication request. <br>-or-<br>50097, Device authentication is required or 50074, Strong Authentication is required. <br>-or-<br>50155, DeviceAuthenticationFailed<br>-or-<br>50158, ExternalSecurityChallenge - External security challenge was not satisfied<br>-or-<br>53003 and Failure reason = blocked by CA| Monitor and alert on interrupts.<br>Define a baseline threshold, and then monitor and adjust to suite your organizational behaviors and limit false alerts from being generated.<br>[Azure Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Hunting%20Queries/MultipleDataSources/AADPrivilegedAccountsFailedMFA.yaml) |
+| Failed sign-in attempts.| Medium - if isolated incident<br>High - if many accounts are experiencing the same pattern or a VIP.| Azure AD Sign-ins log| Status = failed<br>-and-<br>Sign-in error code 50126 - <br>Error validating credentials due to invalid username or password.| Define a baseline threshold, and then monitor and adjust to suit your organizational behaviors and limit false alerts from being generated.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Hunting%20Queries/SigninLogs/SpikeInFailedSignInAttempts.yaml) |
+| Smart lockout events.| Medium - if isolated incident<br>High - if many accounts are experiencing the same pattern or a VIP.| Azure AD Sign-ins log| Status = failed<br>-and-<br>Sign-in error code = 50053 – IdsLocked| Define a baseline threshold, and then monitor and adjust to suit your organizational behaviors and limit false alerts from being generated.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Hunting%20Queries/SigninLogs/SmartLockouts.yaml) |
+| Interrupts| Medium - if isolated incident<br>High - if many accounts are experiencing the same pattern or a VIP.| Azure AD Sign-ins log| 500121, Authentication failed during strong authentication request. <br>-or-<br>50097, Device authentication is required or 50074, Strong Authentication is required. <br>-or-<br>50155, DeviceAuthenticationFailed<br>-or-<br>50158, ExternalSecurityChallenge - External security challenge wasn't satisfied<br>-or-<br>53003 and Failure reason = blocked by CA| Monitor and alert on interrupts.<br>Define a baseline threshold, and then monitor and adjust to suit your organizational behaviors and limit false alerts from being generated.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Hunting%20Queries/MultipleDataSources/AADPrivilegedAccountsFailedMFA.yaml) |
-
-The following are listed in order of importance based on the impact and severity of the entries.
+The following are listed in order of importance based on the effect and severity of the entries.
| What to monitor| Risk Level| Where| Filter/sub-filter| Notes |
| - |- |- |- |- |
-| Multi-factor authentication (MFA) fraud alerts.| High| Azure AD Sign-ins log| Status = failed<br>-and-<br>Details = MFA Denied<br>| Monitor and alert on any entry.<br>[Azure Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/SigninLogs/MFARejectedbyUser.yaml) |
-| Failed authentications from countries you do not operate out of.| Medium| Azure AD Sign-ins log| Location = \<unapproved location\>| Monitor and alert on any entries. |
-| Failed authentications for legacy protocols or protocols that are not used .| Medium| Azure AD Sign-ins log| Status = failure<br>-and-<br>Client app = Other Clients, POP, IMAP, MAPI, SMTP, ActiveSync| Monitor and alert on any entries.<br>[Azure Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/9bd30c2d4f6a2de17956cd11536a83adcbfc1757/Hunting%20Queries/SigninLogs/LegacyAuthAttempt.yaml) |
-| Failures blocked by CA.| Medium| Azure AD Sign-ins log| Error code = 53003 <br>-and-<br>Failure reason = blocked by CA| Monitor and alert on any entries.<br>[Azure Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/SigninLogs/UserAccounts-CABlockedSigninSpikes.yaml) |
-| Increased failed authentications of any type.| Medium| Azure AD Sign-ins log| Capture increases in failures across the board. I.e., total failures for today is >10 % on the same day the previous week.| If you don't have a set threshold, monitor and alert if failures increase by 10% or greater.<br>[Azure Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Hunting%20Queries/SigninLogs/SpikeInFailedSignInAttempts.yaml) |
-| Authentication occurring at times and days of the week when countries do not conduct normal business operations.| Low| Azure AD Sign-ins log| Capture interactive authentication occurring outside of normal operating days\time. <br>Status = success<br>-and-<br>Location = \<location\><br>-and-<br>Day\Time = \<not normal working hours\>| Monitor and alert on any entries.<br>[Azure Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Hunting%20Queries/MultipleDataSources/AnomolousSignInsBasedonTime.yaml) |
-| Account disabled/blocked for sign-ins| Low| Azure AD Sign-ins log| Status = Failure<br>-and-<br>error code = 50057, The user account is disabled.| This could indicate someone is trying to gain access to an account once they have left an organization. Although the account is blocked it is still important to log and alert on this activity.<br>[Azure Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Hunting%20Queries/SigninLogs/UserAccounts-BlockedAccounts.yaml) |
-
+| Multi-factor authentication (MFA) fraud alerts.| High| Azure AD Sign-ins log| Status = failed<br>-and-<br>Details = MFA Denied<br>| Monitor and alert on any entry.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/SigninLogs/MFARejectedbyUser.yaml) |
+| Failed authentications from countries you don't operate out of.| Medium| Azure AD Sign-ins log| Location = \<unapproved location\>| Monitor and alert on any entries. <br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/SigninLogs/AuthenticationAttemptfromNewCountry.yaml) |
+| Failed authentications for legacy protocols or protocols that aren't used.| Medium| Azure AD Sign-ins log| Status = failure<br>-and-<br>Client app = Other Clients, POP, IMAP, MAPI, SMTP, ActiveSync| Monitor and alert on any entries.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/9bd30c2d4f6a2de17956cd11536a83adcbfc1757/Hunting%20Queries/SigninLogs/LegacyAuthAttempt.yaml) |
+| Failures blocked by CA.| Medium| Azure AD Sign-ins log| Error code = 53003 <br>-and-<br>Failure reason = blocked by CA| Monitor and alert on any entries.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/SigninLogs/UserAccounts-CABlockedSigninSpikes.yaml) |
+| Increased failed authentications of any type.| Medium| Azure AD Sign-ins log| Capture increases in failures across the board. That is, today's failure total is more than 10% higher than on the same day the previous week.| If you don't have a set threshold, monitor and alert if failures increase by 10% or greater.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Hunting%20Queries/SigninLogs/SpikeInFailedSignInAttempts.yaml) |
+| Authentication occurring at times and days of the week when countries don't conduct normal business operations.| Low| Azure AD Sign-ins log| Capture interactive authentication occurring outside of normal operating days\time. <br>Status = success<br>-and-<br>Location = \<location\><br>-and-<br>Day\Time = \<not normal working hours\>| Monitor and alert on any entries.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Hunting%20Queries/MultipleDataSources/AnomolousSignInsBasedonTime.yaml) |
+| Account disabled/blocked for sign-ins| Low| Azure AD Sign-ins log| Status = Failure<br>-and-<br>error code = 50057, The user account is disabled.| This could indicate someone is trying to gain access to an account once they have left an organization. Although the account is blocked, it is important to log and alert on this activity.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Hunting%20Queries/SigninLogs/UserAccounts-BlockedAccounts.yaml) |
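The ">10% versus the same day the previous week" rule used in the table above can be stated directly in code. A minimal sketch; the function and parameter names are illustrative assumptions:

```python
# Hypothetical sketch of the week-over-week spike rule: alert when today's
# count exceeds the same weekday's count last week by more than `pct` percent.
def spike_detected(today_count: int, same_day_last_week: int, pct: float = 10.0) -> bool:
    """True when the week-over-week increase is greater than pct percent."""
    if same_day_last_week == 0:
        # Any activity where there was none last week is worth a look.
        return today_count > 0
    increase = (today_count - same_day_last_week) / same_day_last_week * 100
    return increase > pct
```

Comparing against the same weekday avoids false alerts from normal weekday/weekend variation.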
### Monitoring for successful unusual sign-ins
- | What to monitor| Risk Level| Where| Filter/sub-filter| Notes |
+| What to monitor| Risk Level| Where| Filter/sub-filter| Notes |
| - |- |- |- |- |
-| Authentications of privileged accounts outside of expected controls.| High| Azure AD Sign-ins log| Status = success<br>-and-<br>UserPricipalName = \<Admin account\><br>-and-<br>Location = \<unapproved location\><br>-and-<br>IP Address = \<unapproved IP\><br>Device Info= \<unapproved Browser, Operating System\><br>| Monitor and alert on successful authentication for privileged accounts outside of expected controls. Three common controls are listed. |
-| When only single-factor authentication is required.| Low| Azure AD Sign-ins log| Status = success<br>Authentication requirement = Single-factor authentication| Monitor this periodically and ensure this is the expected behavior.<br>[Azure Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Hunting%20Queries/SigninLogs/UserAccounts-NewSingleFactorAuth.yaml) |
+| Authentications of privileged accounts outside of expected controls.| High| Azure AD Sign-ins log| Status = success<br>-and-<br>UserPrincipalName = \<Admin account\><br>-and-<br>Location = \<unapproved location\><br>-and-<br>IP Address = \<unapproved IP\><br>Device Info= \<unapproved Browser, Operating System\><br>| Monitor and alert on successful authentication for privileged accounts outside of expected controls. Three common controls are listed. <br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/SigninLogs/AuthenticationsofPrivilegedAccountsOutsideofExpectedControls.yaml)<br>[Sigma rules template](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) |
+| When only single-factor authentication is required.| Low| Azure AD Sign-ins log| Status = success<br>Authentication requirement = Single-factor authentication| Monitor periodically and ensure expected behavior.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Hunting%20Queries/SigninLogs/UserAccounts-NewSingleFactorAuth.yaml)<br><br>[Sigma rules template](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) |
| Discover privileged accounts not registered for MFA.| High| Microsoft Graph API| Query for IsMFARegistered eq false for administrator accounts. <br>[List credentialUserRegistrationDetails - Microsoft Graph beta](/graph/api/reportroot-list-credentialuserregistrationdetails?view=graph-rest-beta&preserve-view=true&tabs=http)| Audit and investigate to determine if intentional or an oversight. |
-| Successful authentications from countries your organization does not operate out of.| Medium| Azure AD Sign-ins log| Status = success<br>Location = \<unapproved country\>| Monitor and alert on any entries not equal to the city names you provide. |
-| Successful authentication, session blocked by CA.| Medium| Azure AD Sign-ins log| Status = success<br>-and-<br>error code = 53003 ΓÇô Failure reason, blocked by CA| Monitor and investigate when authentication is successful, but session is blocked by CA.<br>[Azure Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/SigninLogs/UserAccounts-CABlockedSigninSpikes.yaml) |
-| Successful authentication after you have disabled legacy authentication.| Medium| Azure AD Sign-ins log| status = success <br>-and-<br>Client app = Other Clients, POP, IMAP, MAPI, SMTP, ActiveSync| If your organization has disabled legacy authentication, monitor and alert when successful legacy authentication has taken place.<br>[Azure Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/9bd30c2d4f6a2de17956cd11536a83adcbfc1757/Hunting%20Queries/SigninLogs/LegacyAuthAttempt.yaml) |
+| Successful authentications from countries your organization doesn't operate out of.| Medium| Azure AD Sign-ins log| Status = success<br>Location = \<unapproved country\>| Monitor and alert on any entries not equal to the city names you provide.<br>[Sigma rules template](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) |
+| Successful authentication, session blocked by CA.| Medium| Azure AD Sign-ins log| Status = success<br>-and-<br>error code = 53003 – Failure reason, blocked by CA| Monitor and investigate when authentication is successful, but session is blocked by CA.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/SigninLogs/UserAccounts-CABlockedSigninSpikes.yaml)<br><br>[Sigma rules template](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) |
+| Successful authentication after you have disabled legacy authentication.| Medium| Azure AD Sign-ins log| status = success <br>-and-<br>Client app = Other Clients, POP, IMAP, MAPI, SMTP, ActiveSync| If your organization has disabled legacy authentication, monitor and alert when successful legacy authentication has taken place.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/9bd30c2d4f6a2de17956cd11536a83adcbfc1757/Hunting%20Queries/SigninLogs/LegacyAuthAttempt.yaml)<br><br>[Sigma rules template](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) |
-
-On periodic basis, we recommend you review authentications to medium business impact (MBI) and high business impact (HBI) applications where only single-factor authentication is required. For each, you want to determine if single-factor authentication was expected or not. Additionally, review for successful authentication increases or at unexpected times based on the location.
+We recommend you periodically review authentications to medium business impact (MBI) and high business impact (HBI) applications where only single-factor authentication is required. For each, determine whether single-factor authentication was expected. Also review for increases in successful authentication, or successful authentication at unexpected times, based on location.
| What to monitor| Risk Level| Where| Filter/sub-filter| Notes |
| - | - |- |- |- |
-| Authentications to MBI and HBI application using single-factor authentication.| Low| Azure AD Sign-ins log| status = success<br>-and-<br>Application ID = \<HBI app\> <br>-and-<br>Authentication requirement = single-factor authentication.| Review and validate this configuration is intentional.<br>[Azure Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Hunting%20Queries/SigninLogs/UserAccounts-NewSingleFactorAuth.yaml) |
-| Authentications at days and times of the week or year that countries do not conduct normal business operations.| Low| Azure AD Sign-ins log| Capture interactive authentication occurring outside of normal operating days\time. <br>Status = success<br>Location = \<location\><br>Date\Time = \<not normal working hours\>| Monitor and alert on authentications days and times of the week or year that countries do not conduct normal business operations.<br>[Azure Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Hunting%20Queries/SigninLogs/UserAccounts-UnusualLogonTimes.yaml) |
-| Measurable increase of successful sign ins.| Low| Azure AD Sign-ins log| Capture increases in successful authentication across the board. I.e., total successes for today is >10 % on the same day the previous week.| If you don't have a set threshold, monitor and alert if successful authentications increase by 10% or greater.<br>[Azure Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Hunting%20Queries/SigninLogs/UserAccountsMeasurableincreaseofsuccessfulsignins.yaml) |
+| Authentications to MBI and HBI applications using single-factor authentication.| Low| Azure AD Sign-ins log| status = success<br>-and-<br>Application ID = \<HBI app\> <br>-and-<br>Authentication requirement = single-factor authentication.| Review and validate that this configuration is intentional.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Hunting%20Queries/SigninLogs/UserAccounts-NewSingleFactorAuth.yaml)<br><br>[Sigma rules template](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) |
+| Authentications on days and times of the week or year when countries do not conduct normal business operations.| Low| Azure AD Sign-ins log| Capture interactive authentication occurring outside of normal operating days\time. <br>Status = success<br>Location = \<location\><br>Date\Time = \<not normal working hours\>| Monitor and alert on authentications on days and times of the week or year when countries do not conduct normal business operations.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Hunting%20Queries/SigninLogs/UserAccounts-UnusualLogonTimes.yaml)<br><br>[Sigma rules template](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) |
+| Measurable increase of successful sign ins.| Low| Azure AD Sign-ins log| Capture increases in successful authentication across the board. That is, total successes for today are more than 10% higher than on the same day the previous week.| If you don't have a set threshold, monitor and alert if successful authentications increase by 10% or greater.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Hunting%20Queries/SigninLogs/UserAccountsMeasurableincreaseofsuccessfulsignins.yaml)<br><br>[Sigma rules template](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) |
## Next steps
See these security operations guide articles:
[Azure AD security operations overview](security-operations-introduction.md)
-[Security operations for user accounts](security-operations-user-accounts.md)
+[Security operations for consumer accounts](security-operations-consumer-accounts.md)
[Security operations for privileged accounts](security-operations-privileged-accounts.md)
[Security operations for applications](security-operations-applications.md)
[Security operations for devices](security-operations-devices.md)
-
+ [Security operations for infrastructure](security-operations-infrastructure.md)
active-directory Check Status Workflow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/check-status-workflow.md
To get further information than just the runs summary for a workflow, you're als
To view a status list of users processed by a workflow, which are UserProcessingResults, you'd make the following API call:
-```http
-GET https://graph.microsoft.com/beta/identityGovernance/lifecycleWorkflows/workflows/<workflowId>/userProcessingResults
-```
-
-By default **userProcessingResults** returns only information from the last 7 days. To get information as far back as 30 days, you would run the following API call:
-
-```http
-GET https://graph.microsoft.com/beta/identityGovernance/lifecycleWorkflows/workflows/<workflowId>/userProcessingResults?$filter=<Date range for processing results>
-```
-
-by default **userProcessingResults** returns only information from the last 7 days. To filter information as far back as 30 days, you would run the following API call:
-
-```http
-GET https://graph.microsoft.com/beta/identityGovernance/lifecycleWorkflows/workflows/<id>/userProcessingResults?$filter=<Date range for processing results>
-```
-
-An example of a call to get **userProcessingResults** for a month would be as follows:
-
-```http
-GET https://graph.microsoft.com/beta/identityGovernance/lifecycleWorkflows/workflows/<workflowId>/userProcessingResults?$filter=< startedDateTime ge 2022-05-23T00:00:00Z and startedDateTime le 2022-06-22T00:00:00Z
-```
+To view a list of user processing results via the Microsoft Graph API, see: [List userProcessingResults](/graph/api/identitygovernance-workflow-list-userprocessingresults).
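A minimal sketch of the beta call shape, with the workflow ID as a placeholder:

```http
GET https://graph.microsoft.com/beta/identityGovernance/lifecycleWorkflows/workflows/<workflowId>/userProcessingResults
```

By default, **userProcessingResults** returns only information from the last 7 days; add a `$filter` on `startedDateTime` to reach back as far as 30 days.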
### User processing results using Microsoft Graph
-When multiple user events are processed by a workflow, running the **userProcessingResults** may give incomprehensible information. To get a summary of information such as total users and tasks, and failed users and tasks, Lifecycle Workflows provides a call to get count totals.
-
-To view a summary in count form, you would run the following API call:
-```http
-GET https://graph.microsoft.com/beta/identityGovernance/lifecycleWorkflows/workflows/<workflowId>/userProcessingResults/summary(<Date Range>)
-```
-
-An example to get the summary between May 1, and May 30, you would run the following call:
+To view a summary of user processing results via the Microsoft Graph API, see: [userProcessingResult: summary](/graph/api/identitygovernance-userprocessingresult-summary).
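When many user events are processed, the summary call returns count totals (such as total and failed users and tasks) that are easier to read than the full per-user results. A minimal sketch of the call, which accepts a date range (placeholder ID, example dates):

```http
GET https://graph.microsoft.com/beta/identityGovernance/lifecycleWorkflows/workflows/<workflowId>/userProcessingResults/summary(startDateTime=2022-05-01T00:00:00Z,endDateTime=2022-05-30T00:00:00Z)
```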
-```http
-GET https://graph.microsoft.com/beta/identityGovernance/lifecycleWorkflows/workflows/<workflowId>/userProcessingResults/summary(startDateTime=2022-05-01T00:00:00Z,endDateTime=2022-05-30T00:00:00Z)
-```
-### List task processing results of a given user processing result
-
-```http
-GET https://graph.microsoft.com/beta/identityGovernance/lifecycleWorkflows/workflows/<workflowId>/userProcessingResults/<userProcessingResultId>/taskProcessingResults/
-```
## Run workflow history via Microsoft Graph
### List runs using Microsoft Graph
-With Microsoft Graph, you're able to get full details of workflow and user processing run information.
-
-To view a list of runs, you'd make the following API call:
+To view runs of a workflow via the Microsoft Graph API, see: [runs](/graph/api/resources/identitygovernance-run).
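A minimal sketch of the call shape (the workflow ID is a placeholder):

```http
GET https://graph.microsoft.com/beta/identityGovernance/lifecycleWorkflows/workflows/<workflowId>/runs
```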
-```http
-GET https://graph.microsoft.com/beta/identityGovernance/lifecycleWorkflows/workflows/<workflowId>/runs
-```
### Get a summary of runs using Microsoft Graph
-To get a summary of runs for a workflow, which includes detailed information for counts of failed runs and tasks, along with successful runs and tasks for a time range, you'd make the following API call:
-
-```http
-GET https://graph.microsoft.com/beta/identityGovernance/lifecycleWorkflows/workflows/<workflowId>/runs/summary(startDateTime=<time>,endDateTime=<time>)
-```
-An example to get a summary of runs of a workflow through the time interval of May 2022 would be as follows:
-
-```http
-GET https://graph.microsoft.com/beta/identityGovernance/lifecycleWorkflows/workflows/<workflowId>/runs/summary(startDateTime=2022-05-01T00:00:00Z,endDateTime=202205-31T00:00:00Z)
-```
+To view a run summary via the Microsoft Graph API, see: [run summary of a lifecycle workflow](/graph/api/identitygovernance-run-summary).
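The run summary includes counts of failed and successful runs and tasks for a time range. A minimal sketch of the call (placeholder ID, example interval of May 2022):

```http
GET https://graph.microsoft.com/beta/identityGovernance/lifecycleWorkflows/workflows/<workflowId>/runs/summary(startDateTime=2022-05-01T00:00:00Z,endDateTime=2022-05-31T00:00:00Z)
```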
### List user and task processing results of a given run using Microsoft Graph
-With Lifecycle Workflows, you're able to check the status of each user and task who had a workflow processed for them as part of a run.
-
-
-You're also able to use **userProcessingResults** with the run call to get users processed for a run by making the following API call:
-
-```http
-GET https://graph.microsoft.com/beta/identityGovernance/lifecycleWorkflows/workflows/<workflowId>/runs/<runId>/userProcessingResults
-```
+To get the user processing result for a run of a lifecycle workflow via the Microsoft Graph API, see: [Get userProcessingResult (for a run of a lifecycle workflow)](/graph/api/identitygovernance-userprocessingresult-get).
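A minimal sketch of the call shape (workflow and run IDs are placeholders):

```http
GET https://graph.microsoft.com/beta/identityGovernance/lifecycleWorkflows/workflows/<workflowId>/runs/<runId>/userProcessingResults
```

This call also returns a **userProcessingResults ID** value, which can be used to retrieve task processing information.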
-This API call will also return a **userProcessingResults ID** value, which can be used to retrieve task processing information in the following call:
+To list task processing results for a user processing result via the Microsoft Graph API, see: [List taskProcessingResults (for a userProcessingResult)](/graph/api/identitygovernance-userprocessingresult-list-taskprocessingresults).
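A minimal sketch of the call shape, using a **userProcessingResultId** obtained from the run's user processing results (all IDs are placeholders):

```http
GET https://graph.microsoft.com/beta/identityGovernance/lifecycleWorkflows/workflows/<workflowId>/runs/<runId>/userProcessingResults/<userProcessingResultId>/taskProcessingResults
```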
-```http
-GET https://graph.microsoft.com/beta/identityGovernance/lifecycleWorkflows/workflows/<workflowId> /runs/<runId>/userProcessingResults/<userProcessingResultId>/taskProcessingResults
-```
> [!NOTE]
> A workflow must have activity in the past 7 days to get **userProcessingResults ID**. If there has not been any activity in that time-frame, the **userProcessingResults** call will not return a value.
active-directory Create Lifecycle Workflow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/create-lifecycle-workflow.md
If you are using the Azure portal to create a workflow, you can customize existi
## Create a workflow using Microsoft Graph
-Workflows can be created using Microsoft Graph API. Creating a workflow using the Graph API allows you to automatically set it to enabled. Setting it to enabled is done using the `isEnabled` parameter.
-
-The table below shows the parameters that must be defined during workflow creation:
-
-|Parameter |Description |
-|||
-|category | A string that identifies the category of the workflow. String is "joiner", "mover", or "leaver. Category of tasks within a workflow must also contain the category of the workflow to run. For full task definitions, see: [Lifecycle workflow tasks and definitions](lifecycle-workflow-tasks.md) |
-|displayName | A unique string that identifies the workflow. |
-|description | A string that describes the purpose of the workflow for administrative use. (Optional) |
-|isEnabled | A boolean value that denotes whether the workflow is set to run or not. If set to ΓÇ£true" then the workflow will run. |
-|IsSchedulingEnabled | A Boolean value that denotes whether scheduling is enabled or not. Unlike isEnbaled, a workflow can still be run on demand if this value is set to false. |
-|executionConditions | An argument that contains: A time-based attribute and an integer parameter defining when a workflow will run between -60 and a scope attribute defining who the workflow runs for. |
-|tasks | An argument in a workflow that has a unique displayName and a description. It defines the specific tasks to be executed in the workflow. The specified task is outlined by the taskDefinitionID and its parameters. For a list of supported tasks, and their corresponding IDs, see [Lifecycle Workflow tasks and definitions](lifecycle-workflow-tasks.md). |
----
-To create a joiner workflow, in Microsoft Graph, use the following request and body:
-```http
-POST https://graph.microsoft.com/beta/identityGovernance/lifecycleWorkflows/workflows
-Content-type: application/json
-```
-
-```Request body
-{
- "category": "joiner",
- "displayName": "<Unique workflow name string>",
- "description": "<Unique workflow description>",
- "isEnabled":true,
- "tasks":[
- {
- "category": "joiner",
- "isEnabled": true,
- "taskTemplateId": "<Unique Task template>",
- "displayName": "<Unique task name>",
- "description": "<Task template description>",
- "arguments": "<task arguments>"
- }
- ],
- "executionConditions": {
- "@odata.type" : "microsoft.graph.identityGovernance.scopeAndTriggerBasedCondition",
- "trigger": {
- "@odata.type" : "microsoft.graph.identityGovernance.timeBasedAttributeTrigger",
- "timeBasedAttribute":"<time-based trigger argument>",
- "arguments": -7
- },
- "scope": {
- "@odata.type" : "microsoft.graph.identityGovernance.ruleBasedScope",
- "rule": "employeeType eq '<Employee type attribute>' AND department -eq '<department attribute>'"
- }
- }
-}
-
-> [!NOTE]
-> time based trigger arguments can be from -60 to 60. The negative value denotes **Before** a time based argument, while a positive value denotes **After**. For example the -7 in the workflow example above denotes the workflow will run 1 week before the time-based argument happens.
-
-```
-
-To change this workflow from joiner to leaver, replace the category parameters to "leaver". To get a list of the task definitions that can be added to your workflow run the following call:
-
-```http
-GET https://graph.microsoft.com/beta/identityGovernance/lifecycleWorkflows/taskDefinitions
-```
-
-The response to the code will look like:
-
-```Response body
-{
- "@odata.context": "https://graph.microsoft-ppe.com/testppebetalcwpp4/$metadata#identityGovernance/lifecycleWorkflows/taskDefinitions",
- "@odata.count": 13,
- "value": [
- {
- "category": "joiner,leaver",
- "description": "Add user to a group",
- "displayName": "Add User To Group",
- "id": "22085229-5809-45e8-97fd-270d28d66910",
- "version": 1,
- "parameters": [
- {
- "name": "groupID",
- "values": [],
- "valueType": "string"
- }
- ]
- },
- {
- "category": "joiner,leaver",
- "description": "Disable user account in the directory",
- "displayName": "Disable User Account",
- "id": "1dfdfcc7-52fa-4c2e-bf3a-e3919cc12950",
- "version": 1,
- "parameters": []
- },
- {
- "category": "joiner,leaver",
- "description": "Enable user account in the directory",
- "displayName": "Enable User Account",
- "id": "6fc52c9d-398b-4305-9763-15f42c1676fc",
- "version": 1,
- "parameters": []
- },
- {
- "category": "joiner,leaver",
- "description": "Run a custom task extension",
- "displayName": "run a Custom Task Extension",
- "id": "4262b724-8dba-4fad-afc3-43fcbb497a0e",
- "version": 1,
- "parameters":
- {
- "name": "customtaskextensionID",
- "values": [],
- "valueType": "string"
- }
- ]
- },
- {
- "category": "joiner,leaver",
- "description": "Remove user from membership of selected Azure AD groups",
- "displayName": "Remove user from selected groups",
- "id": "1953a66c-751c-45e5-8bfe-01462c70da3c",
- "version": 1,
- "parameters": [
- {
- "name": "groupID",
- "values": [],
- "valueType": "string"
- }
- ]
- },
- {
- "category": "joiner",
- "description": "Generate Temporary Access Password and send via email to user's manager",
- "displayName": "Generate TAP And Send Email",
- "id": "1b555e50-7f65-41d5-b514-5894a026d10d",
- "version": 1,
- "parameters": [
- {
- "name": "tapLifetimeMinutes",
- "values": [],
- "valueType": "string"
- },
- {
- "name": "tapIsUsableOnce",
- "values": [
- "true",
- "false"
- ],
- "valueType": "enum"
- }
- ]
- },
- {
- "category": "joiner",
- "description": "Send welcome email to new hire",
- "displayName": "Send Welcome Email",
- "id": "70b29d51-b59a-4773-9280-8841dfd3f2ea",
- "version": 1,
- "parameters": []
- },
- {
- "category": "joiner,leaver",
- "description": "Add user to a team",
- "displayName": "Add User To Team",
- "id": "e440ed8d-25a1-4618-84ce-091ed5be5594",
- "version": 1,
- "parameters": [
- {
- "name": "teamID",
- "values": [],
- "valueType": "string"
- }
- ]
- },
- {
- "category": "leaver",
- "description": "Delete user account in Azure AD",
- "displayName": "Delete User Account",
- "id": "8d18588d-9ad3-4c0f-99d0-ec215f0e3dff",
- "version": 1,
- "parameters": []
- },
- {
- "category": "joiner,leaver",
- "description": "Remove user from membership of selected Teams",
- "displayName": "Remove user from selected Teams",
- "id": "06aa7acb-01af-4824-8899-b14e5ed788d6",
- "version": 1,
- "parameters": [
- {
- "name": "teamID",
- "values": [],
- "valueType": "string"
- }
- ]
- },
- {
- "category": "leaver",
- "description": "Remove user from all Azure AD groups memberships",
- "displayName": "Remove user from all groups",
- "id": "b3a31406-2a15-4c9a-b25b-a658fa5f07fc",
- "version": 1,
- "parameters": []
- },
- {
- "category": "leaver",
- "description": "Remove user from all Teams memberships",
- "displayName": "Remove user from all Teams",
- "id": "81f7b200-2816-4b3b-8c5d-dc556f07b024",
- "version": 1,
- "parameters": []
- },
- {
- "category": "leaver",
- "description": "Remove all licenses assigned to the user",
- "displayName": "Remove all licenses for user",
- "id": "8fa97d28-3e52-4985-b3a9-a1126f9b8b4e",
- "version": 1,
- "parameters": []
- }
- ]
-}
-
-```
-For further details on task definitions and their parameters, see [Lifecycle Workflow Tasks](lifecycle-workflow-tasks.md).
-
+To create a workflow using the Microsoft Graph API, see: [Create workflow (lifecycle workflow)](/graph/api/identitygovernance-lifecycleworkflowscontainer-post-workflows).
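A minimal sketch of the create request, with placeholder values. The `isEnabled` parameter sets the workflow to enabled at creation, and time-based trigger arguments range from -60 to 60 days (a negative value runs the workflow before the time-based attribute, a positive value after; the `-7` below means one week before):

```http
POST https://graph.microsoft.com/beta/identityGovernance/lifecycleWorkflows/workflows
Content-type: application/json

{
    "category": "joiner",
    "displayName": "<Unique workflow name string>",
    "description": "<Unique workflow description>",
    "isEnabled": true,
    "tasks": [
        {
            "category": "joiner",
            "isEnabled": true,
            "taskTemplateId": "<Task template ID>",
            "displayName": "<Unique task name>",
            "description": "<Task template description>",
            "arguments": "<task arguments>"
        }
    ],
    "executionConditions": {
        "@odata.type": "microsoft.graph.identityGovernance.scopeAndTriggerBasedCondition",
        "trigger": {
            "@odata.type": "microsoft.graph.identityGovernance.timeBasedAttributeTrigger",
            "timeBasedAttribute": "<time-based trigger argument>",
            "arguments": -7
        },
        "scope": {
            "@odata.type": "microsoft.graph.identityGovernance.ruleBasedScope",
            "rule": "employeeType eq '<Employee type attribute>'"
        }
    }
}
```

To create a leaver workflow instead, change the category parameters to "leaver".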
## Next steps
-- [Create workflow (lifecycle workflow)](/graph/api/identitygovernance-lifecycleworkflowscontainer-post-workflows?view=graph-rest-beta)
- [Manage a workflow's properties](manage-workflow-properties.md)
- [Manage Workflow Versions](manage-workflow-tasks.md)
active-directory Delete Lifecycle Workflow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/delete-lifecycle-workflow.md
After deleting workflows, you can view them on the **Deleted Workflows (Preview)
## Delete a workflow using Microsoft Graph
- You're also able to delete, view deleted, and restore deleted Lifecycle workflows using Microsoft Graph.
+
+To delete a workflow via the Microsoft Graph API, see: [Delete workflow (lifecycle workflow)](/graph/api/identitygovernance-workflow-delete?view=graph-rest-beta).
Workflows can be deleted by running the following call:
```http
DELETE https://graph.microsoft.com/beta/identityGovernance/lifecycleWorkflows/workflows/<id>
```
## View deleted workflows using Microsoft Graph
-You can view a list of deleted workflows by running the following call:
-```http
-GET https://graph.microsoft.com/beta/identityGovernance/lifecycleWorkflows/deletedItems/workflows
-```
+
+To view a list of deleted workflows via the Microsoft Graph API, see: [List deleted workflows](/graph/api/identitygovernance-lifecycleworkflowscontainer-list-deleteditems).
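A minimal sketch of the call shape:

```http
GET https://graph.microsoft.com/beta/identityGovernance/lifecycleWorkflows/deletedItems/workflows
```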
+ ## Permanently delete a workflow using Microsoft Graph
-Deleted workflows can be permanently deleted by running the following call:
-```http
-DELETE https://graph.microsoft.com/beta/identityGovernance/lifecycleWorkflows/deletedItems/workflows/<id>
-```
+
+To permanently delete a workflow via the Microsoft Graph API, see: [Permanently delete a deleted workflow](/graph/api/identitygovernance-deleteditemcontainer-delete).
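A minimal sketch of the call shape (the workflow ID is a placeholder):

```http
DELETE https://graph.microsoft.com/beta/identityGovernance/lifecycleWorkflows/deletedItems/workflows/<id>
```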
## Restore deleted workflows using Microsoft Graph
-Deleted workflows are available to be restored for 30 days before they're permanently deleted. To restore a deleted workflow, run the following API call:
-```http
-GET https://graph.microsoft.com/beta/identityGovernance/lifecycleWorkflows/deletedItems/workflows/<id>/restore
-```
+To restore a deleted workflow via the Microsoft Graph API, see: [Restore a deleted workflow](/graph/api/identitygovernance-workflow-restore).
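Deleted workflows remain restorable for 30 days before they're permanently deleted. A sketch of the restore call shape, shown here with POST, the usual verb for Graph actions; defer to the linked reference for the authoritative method:

```http
POST https://graph.microsoft.com/beta/identityGovernance/lifecycleWorkflows/deletedItems/workflows/<id>/restore
```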
> [!NOTE]
> Permanently deleted workflows are not able to be restored.

## Next steps
-- [Delete workflow (lifecycle workflow)](/graph/api/identitygovernance-workflow-delete?view=graph-rest-beta)
- [What are Lifecycle Workflows?](what-are-lifecycle-workflows.md)
- [Manage Lifecycle Workflow Versions](manage-workflow-tasks.md)
active-directory Identity Governance Applications Deploy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/identity-governance-applications-deploy.md
Azure AD, in conjunction with Azure Monitor, provides several reports to help yo
* An administrator, or a catalog owner, can [retrieve the list of users who have access package assignments](entitlement-management-access-package-assignments.md), via the Azure portal, Graph or PowerShell. * You can also send the audit logs to Azure Monitor and view a history of [changes to the access package](entitlement-management-logs-and-reporting.md#view-events-for-an-access-package), in the Azure portal, or via PowerShell.
-* You can view the last 30 days of sign ins to an application in the [sign ins report](../reports-monitoring/howto-find-activity-reports.md#sign-ins-report) in the Azure portal, or via [Graph](/graph/api/signin-list?view=graph-rest-1.0&tabs=http).
+* You can view the last 30 days of sign ins to an application in the [sign ins report](../reports-monitoring/howto-find-activity-reports.md#sign-ins-report) in the Azure portal, or via [Graph](/graph/api/signin-list?view=graph-rest-1.0&tabs=http&preserve-view=true).
* You can also send the [sign in logs to Azure Monitor](../reports-monitoring/concept-activity-logs-azure-monitor.md) to archive sign in activity for up to two years. ## Monitor to adjust entitlement management policies and access as needed
active-directory Manage Workflow Properties https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/manage-workflow-properties.md
To edit the properties of a workflow using the Azure portal, you'll do the follo
## Edit the properties of a workflow using Microsoft Graph
-To view the list of current workflows you'll run the following API call:
-
-```http
-GET https://graph.microsoft.com/beta/identityGovernance/lifecycleWorkflows/workflows/
-```
-
-Lifecycle workflows can have their basic information such as "displayName", "description", and "isEnabled" edited by running this patch call and body.
-
-```http
-PATCH https://graph.microsoft.com/beta/identityGovernance/lifecycleWorkflows/workflows/<id>
-Content-type: application/json
-
-{
- "displayName":"<Unique workflow name string>",
- "description":"<workflow description>",
- "isEnabled":<ΓÇ£trueΓÇ¥ or ΓÇ£falseΓÇ¥>,
-}
-
-```
+To update a workflow via the Microsoft Graph API, see: [Update workflow](/graph/api/identitygovernance-workflow-update).
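A minimal sketch of the update request; the basic properties that can be edited in place are `displayName`, `description`, and `isEnabled`:

```http
PATCH https://graph.microsoft.com/beta/identityGovernance/lifecycleWorkflows/workflows/<id>
Content-type: application/json

{
    "displayName": "<Unique workflow name string>",
    "description": "<workflow description>",
    "isEnabled": true
}
```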
active-directory Manage Workflow Tasks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/manage-workflow-tasks.md
To edit the execution conditions of a workflow using the Azure portal, you'll do
## Create a new version of an existing workflow using Microsoft Graph
-As stated above, creating a new version of a workflow is required to change any parameter that isn't "displayName", "description", or "isEnabled". Unlike in the Azure portal, to create a new version of a workflow using Microsoft Graph requires some additional steps. First, run the API call with the changes to the body of the workflow you want to update by doing the following:
--- Get the body of the workflow you want to create a new version of by running the API call:
- ```http
- GET https://graph.microsoft.com/beta/identityGovernance/LifecycleWorkflows/workflows/<workflow id>
- ```
-- Copy the body of the returned workflow excluding the **id**, **"odata.context**, and **tasks@odata.context** portions of the returned workflow body. -- Make the changes in tasks and execution conditions you want for the new version of the workflow.-- Run the following **createNewVersion** API call along with the updated body of the workflow. The workflow body is wrapped in a **Workflow:{}** block.
- ```http
- POST https://graph.microsoft.com/beta/identityGovernance/LifecycleWorkflows/workflows/<id>/createNewVersion
- Content-type: application/json
-
- {
- "workflow": {
- "displayName":"New version of a workflow",
- "description":"This is a new created version of a workflow",
- "isEnabled":"true",
- "tasks":[
- {
- "isEnabled":"true",
- "taskTemplateId":"70b29d51-b59a-4773-9280-8841dfd3f2ea",
- "displayName":"Send welcome email to new hire",
- "description":"Sends welcome email to a new hire",
- "executionSequence": 1,
- "arguments":[]
- },
- {
- "isEnabled":"true",
- "taskTemplateId":"22085229-5809-45e8-97fd-270d28d66910",
- "displayName":"Add user to group",
- "description":"Adds user to a group.",
- "executionSequence": 2,
- "arguments":[
- {
- "name":"groupID",
- "value":"<group id value>"
- }
- ]
- }
- ],
- "executionConditions": {
- "@odata.type": "microsoft.graph.identityGovernance.triggerAndScopeBasedConditions",
- "scope": {
- "@odata.type": "microsoft.graph.identityGovernance.ruleBasedSubjectSet",
- "rule": "(department eq 'sales')"
- },
- "trigger": {
- "@odata.type": "microsoft.graph.identityGovernance.timeBasedAttributeTrigger",
- "timeBasedAttribute": "employeeHireDate",
- "offsetInDays": -2
- }
- }
- }
- ```
+To create a new version of a workflow via the Microsoft Graph API, see: [workflow: createNewVersion](/graph/api/identitygovernance-workflow-createnewversion).
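A minimal sketch of the request; the updated workflow body is wrapped in a `workflow` block, and tasks run in the order they appear in the `tasks` array, so reordering the array reorders task execution:

```http
POST https://graph.microsoft.com/beta/identityGovernance/lifecycleWorkflows/workflows/<id>/createNewVersion
Content-type: application/json

{
    "workflow": {
        "displayName": "New version of a workflow",
        "description": "This is a newly created version of a workflow",
        "isEnabled": true,
        "tasks": [
            {
                "isEnabled": true,
                "taskTemplateId": "<Task template ID>",
                "displayName": "<Task name>",
                "description": "<Task description>",
                "executionSequence": 1,
                "arguments": []
            }
        ],
        "executionConditions": {
            "@odata.type": "microsoft.graph.identityGovernance.triggerAndScopeBasedConditions",
            "scope": {
                "@odata.type": "microsoft.graph.identityGovernance.ruleBasedSubjectSet",
                "rule": "(department eq 'sales')"
            },
            "trigger": {
                "@odata.type": "microsoft.graph.identityGovernance.timeBasedAttributeTrigger",
                "timeBasedAttribute": "employeeHireDate",
                "offsetInDays": -2
            }
        }
    }
}
```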
### List workflow versions using Microsoft Graph
-Once a new version of a workflow is created, you can always find other versions by running the following call:
-```http
-GET https://graph.microsoft.com/beta/identityGovernance/lifecycleWorkflows/workflows/<id>/versions
-```
-Or to get a specific version:
-
-```http
-GET https://graph.microsoft.com/beta/identityGovernance/lifecycleWorkflows/workflows/<id>/versions/<version number>
-```
-
-### Reorder Tasks in a workflow using Microsoft Graph
-
-If you want to reorder tasks in a workflow, so that certain tasks run before others, you'll follow these steps:
- 1. Use a GET call to return the body of the workflow in which you want to reorder the tasks.
- ```http
- GET https://graph.microsoft.com/beta/identityGovernance/lifecycleWorkflows/workflows/<workflow id>
- ```
- 1. Copy the body of the workflow and paste it in the body section for the new API call.
-
- 1. Tasks are run in the order they appear within the workflow. To update the task copy the one you want to run first in the workflow body, and place it above the tasks you want to run after it in the workflow.
-
- 1. Run the **createNewVersion** API call.
+To list workflow versions via the Microsoft Graph API, see: [List versions (of a lifecycle workflow)](/graph/api/identitygovernance-workflow-list-versions).
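Minimal sketches of the call shapes, for all versions of a workflow and for one specific version (IDs and version number are placeholders):

```http
GET https://graph.microsoft.com/beta/identityGovernance/lifecycleWorkflows/workflows/<id>/versions

GET https://graph.microsoft.com/beta/identityGovernance/lifecycleWorkflows/workflows/<id>/versions/<version number>
```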
## Next steps
- [Check status of a workflow](check-status-workflow.md)
- [Customize workflow schedule](customize-workflow-schedule.md)
active-directory On Demand Workflow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/on-demand-workflow.md
Use the following steps to run a workflow on-demand.
## Run a workflow on-demand using Microsoft Graph
-Running a workflow on-demand using Microsoft Graph requires users to manually be added by their user ID with a POST call.
-
-To run a workflow on-demand in Microsoft Graph, use the following request and body:
-```http
-POST https://graph.microsoft.com/beta/identityGovernance/lifecycleWorkflows/workflows/<id>/activate
-Content-type: application/json
-```
-
-```Request body
-{
- "subjects":[
- {"id":"<userid>"},
- {"id":"<userid>"}
- ]
-}
-
-```
+To run a workflow on-demand via the Microsoft Graph API, see: [workflow: activate (run a workflow on-demand)](/graph/api/identitygovernance-workflow-activate).
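Running a workflow on-demand requires adding users manually by their user ID in the POST body. A minimal sketch of the request (IDs are placeholders):

```http
POST https://graph.microsoft.com/beta/identityGovernance/lifecycleWorkflows/workflows/<id>/activate
Content-type: application/json

{
    "subjects": [
        { "id": "<userId>" },
        { "id": "<userId>" }
    ]
}
```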
## Next steps
-- [workflow: activate (run a workflow on-demand)](/graph/api/identitygovernance-workflow-activate?view=graph-rest-beta)
- [Customize the schedule of workflows](customize-workflow-schedule.md)
- [Delete a Lifecycle workflow](delete-lifecycle-workflow.md)
active-directory Tutorial Prepare Azure Ad User Accounts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/tutorial-prepare-azure-ad-user-accounts.md
The manager attribute is used for email notification tasks. It's used by the li
:::image type="content" source="media/tutorial-lifecycle-workflows/graph-get-manager.png" alt-text="Screenshot of getting a manager in Graph explorer." lightbox="media/tutorial-lifecycle-workflows/graph-get-manager.png":::
-For more information about updating manager information for a user in Graph API, see [assign manager](/graph/api/user-post-manager?view=graph-rest-1.0&tabs=http) documentation. You can also set this attribute in the Azure Admin center. For more information, see [add or change profile information](/azure/active-directory/fundamentals/active-directory-users-profile-azure-portal?context=azure/active-directory/users-groups-roles/context/ugr-context).
+For more information about updating manager information for a user in Graph API, see [assign manager](/graph/api/user-post-manager?view=graph-rest-1.0&tabs=http&preserve-view=true) documentation. You can also set this attribute in the Azure Admin center. For more information, see [add or change profile information](/azure/active-directory/fundamentals/active-directory-users-profile-azure-portal?context=azure/active-directory/users-groups-roles/context/ugr-context).
### Enabling the Temporary Access Pass (TAP) A Temporary Access Pass is a time-limited pass issued by an admin that satisfies strong authentication requirements.
active-directory How To Connect Install Prerequisites https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-install-prerequisites.md
We recommend that you harden your Azure AD Connect server to decrease the securi
- Follow these [additional guidelines](/windows-server/identity/ad-ds/plan/security-best-practices/reducing-the-active-directory-attack-surface) to reduce the attack surface of your Active Directory environment. - Follow the [Monitor changes to federation configuration](how-to-connect-monitor-federation-changes.md) to set up alerts to monitor changes to the trust established between your IdP and Azure AD. - Enable Multi-Factor Authentication (MFA) for all users that have privileged access in Azure AD or in AD. One security issue with using AADConnect is that if an attacker can get control over the Azure AD Connect server they can manipulate users in Azure AD. To prevent an attacker from using these capabilities to take over Azure AD accounts, MFA offers protections so that even if an attacker manages to, for example, reset a user's password using Azure AD Connect they still cannot bypass the second factor.-- Disable Soft Matching on your tenant. Soft Matching is a great feature to help transfering source of autority for existing cloud only objects to Azure AD Connect, but it comes with certain security risks. If you do not require Soft Matching, you should disable it: https://docs.microsoft.com/azure/active-directory/hybrid/how-to-connect-syncservice-features#blocksoftmatch
+- Disable Soft Matching on your tenant. Soft Matching is a great feature to help transfer the source of authority for existing cloud-only objects to Azure AD Connect, but it comes with certain security risks. If you do not require Soft Matching, you should disable it: [https://docs.microsoft.com/azure/active-directory/hybrid/how-to-connect-syncservice-features#blocksoftmatch](how-to-connect-syncservice-features.md#blocksoftmatch)
### SQL Server used by Azure AD Connect * Azure AD Connect requires a SQL Server database to store identity data. By default, a SQL Server 2019 Express LocalDB (a light version of SQL Server Express) is installed. SQL Server Express has a 10-GB size limit that enables you to manage approximately 100,000 objects. If you need to manage a higher volume of directory objects, point the installation wizard to a different installation of SQL Server. The type of SQL Server installation can impact the [performance of Azure AD Connect](./plan-connect-performance-factors.md#sql-database-factors).
active-directory Concept Workload Identity Risk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/identity-protection/concept-workload-identity-risk.md
We detect risk on workload identities across sign-in behavior and offline indicators.
| | | | | Azure AD threat intelligence | Offline | This risk detection indicates some activity that is consistent with known attack patterns based on Microsoft's internal and external threat intelligence sources. | | Suspicious Sign-ins | Offline | This risk detection indicates sign-in properties or patterns that are unusual for this service principal. <br><br> The detection learns the baseline sign-in behavior for workload identities in your tenant over a period of between 2 and 60 days, and fires if one or more of the following unfamiliar properties appear during a later sign-in: IP address / ASN, target resource, user agent, hosting/non-hosting IP change, IP country, credential type. <br><br> Because of the programmatic nature of workload identity sign-ins, we provide a timestamp for the suspicious activity instead of flagging a specific sign-in event. <br><br> Sign-ins that are initiated after an authorized configuration change may trigger this detection. |
-| Suspicious Sign-ins | Offline | This risk detection indicates sign-in properties or patterns that are unusual for this service principal. <br><br> The detection learns the baselines sign-in behavior for workload identities in your tenant in between 2 and 60 days, and fires if one or more of the following unfamiliar properties appear during a later sign-in: IP address / ASN, target resource, user agent, hosting/non-hosting IP change, IP country, credential type. <br><br> Because of the programmatic nature of workload identity sign-ins, we provide a timestamp for the suspicious activity instead of flagging a specific sign-in event. <br><br> Sign-ins that are initiated after an authorized configuration change may trigger this detection. |
| Admin confirmed account compromised | Offline | This detection indicates an admin has selected 'Confirm compromised' in the Risky Workload Identities UI or by using the riskyServicePrincipals API. To see which admin has confirmed this account compromised, check the account's risk history (via UI or API). | | Leaked Credentials | Offline | This risk detection indicates that the account's valid credentials have been leaked. This leak can occur when someone checks the credentials into a public code artifact on GitHub, or when the credentials are leaked through a data breach. <br><br> When the Microsoft leaked credentials service acquires credentials from GitHub, the dark web, paste sites, or other sources, they're checked against current valid credentials in Azure AD to find valid matches. | | Malicious application | Offline | This detection indicates that Microsoft has disabled an application for violating our terms of service. We recommend [conducting an investigation](https://go.microsoft.com/fwlink/?linkid=2208429) of the application.|
-| Suspicious application | Offline | This detection indicates that Microsoft has identified an application that might be violating our terms of service, but hasn't disabled it. We recommend [conducting an investigation](https://go.microsoft.com/fwlink/?linkid=2208429) of the application.|
+| Suspicious application | Offline | This detection indicates that Microsoft has identified an application that may be violating our terms of service, but has not disabled it. We recommend [conducting an investigation](https://go.microsoft.com/fwlink/?linkid=2208429) of the application.|
## Identify risky workload identities
active-directory How To View Associated Resources For An Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/how-to-view-associated-resources-for-an-identity.md
https://management.azure.com/subscriptions/{resourceID of user-assigned identity
| Parameter | Example |Description | ||||
-| $filter | ```'type' eq 'microsoft.cognitiveservices/account' and contains(name, 'test')``` | An OData expression that allows you to filter any of the available fields: name, type, resourceGroup, subscriptionId, subscriptionDisplayName<br/><br/>The following operations are supported: ```and```, ```or```, ```eq``` and ```contains``` |
+| $filter | ```type eq 'microsoft.cognitiveservices/account' and contains(name, 'test')``` | An OData expression that allows you to filter any of the available fields: name, type, resourceGroup, subscriptionId, subscriptionDisplayName<br/><br/>The following operations are supported: ```and```, ```or```, ```eq``` and ```contains``` |
| $orderby | ```name asc``` | An OData expression that allows you to order by any of the available fields |
| $skip | 50 | The number of items you want to skip while paging through the results. |
| $top | 10 | The number of resources to return. 0 will return only a count of the resources. |
active-directory Confluencemicrosoft Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/confluencemicrosoft-tutorial.md
As of now, the following versions of Confluence are supported:
- Confluence: 5.0 to 5.10 - Confluence: 6.0.1 to 6.15.9-- Confluence: 7.0.1 to 7.17.0
+- Confluence: 7.0.1 to 7.19.0
> [!NOTE] > Please note that our Confluence Plugin also works on Ubuntu Version 16.04
active-directory Jiramicrosoft Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/jiramicrosoft-tutorial.md
Follow these steps to enable Azure AD SSO in the Azure portal.
![The Certificate download link](common/copy-metadataurl.png) +++
+1. To map the Name ID attribute in Azure AD to any desired user attribute, edit the Attributes & Claims section.
+
+ > [!div class="mx-imgBorder"]
+ > ![Screenshot showing how to edit Attributes and Claims.](common/edit-attribute.png)
+
+ a. After you select **Edit**, select **Unique User Identifier (Name ID)** to choose the attribute to map.
+
+ > [!div class="mx-imgBorder"]
+ > ![Screenshot showing the NameID in Attributes and Claims.](common/attribute-nameID.png)
+
+ b. On the next screen, select the desired attribute, such as user.userprincipalname, from the Source Attribute dropdown menu.
+
+ > [!div class="mx-imgBorder"]
+ > ![Screenshot showing how to select Attributes and Claims.](common/attribute-select.png)
+
+ c. Select the Save button at the top to save the selection.
+
+ > [!div class="mx-imgBorder"]
+ > ![Screenshot showing how to save Attributes and Claims.](common/attribute-save.png)
+
+ d. The user.userprincipalname attribute source in Azure AD is now mapped to the Name ID attribute name in Azure AD, which the SSO plugin compares with the username attribute in Atlassian.
+
+ > [!div class="mx-imgBorder"]
+ > ![Screenshot showing how to review Attributes and Claims.](common/attribute-review.png)
+
+ > [!NOTE]
+ > The SSO service provided by Microsoft Azure supports SAML authentication, which can identify users by different attributes such as givenname (first name), surname (last name), email (email address), and user principal name (username). We recommend not using email as an authentication attribute, because email addresses aren't always verified by Azure AD. The plugin compares the value of the Atlassian username attribute with the NameID attribute in Azure AD to determine valid user authentication.
+ ### Create an Azure AD test user In this section, you'll create a test user in the Azure portal called B.Simon.
active-directory Lucid All Products Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/lucid-all-products-provisioning-tutorial.md
# Tutorial: Configure Lucid (All Products) for automatic user provisioning
-This tutorial describes the steps you need to perform in both Lucid (All Products) and Azure Active Directory (Azure AD) to configure automatic user provisioning. When configured, Azure AD automatically provisions and de-provisions users and groups to [Lucid (All Products)](https://www.lucid.com/) using the Azure AD Provisioning service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../app-provisioning/user-provisioning.md).
+This tutorial describes the steps you need to perform in both Lucid (All Products) and Azure Active Directory (Azure AD) to configure automatic user provisioning. When configured, Azure AD automatically provisions and de-provisions users and groups to [Lucid (All Products)](https://lucid.co/) using the Azure AD Provisioning service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../app-provisioning/user-provisioning.md).
## Capabilities supported
active-directory Ms Confluence Jira Plugin Adminguide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/ms-confluence-jira-plugin-adminguide.md
The plug-in supports the following versions of Jira and Confluence:
* JIRA also supports 5.2. For more details, click [Microsoft Azure Active Directory single sign-on for JIRA 5.2](./jira52microsoft-tutorial.md) * Confluence: 5.0 to 5.10 * Confluence: 6.0.1 to 6.15.9
-* Confluence: 7.0.1 to 7.17.0
+* Confluence: 7.0.1 to 7.19.0
## Installation
The plug-in supports these versions:
* JIRA also supports 5.2. For more details, click [Microsoft Azure Active Directory single sign-on for JIRA 5.2](./jira52microsoft-tutorial.md) * Confluence: 5.0 to 5.10 * Confluence: 6.0.1 to 6.15.9
-* Confluence: 7.0.1 to 7.17.0
+* Confluence: 7.0.1 to 7.19.0
### Is the plug-in free or paid?
active-directory Admin Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/admin-api.md
Content-type: application/json
{ "id": "f5bf2fc6-7135-4d94-a6fe-c26e4543bc5a",
- "servicePrincipal": "90e10a26-94cd-49d6-8cd7-cacb10f00686",
+ "verifiableCredentialServicePrincipalId": "90e10a26-94cd-49d6-8cd7-cacb10f00686",
+ "verifiableCredentialRequestServicePrincipalId": "870e10a26-94cd-49d6-8cd7-cacb10f00fe",
+ "verifiableCredentialAdminServicePrincipalId": "760e10a26-94cd-49d6-8cd7-cacb10f00ab",
"status": "Enabled" } ```
active-directory Verifiable Credentials Configure Tenant https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/verifiable-credentials-configure-tenant.md
The following diagram illustrates the Verified ID architecture and the component
## Prerequisites - You need an Azure tenant with an active subscription. If you don't have Azure subscription, [create one for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).-- Ensure that you have the [global administrator](../../active-directory/roles/permissions-reference.md#global-administrator) permission for the directory you want to configure.
+- Ensure that you have the [global administrator](../../active-directory/roles/permissions-reference.md#global-administrator) or the [authentication policy administrator](../../active-directory/roles/permissions-reference.md#authentication-policy-administrator) permission for the directory you want to configure. If you're not the global administrator, you also need the [application administrator](../../active-directory/roles/permissions-reference.md#application-administrator) role to complete the app registration, including granting admin consent.
+- Ensure that you have the [contributor](../../role-based-access-control/built-in-roles.md#contributor) role for the Azure subscription or the resource group that you will deploy Azure Key Vault in.
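As a quick way to verify this prerequisite (a sketch; the UPN and resource group name below are illustrative, not from the article), you can list the roles assigned to you at the target scope:

```azurecli
# Sketch: list the roles assigned to you on the resource group that will
# host the Key Vault (replace the UPN and group name with your own).
az role assignment list \
    --assignee "user@contoso.com" \
    --resource-group myResourceGroup \
    --query "[].roleDefinitionName" \
    --output tsv
```

The output should include `Contributor` (or a role that includes its permissions) before you proceed.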
## Create a key vault
To add the required permissions, follow these steps:
1. Select **APIs my organization uses**.
-1. Search for the **Verifiable Credentials Service Request** service principal, and select it.
+1. Search for the **Verifiable Credentials Service Request** and **Verifiable Credentials Service** service principals, and select them.
![Screenshot that shows how to select the service principal.](media/verifiable-credentials-configure-tenant/add-app-api-permissions-select-service-principal.png)
advisor Advisor Cost Recommendations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/advisor/advisor-cost-recommendations.md
Azure Advisor helps you optimize and reduce your overall Azure spend by identifying idle and underutilized resources.
1. On the **Advisor** dashboard, select the **Cost** tab.
-## Optimize virtual machine spend by resizing or shutting down underutilized instances
+## Optimize virtual machine (VM) or virtual machine scale set (VMSS) spend by resizing or shutting down underutilized instances
-Although certain application scenarios can result in low utilization by design, you can often save money by managing the size and number of your virtual machines.
+Although certain application scenarios can result in low utilization by design, you can often save money by managing the size and number of your virtual machines or virtual machine scale sets.
-Advisor uses machine-learning algorithms to identify low utilization and to identify the ideal recommendation to ensure optimal usage of virtual machines. The recommended actions are shut down or resize, specific to the resource being evaluated.
+Advisor uses machine-learning algorithms to identify low utilization and to identify the ideal recommendation to ensure optimal usage of virtual machines and virtual machine scale sets. The recommended actions are shut down or resize, specific to the resource being evaluated.
### Shutdown recommendations
-Advisor identifies resources that have not been used at all over the last 7 days and makes a recommendation to shut them down.
+Advisor identifies resources that haven't been used at all over the last 7 days and makes a recommendation to shut them down.
-- Recommendation criteria include **CPU** and **Outbound Network utilization** metrics. **Memory** is not considered since weΓÇÖve found that **CPU** and **Outbound Network utilization** are sufficient.
+- Recommendation criteria include **CPU** and **Outbound Network utilization** metrics. **Memory** isn't considered since we've found that **CPU** and **Outbound Network utilization** are sufficient.
- The last 7 days of utilization data are analyzed-- Metrics are sampled every 30 seconds, aggregated to 1 min and then further aggregated to 30 mins (we take the max of average values while aggregating to 30 mins)
+- Metrics are sampled every 30 seconds, aggregated to 1 min and then further aggregated to 30 mins (we take the max of average values while aggregating to 30 mins). On virtual machine scale sets, the metrics from individual virtual machines are aggregated using the average of the metrics across instances.
- A shutdown recommendation is created if: - P95th of the maximum value of CPU utilization summed across all cores is less than 3%. - P100 of average CPU in last 3 days (sum over all cores) <= 2%
Advisor identifies resources that have not been used at all over the last 7 days
### Resize SKU recommendations
-Advisor recommends resizing virtual machines when it's possible to fit the current load on a more appropriate SKU, which is less expensive (based on retail rates).
+Advisor recommends resizing virtual machines when it's possible to fit the current load on a more appropriate SKU, which is less expensive (based on retail rates). On virtual machine scale sets, Advisor recommends resizing when it's possible to fit the current load on a more appropriate cheaper SKU, or a lower number of instances of the same SKU.
- Recommendation criteria include **CPU**, **Memory** and **Outbound Network utilization**. - The last 7 days of utilization data are analyzed-- Metrics are sampled every 30 seconds, aggregated to 1 min and then further aggregated to 30 mins (we take the max of average values while aggregating to 30 mins)-- An appropriate SKU is determined based on the following criteria:
- - Performance of the workloads on the new SKU should not be impacted.
+- Metrics are sampled every 30 seconds, aggregated to 1 minute and then further aggregated to 30 minutes (taking the max of average values while aggregating to 30 minutes). On virtual machine scale sets, the metrics from individual virtual machines are aggregated using the average of the metrics for instance count recommendations, and aggregated using the max of the metrics for SKU change recommendations.
+- An appropriate SKU (for virtual machines) or instance count (for virtual machine scale set resources) is determined based on the following criteria:
+ - Performance of the workloads on the new SKU shouldn't be impacted.
- Target for user-facing workloads: - P95 of CPU and Outbound Network utilization at 40% or lower on the recommended SKU - P100 of Memory utilization at 60% or lower on the recommended SKU - Target for non user-facing workloads: - P95 of the CPU and Outbound Network utilization at 80% or lower on the new SKU - P100 of Memory utilization at 80% or lower on the new SKU
- - The new SKU has the same Accelerated Networking and Premium Storage capabilities
- - The new SKU is supported in the current region of the Virtual Machine with the recommendation
- - The new SKU is less expensive
+ - The new SKU, if applicable, has the same Accelerated Networking and Premium Storage capabilities
+ - The new SKU, if applicable, is supported in the current region of the Virtual Machine with the recommendation
+ - The new SKU, if applicable, is less expensive
+ - Instance count recommendations also take into account if the virtual machine scale set is being managed by Service Fabric or AKS. For service fabric managed resources, recommendations take into account reliability and durability tiers.
- Advisor determines if a workload is user-facing by analyzing its CPU utilization characteristics. The approach is based on findings by Microsoft Research. You can find more details here: [Prediction-Based Power Oversubscription in Cloud Platforms - Microsoft Research](https://www.microsoft.com/research/publication/prediction-based-power-oversubscription-in-cloud-platforms/).-- Advisor recommends not just smaller SKUs in the same family (for example D3v2 to D2v2) but also SKUs in a newer version (for example D3v2 to D2v3) or a different family (for example D3v2 to E3v2) based on the best fit and the cheapest costs with no performance impacts.
+- Based on the best fit and the cheapest costs with no performance impacts, Advisor not only recommends smaller SKUs in the same family (for example D3v2 to D2v2), but also SKUs in a newer version (for example D3v2 to D2v3), or a different family (for example D3v2 to E3v2).
+- For virtual machine scale set resources, Advisor prioritizes instance count recommendations over SKU change recommendations because instance count changes are easily actionable, resulting in faster savings.
### Burstable recommendations We evaluate if workloads are eligible to run on specialized SKUs called **Burstable SKUs** that support variable workload performance requirements and are less expensive than general purpose SKUs. Learn more about burstable SKUs here: [B-series burstable - Azure Virtual Machines](../virtual-machines/sizes-b-series-burstable.md). -- A burstable SKU recommendation is made if:
+A burstable SKU recommendation is made if:
+ - The average **CPU utilization** is less than a burstable SKU's baseline performance - If the P95 of CPU is less than two times the burstable SKU's baseline performance
- - If the current SKU does not have accelerated networking enabled (burstable SKUs donΓÇÖt support accelerated networking yet)
+ - If the current SKU doesn't have accelerated networking enabled, since burstable SKUs don't support accelerated networking yet
- If we determine that the Burstable SKU credits are sufficient to support the average CPU utilization over 7 days-- The result is a recommendation suggesting that the user resize their current VM to a burstable SKU (with the same number of cores) to take advantage of the low costs and the fact that the workload has low average utilization but high spikes in cases, which can be best served by the B-series SKU. +
+The resulting recommendation suggests that a user should resize their current virtual machine or virtual machine scale set to a burstable SKU with the same number of cores. This suggestion is made so a user can take advantage of lower cost and also the fact that the workload has low average utilization but high spikes in cases, which can be best served by the B-series SKU.
-Advisor shows the estimated cost savings for either recommended action: resize or shut down. For resize, Advisor provides current and target SKU information.
-To be more selective about the actioning on underutilized virtual machines, you can adjust the CPU utilization rule on a per-subscription basis.
+Advisor shows the estimated cost savings for either recommended action: resize or shut down. For resize, Advisor provides current and target SKU/instance count information.
+To be more selective about acting on underutilized virtual machines or virtual machine scale sets, you can adjust the CPU utilization rule on a per-subscription basis.
-There are cases where the recommendations cannot be adopted or might not be applicable, such as some of these common scenarios (there may be other cases):
-- Virtual machine has been provisioned to accommodate upcoming traffic-- Virtual machine uses other resources not considered by the resize algo, i.e. metrics other than CPU, Memory and Network
+In some cases recommendations can't be adopted or might not be applicable, such as some of these common scenarios (there may be other cases):
+- Virtual machine or virtual machine scale set has been provisioned to accommodate upcoming traffic
+- Virtual machine or virtual machine scale set uses other resources not considered by the resize algo, such as metrics other than CPU, Memory and Network
- Specific testing being done on the current SKU, even if not utilized efficiently-- Need to keep VM SKUs homogeneous -- VM being utilized for disaster recovery purposes
+- Need to keep virtual machine or virtual machine scale set SKUs homogeneous
+- Virtual machine or virtual machine scale set being utilized for disaster recovery purposes
-In such cases simply use the Dismiss/Postpone options associated with the recommendation.
+In such cases, simply use the Dismiss/Postpone options associated with the recommendation.
-We are constantly working on improving these recommendations. Feel free to share feedback on [Advisor Forum](https://aka.ms/advisorfeedback).
+We're constantly working on improving these recommendations. Feel free to share feedback on [Advisor Forum](https://aka.ms/advisorfeedback).
## Next steps
advisor Advisor Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/advisor/advisor-release-notes.md
To learn more, visit [How to filter Advisor recommendations using tags](advisor-
## January 2022
-[**Shutdown/Resize your virtual machines**](advisor-cost-recommendations.md#optimize-virtual-machine-spend-by-resizing-or-shutting-down-underutilized-instances) recommendation was enhanced to increase the quality, robustness, and applicability.
+[**Shutdown/Resize your virtual machines**](advisor-cost-recommendations.md#optimize-virtual-machine-vm-or-virtual-machine-scale-set-vmss-spend-by-resizing-or-shutting-down-underutilized-instances) recommendation was enhanced to increase the quality, robustness, and applicability.
Improvements include:
Improvements include:
![vm-right-sizing-recommendation](media/advisor-overview/advisor-vm-right-sizing.png)
-Read the [How-to guide](advisor-cost-recommendations.md#optimize-virtual-machine-spend-by-resizing-or-shutting-down-underutilized-instances) to learn more.
+Read the [How-to guide](advisor-cost-recommendations.md#optimize-virtual-machine-vm-or-virtual-machine-scale-set-vmss-spend-by-resizing-or-shutting-down-underutilized-instances) to learn more.
aks Update Credentials https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/update-credentials.md
SP_ID=$(az aks show --resource-group myResourceGroup --name myAKSCluster \
With a variable set that contains the service principal ID, now reset the credentials using [az ad sp credential reset][az-ad-sp-credential-reset]. The following example lets the Azure platform generate a new secure secret for the service principal. This new secure secret is also stored as a variable. ```azurecli-interactive
-SP_SECRET=$(az ad sp credential reset --name "$SP_ID" --query password -o tsv)
+SP_SECRET=$(az ad sp credential reset --id "$SP_ID" --query password -o tsv)
``` Now continue on to [update AKS cluster with new service principal credentials](#update-aks-cluster-with-new-service-principal-credentials). This step is necessary for the Service Principal changes to reflect on the AKS cluster.
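The follow-on step mentioned above can be sketched as follows, using the article's example resource group and cluster names (a sketch; see the linked section for the full procedure):

```azurecli-interactive
# Apply the regenerated secret to the AKS cluster. This can take a few
# minutes while every node re-authenticates with the new credential.
az aks update-credentials \
    --resource-group myResourceGroup \
    --name myAKSCluster \
    --reset-service-principal \
    --service-principal "$SP_ID" \
    --client-secret "$SP_SECRET"
```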
In this article, the service principal for the AKS cluster itself and the Azure
[az-ad-sp-credential-list]: /cli/azure/ad/sp/credential#az_ad_sp_credential_list [az-ad-sp-credential-reset]: /cli/azure/ad/sp/credential#az_ad_sp_credential_reset [node-image-upgrade]: ./node-image-upgrade.md
-[node-surge-upgrade]: upgrade-cluster.md#customize-node-surge-upgrade
+[node-surge-upgrade]: upgrade-cluster.md#customize-node-surge-upgrade
aks Use Network Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/use-network-policies.md
This article shows you how to install the Network Policy engine and create Kuber
## Before you begin
-You need the Azure CLI version 2.0.61 or later installed and configured. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][install-azure-cli].
+You need the Azure CLI version 2.0.61 or later installed and configured. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][install-azure-cli].
## Overview of Network Policy
Azure provides two ways to implement Network Policy. You choose a Network Policy
* Azure's own implementation, called *Azure Network Policy Manager (NPM)*. * *Calico Network Policies*, an open-source network and network security solution founded by [Tigera][tigera].
-Azure NPM for Linux uses Linux *IPTables* to enforce the specified policies. Policies are translated into sets of allowed and disallowed IP pairs. These pairs are then programmed as IPTable filter rules.
+Azure NPM for Linux uses Linux *IPTables* and Azure NPM for Windows uses *Host Network Service (HNS) ACLPolicies* to enforce the specified policies. Policies are translated into sets of allowed and disallowed IP pairs. These pairs are then programmed as IPTable/HNS ACLPolicy filter rules.
## Differences between Azure NPM and Calico Network Policy and their capabilities | Capability | Azure NPM | Calico Network Policy | ||-|--|
-| Supported platforms | Linux | Linux, Windows Server 2019 and 2022 |
+| Supported platforms | Linux, Windows Server 2022 | Linux, Windows Server 2019 and 2022 |
| Supported networking options | Azure CNI | Azure CNI (Linux, Windows Server 2019 and 2022) and kubenet (Linux) | | Compliance with Kubernetes specification | All policy types supported | All policy types supported | | Additional features | None | Extended policy model consisting of Global Network Policy, Global Network Set, and Host Endpoint. For more information on using the `calicoctl` CLI to manage these extended features, see [calicoctl user reference][calicoctl]. | | Support | Supported by Azure support and Engineering team | Calico community support. For more information on additional paid support, see [Project Calico support options][calico-support]. | | Logging | Logs available with **kubectl log -n kube-system <network-policy-pod>** command | For more information, see [Calico component logs][calico-logs] |
+## Limitations
+
+Azure Network Policy Manager (NPM) doesn't support IPv6. Otherwise, Azure NPM fully supports the network policy spec in Linux.
+* In Windows, Azure NPM doesn't support the following:
+ * named ports
+ * SCTP protocol
+ * negative match label or namespace selectors (for example, all labels except "debug=true")
+ * "except" CIDR blocks (a CIDR with exceptions)
+
+>[!NOTE]
+> * Azure NPM pod logs will record an error if an unsupported policy is created.
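For illustration (a hypothetical manifest, not from the article), a policy that references a named port falls under these limitations, so Azure NPM on Windows would log an error for it rather than enforce it:

```shell
# Hypothetical example: this policy uses a named port ("http"), which Azure NPM
# doesn't support on Windows, so the NPM pod logs would record an error for it.
cat <<'EOF' | kubectl apply -f -
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: named-port-example
  namespace: demo
spec:
  podSelector:
    matchLabels:
      app: server
  ingress:
  - ports:
    - port: http   # named port, unsupported by Azure NPM on Windows
EOF
```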
+ ## Create an AKS cluster and enable Network Policy To see network policies in action, let's create an AKS cluster that supports network policy and then work on adding policies.
The following example script:
Instead of using a system-assigned identity, you can also use a user-assigned identity. For more information, see [Use managed identities](use-managed-identity.md).
-### Create an AKS cluster with Azure NPM enabled
+### Create an AKS cluster with Azure NPM enabled - Linux only
-In this section, we will work on creating a cluster with Linux node pools and Azure NPM enabled.
+In this section, we'll work on creating a cluster with Linux node pools and Azure NPM enabled.
To begin, you should replace the values for *$RESOURCE_GROUP_NAME* and *$CLUSTER_NAME* variables.
az aks create \
--network-policy azure ```
+### Create an AKS cluster with Azure NPM enabled - Windows Server 2022 (Preview)
+
+In this section, we'll work on creating a cluster with Windows node pools and Azure NPM enabled.
+
+Run the following commands before creating a cluster:
+
+```azurecli
+ az extension add --name aks-preview
+ az extension update --name aks-preview
+ az feature register --namespace Microsoft.ContainerService --name AKSWindows2022Preview
+ az feature register --namespace Microsoft.ContainerService --name WindowsNetworkPolicyPreview
+ az provider register -n Microsoft.ContainerService
+```
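Feature registration can take several minutes. As a sketch (assuming the `az feature show` command available in current Azure CLI versions), you can check the registration state before proceeding:

```azurecli
# Wait until this reports "Registered" before creating the cluster.
az feature show \
    --namespace Microsoft.ContainerService \
    --name WindowsNetworkPolicyPreview \
    --query properties.state \
    --output tsv
```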
+
+> [!NOTE]
+> At this time, Azure NPM with Windows nodes is available on Windows Server 2022 only
+>
+
+Now, you should replace the values for *$RESOURCE_GROUP_NAME*, *$CLUSTER_NAME* and *$WINDOWS_USERNAME* variables.
+
+```azurecli-interactive
+$RESOURCE_GROUP_NAME=myResourceGroup-NP
+$CLUSTER_NAME=myAKSCluster
+$WINDOWS_USERNAME=myWindowsUserName
+$LOCATION=canadaeast
+```
+
+Create a username to use as administrator credentials for your Windows Server containers on your cluster. The following command prompts you for a username. Set it to `$WINDOWS_USERNAME` (remember that the commands in this article are entered in a Bash shell).
+
+```azurecli-interactive
+echo "Please enter the username to use as administrator credentials for Windows Server containers on your cluster: " && read WINDOWS_USERNAME
+```
+
+Use the following command to create a cluster:
+
+```azurecli
+az aks create \
+ --resource-group $RESOURCE_GROUP_NAME \
+ --name $CLUSTER_NAME \
+ --node-count 1 \
+ --windows-admin-username $WINDOWS_USERNAME \
+ --network-plugin azure \
+ --network-policy azure
+```
+
+It takes a few minutes to create the cluster. By default, your cluster is created with only a Linux node pool. If you would like to use Windows node pools, you can add one. For example:
+
+```azurecli
+az aks nodepool add \
+ --resource-group $RESOURCE_GROUP_NAME \
+ --cluster-name $CLUSTER_NAME \
+ --os-type Windows \
+ --name npwin \
+ --node-count 1
+```
++ ### Create an AKS cluster for Calico network policies Create the AKS cluster and specify *azure* for the network plugin, and *calico* for the Network Policy. Using *calico* as the Network Policy enables Calico networking on both Linux and Windows node pools.
When the cluster is ready, configure `kubectl` to connect to your Kubernetes clu
```azurecli-interactive az aks get-credentials --resource-group $RESOURCE_GROUP_NAME --name $CLUSTER_NAME ```
-To begin verification of Network Policy, we will create a sample application and set traffic rules.
+To begin verification of Network Policy, we'll create a sample application and set traffic rules.
Firstly, let's create a namespace called *demo* to run the example pods:
```azurecli-interactive
kubectl create namespace demo
```
-We will now create two pods in the cluster named *client* and *server*.
+We'll now create two pods in the cluster named *client* and *server*.
> [!NOTE]
> If you want to schedule the *client* or *server* on a particular node, add the following bit before the *--command* argument in the pod creation [kubectl run][kubectl-run] command:
Now, in the client's shell, verify connectivity with the server by executing the
```bash
/agnhost connect <server-ip>:80 --timeout=3s --protocol=tcp
```
-Connectivity with traffic will be blocked since the server is labeled with app=server, but the client is not labeled. The connect command above will yield this output:
+Connectivity with traffic will be blocked since the server is labeled with app=server, but the client isn't labeled. The connect command above will yield this output:
```output
TIMEOUT
```
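To then allow the client to reach the server, you'd apply a network policy that selects the *app=server* pods and admits ingress from *app=client* pods. A minimal sketch (the label names follow the example above; the file and policy names are illustrative):

```shell
# Write a sample NetworkPolicy that allows TCP/80 from pods labeled
# app=client to pods labeled app=server in the demo namespace.
cat > demo-policy.yaml <<'EOF'
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: demo-policy
  namespace: demo
spec:
  podSelector:
    matchLabels:
      app: server
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: client
      ports:
        - port: 80
          protocol: TCP
EOF
echo "wrote demo-policy.yaml"
# Apply it with: kubectl apply -f demo-policy.yaml
```

After applying the policy, re-running the `/agnhost connect` command from the client pod should succeed instead of timing out.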
To learn more about policies, see [Kubernetes network policies][kubernetes-netwo
[windows-server-password]: /windows/security/threat-protection/security-policy-settings/password-must-meet-complexity-requirements#reference
[az-extension-add]: /cli/azure/extension#az_extension_add
[az-extension-update]: /cli/azure/extension#az_extension_update
-[dsr]: ../load-balancer/load-balancer-multivip-overview.md#rule-type-2-backend-port-reuse-by-using-floating-ip
+[dsr]: ../load-balancer/load-balancer-multivip-overview.md#rule-type-2-backend-port-reuse-by-using-floating-ip
api-management Api Management Howto Aad B2c https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-howto-aad-b2c.md
Azure Active Directory B2C is a cloud identity management solution for consumer-
In this tutorial, you'll learn the configuration required in your API Management service to integrate with Azure Active Directory B2C. As noted later in this article, if you are using the deprecated legacy developer portal, some steps will differ.
+For an overview of options to secure the developer portal, see [Authentication and authorization in API Management](authentication-authorization-overview.md#developer-portal-user-plane).
+
> [!IMPORTANT]
> * This article has been updated with steps to configure an Azure AD B2C app using the Microsoft Authentication Library ([MSAL](../active-directory/develop/msal-overview.md)).
> * If you previously configured an Azure AD B2C app for user sign-in using the Azure AD Authentication Library (ADAL), we recommend that you [migrate to MSAL](#migrate-to-msal).
-For information about enabling access to the developer portal by using classic Azure Active Directory, see [How to authorize developer accounts using Azure Active Directory](api-management-howto-aad.md).
-
## Prerequisites

* An Azure Active Directory B2C tenant in which to create an application. For more information, see [Azure Active Directory B2C overview](../active-directory-b2c/overview.md).
api-management Api Management Howto Aad https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-howto-aad.md
In this article, you'll learn how to:
> * Enable access to the developer portal for users from Azure Active Directory (Azure AD).
> * Manage groups of Azure AD users by adding external groups that contain the users.
+For an overview of options to secure the developer portal, see [Authentication and authorization in API Management](authentication-authorization-overview.md#developer-portal-user-plane).
> [!IMPORTANT]
> * This article has been updated with steps to configure an Azure AD app using the Microsoft Authentication Library ([MSAL](../active-directory/develop/msal-overview.md)).
> * If you previously configured an Azure AD app for user sign-in using the Azure AD Authentication Library (ADAL), we recommend that you [migrate to MSAL](#migrate-to-msal).
+
## Prerequisites
api-management Api Management Howto Mutual Certificates For Clients https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-howto-mutual-certificates-for-clients.md
API Management provides the capability to secure access to APIs (i.e., client to API Management) using client certificates. You can validate certificates presented by the connecting client and check certificate properties against desired values using policy expressions.
-For information about securing access to the back-end service of an API using client certificates (i.e., API Management to backend), see [How to secure back-end services using client certificate authentication](./api-management-howto-mutual-certificates.md)
+For information about securing access to the back-end service of an API using client certificates (i.e., API Management to backend), see [How to secure back-end services using client certificate authentication](./api-management-howto-mutual-certificates.md).
+
+For a conceptual overview of API authorization, see [Authentication and authorization in API Management](authentication-authorization-overview.md#gateway-data-plane).
+
> [!IMPORTANT]
> To receive and verify client certificates over HTTP/2 in the Developer, Basic, Standard, or Premium tiers, you must turn on the "Negotiate client certificate" setting on the "Custom domains" blade as shown below.
api-management Api Management Howto Protect Backend With Aad https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-howto-protect-backend-with-aad.md
# Protect an API in Azure API Management using OAuth 2.0 authorization with Azure Active Directory
-In this article, you'll learn high level steps to configure your [Azure API Management](api-management-key-concepts.md) instance to protect an API, by using the [OAuth 2.0 protocol with Azure Active Directory (Azure AD)](../active-directory/develop/active-directory-v2-protocols.md).
+In this article, you'll learn high level steps to configure your [Azure API Management](api-management-key-concepts.md) instance to protect an API, by using the [OAuth 2.0 protocol with Azure Active Directory (Azure AD)](../active-directory/develop/active-directory-v2-protocols.md).
+
+For a conceptual overview of API authorization, see [Authentication and authorization in API Management](authentication-authorization-overview.md#gateway-data-plane).
## Prerequisites
api-management Self Hosted Gateway V0 V1 Retirement Oct 2023 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/breaking-changes/self-hosted-gateway-v0-v1-retirement-oct-2023.md
Your service is affected by this change if:
* Your service is in the Developer or Premium service tier.
* You have deployed a self-hosted gateway using version v0 or v1 of the self-hosted gateway [container image](../self-hosted-gateway-migration-guide.md#using-the-new-configuration-api).
+### Assessing impact with Azure Advisor
+
+To make the migration easier, we've introduced new Azure Advisor recommendations:
+
+- **Use self-hosted gateway v2** recommendation - Identifies Azure API Management instances that use self-hosted gateway v0.x or v1.x.
+- **Use Configuration API v2 for self-hosted gateways** recommendation - Identifies Azure API Management instances that use Configuration API v1 for self-hosted gateways.
+
+We highly recommend using the ["All Recommendations" overview in Azure Advisor](https://portal.azure.com/#view/Microsoft_Azure_Expert/AdvisorMenuBlade/~/All) to determine if a migration is required. Use the filtering options to see if either of these recommendations is present.
+
## What is the deadline for the change?

**Support for the v1 configuration API and for the v0 and v1 container images of the self-hosted gateway will retire on 1 October 2023.**
If you have questions, get answers from community experts in [Microsoft Q&A](htt
## Next steps
-See all [upcoming breaking changes and feature retirements](overview.md).
+See all [upcoming breaking changes and feature retirements](overview.md).
api-management Developer Portal Basic Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/developer-portal-basic-authentication.md
In the developer portal for Azure API Management, the default authentication method for users is to provide a username and password. In this article, learn how to set up users with basic authentication credentials to the developer portal.
+For an overview of options to secure the developer portal, see [Authentication and authorization in API Management](authentication-authorization-overview.md#developer-portal-user-plane).
+
## Prerequisites
api-management Howto Protect Backend Frontend Azure Ad B2c https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/howto-protect-backend-frontend-azure-ad-b2c.md
This scenario shows you how to configure your Azure API Management instance to protect an API. We'll use the Azure AD B2C SPA (Auth Code + PKCE) flow to acquire a token, alongside API Management to secure an Azure Functions backend using EasyAuth.
+For a conceptual overview of API authorization, see [Authentication and authorization in API Management](authentication-authorization-overview.md#gateway-data-plane).
+
## Aims

We're going to see how API Management can be used in a simplified scenario with Azure Functions and Azure AD B2C. You'll create a JavaScript (JS) app calling an API that signs in users with Azure AD B2C. Then you'll use API Management's validate-jwt, CORS, and Rate Limit By Key policy features to protect the Backend API.
api-management Mitigate Owasp Api Threats https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/mitigate-owasp-api-threats.md
More information about this threat: [API10:2019 Insufficient logging and monito
## Next steps
+* [Authentication and authorization in API Management](authentication-authorization-overview.md)
* [Security baseline for API Management](/security/benchmark/azure/baselines/api-management-security-baseline)
* [Security controls by Azure policy](security-controls-policy.md)
* [Landing zone accelerator for API Management](/azure/cloud-adoption-framework/scenarios/app-platform/api-management/landing-zone-accelerator)
api-management Troubleshoot Response Timeout And Errors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/troubleshoot-response-timeout-and-errors.md
General strategies for mitigating SNAT port exhaustion are discussed in [Trouble
### Scale your APIM instance
-Each API Management instance is allocated a number of SNAT ports, based on APIM units. You can allocate additional SNAT ports by scaling your API Management instance with additional units. For more info, see [Scale your API Management service](upgrade-and-scale.md#scale-your-api-management-service)
+Each API Management instance is allocated a number of SNAT ports, based on APIM units. You can allocate additional SNAT ports by scaling your API Management instance with additional units. For more info, see [Scale your API Management service](upgrade-and-scale.md#scale-your-api-management-instance).
> [!NOTE]
> SNAT port usage is currently not available as a metric for autoscaling API Management units.
See [API Management access restriction policies](api-management-access-restricti
## See also

* [Azure Load Balancer: Troubleshooting outbound connections failures](../load-balancer/troubleshoot-outbound-connection.md)
-* [Azure App Service: Troubleshooting intermittent outbound connection errors](../app-service/troubleshoot-intermittent-outbound-connection-errors.md)
+* [Azure App Service: Troubleshooting intermittent outbound connection errors](../app-service/troubleshoot-intermittent-outbound-connection-errors.md)
api-management Upgrade And Scale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/upgrade-and-scale.md
-# Mandatory fields. See more on aka.ms/skyeye/meta.
Title: Upgrade and scale an Azure API Management instance | Microsoft Docs
-description: This topic describes how to upgrade and scale an Azure API Management instance.
-
+description: This article describes how to upgrade and scale an Azure API Management instance.
-Previously updated : 04/20/2020
+Last updated : 09/14/2022

# Upgrade and scale an Azure API Management instance
-Customers can scale an Azure API Management instance by adding and removing units. A **unit** is composed of dedicated Azure resources and has a certain load-bearing capacity expressed as a number of API calls per month. This number does not represent a call limit, but rather a maximum throughput value to allow for rough capacity planning. Actual throughput and latency vary broadly depending on factors such as number and rate of concurrent connections, the kind and number of configured policies, request and response sizes, and backend latency.
+Customers can scale an Azure API Management instance in a dedicated service tier by adding and removing units. A **unit** is composed of dedicated Azure resources and has a certain load-bearing capacity expressed as a number of API calls per second. This number doesn't represent a call limit, but rather an estimated maximum throughput value to allow for rough capacity planning. Actual throughput and latency vary broadly depending on factors such as number and rate of concurrent connections, the kind and number of configured policies, request and response sizes, and backend latency.
+
-Capacity and price of each unit depends on the **tier** in which the unit exists. You can choose between four tiers: **Developer**, **Basic**, **Standard**, **Premium**. If you need to increase capacity for a service within a tier, you should add a unit. If the tier that is currently selected in your API Management instance does not allow adding more units, you need to upgrade to a higher-level tier.
+> [!NOTE]
+> API Management instances in the **Consumption** tier scale automatically based on traffic. Currently, you cannot upgrade from or downgrade to the Consumption tier.
-The price of each unit and the available features (for example, multi-region deployment) depends on the tier that you chose for your API Management instance. The [pricing details](https://azure.microsoft.com/pricing/details/api-management/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio) article, explains the price per unit and features you get in each tier.
+The throughput and price of each unit depend on the [service tier](api-management-features.md) in which the unit exists. If you need to increase capacity for a service within a tier, you should add a unit. If the tier that is currently selected in your API Management instance doesn't allow adding more units, you need to upgrade to a higher-level tier.
>[!NOTE]
->The [pricing details](https://azure.microsoft.com/pricing/details/api-management/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio) article shows approximate numbers of unit capacity in each tier. To get more accurate numbers, you need to look at a realistic scenario for your APIs. See the [Capacity of an Azure API Management instance](api-management-capacity.md) article.
+>See [API Management pricing](https://azure.microsoft.com/pricing/details/api-management/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio) for features, scale limits, and estimated throughput in each tier. To get more accurate throughput numbers, you need to look at a realistic scenario for your APIs. See [Capacity of an Azure API Management instance](api-management-capacity.md).
## Prerequisites To follow the steps from this article, you must:
-+ Have an active Azure subscription.
-
- [!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)]
-
-+ Have an API Management instance. For more information, see [Create an Azure API Management instance](get-started-create-service-instance.md).
++ Have an API Management instance. For more information, see [Create an Azure API Management instance](get-started-create-service-instance.md).
+ Understand the concept of [Capacity of an Azure API Management instance](api-management-capacity.md).
-
## Upgrade and scale
-You can choose between four tiers: **Developer**, **Basic**, **Standard**, and **Premium**. The **Developer** tier should be used to evaluate the service; it should not be used for production. The **Developer** tier does not have SLA and you cannot scale this tier (add/remove units).
+You can choose between four dedicated tiers: **Developer**, **Basic**, **Standard**, and **Premium**.
-**Basic**, **Standard**, and **Premium** are production tiers that have SLA and can be scaled. The **Basic** tier is the cheapest tier with an SLA and it can be scaled up to two units, **Standard** tier can be scaled to up to four units. You can add any number of units to the **Premium** tier.
+* The **Developer** tier should be used to evaluate the service; it shouldn't be used for production. The **Developer** tier doesn't have SLA and you can't scale this tier (add/remove units).
-The **Premium** tier enables you to distribute a single Azure API Management instance across any number of desired Azure regions. When you initially create an Azure API Management service, the instance contains only one unit and resides in a single Azure region. The initial region is designated as the **primary** region. Additional regions can be easily added. When adding a region, you specify the number of units you want to allocate. For example, you can have one unit in the **primary** region and five units in some other region. You can tailor the number of units to the traffic you have in each region. For more information, see [How to deploy an Azure API Management service instance to multiple Azure regions](api-management-howto-deploy-multi-region.md).
+* **Basic**, **Standard**, and **Premium** are production tiers that have SLA and can be scaled. For pricing details and scale limits, see [API Management pricing](https://azure.microsoft.com/pricing/details/api-management/#pricing).
-You can upgrade and downgrade to and from any tier. Upgrading or downgrading can remove some features - for example, VNETs or multi-region deployment, when downgrading to Standard or Basic from the Premium tier.
+* The **Premium** tier enables you to distribute a single Azure API Management instance across any number of desired Azure regions. When you initially create an Azure API Management service, the instance contains only one unit and resides in a single Azure region (the **primary** region).
-> [!NOTE]
-> The upgrade or scale process can take from 15 to 45 minutes to apply. You get notified when it is done.
+ Additional regions can be easily added. When adding a region, you specify the number of units you want to allocate. For example, you can have one unit in the primary region and five units in some other region. You can tailor the number of units to the traffic you have in each region. For more information, see [How to deploy an Azure API Management service instance to multiple Azure regions](api-management-howto-deploy-multi-region.md).
+
+* You can upgrade and downgrade to and from any dedicated service tier. Downgrading can remove some features. For example, downgrading to Standard or Basic from the Premium tier can remove virtual networks or multi-region deployment.
> [!NOTE]
-> API Management service in the **Consumption** tier scales automatically based on the traffic.
+> The upgrade or scale process can take from 15 to 45 minutes to apply. You get notified when it is done.
-## Scale your API Management service
+## Scale your API Management instance
![Scale API Management service in Azure portal](./media/upgrade-and-scale/portal-scale.png)
-1. Navigate to your API Management service in the [Azure portal](https://portal.azure.com/).
-2. Select **Locations** from the menu.
-3. Click on the row with the location you want to scale.
-4. Specify the new number of **units** - either use the slider or type the number.
-5. Click **Apply**.
+1. Navigate to your API Management instance in the [Azure portal](https://portal.azure.com/).
+1. Select **Locations** from the menu.
+1. Select the row with the location you want to scale.
+1. Specify the new number of **Units** - use the slider if available, or type the number.
+1. Select **Apply**.
+
+> [!NOTE]
+> In the Premium service tier, you can optionally configure availability zones and a virtual network in a selected location. For more information, see [Deploy API Management service to an additional location](api-management-howto-deploy-multi-region.md#-deploy-api-management-service-to-an-additional-location).
## Change your API Management service tier
-1. Navigate to your API Management service in the [Azure portal](https://portal.azure.com/).
-2. Click on the **Pricing tier** in the menu.
-3. Select the desired service tier from the dropdown. Use the slider to specify the scale of your API Management service after the change.
-4. Click **Save**.
+1. Navigate to your API Management instance in the [Azure portal](https://portal.azure.com/).
+1. Select **Pricing tier** in the menu.
+1. Select the desired service tier from the dropdown. Use the slider to specify the number of units for your API Management service after the change.
+1. Select **Save**.
## Downtime during scaling up and down
-If you are scaling from or to the Developer tier, there will be downtime. Otherwise, there is no downtime.
+If you're scaling from or to the Developer tier, there will be downtime. Otherwise, there is no downtime.
## Compute isolation
-If your security requirements include [compute isolation](../azure-government/azure-secure-isolation-guidance.md#compute-isolation), you can use the **Isolated** pricing tier. This tier ensures the compute resources of an API Management service instance consume the entire physical host and provide the necessary level of isolation required to support, for example, US Department of Defense Impact Level 5 (IL5) workloads. To get access to the Isolated tier, please [create a support ticket](../azure-portal/supportability/how-to-create-azure-support-request.md).
-
+If your security requirements include [compute isolation](../azure-government/azure-secure-isolation-guidance.md#compute-isolation), you can use the **Isolated** pricing tier. This tier ensures the compute resources of an API Management service instance consume the entire physical host and provide the necessary level of isolation required to support, for example, US Department of Defense Impact Level 5 (IL5) workloads. To get access to the Isolated tier, [create a support request](../azure-portal/supportability/how-to-create-azure-support-request.md).
## Next steps
app-service Deploy Azure Pipelines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/deploy-azure-pipelines.md
The code examples in this section assume you are deploying an ASP.NET web app. Y
Learn more about [Azure Pipelines ecosystem support](/azure/devops/pipelines/ecosystems/ecosystems).
-# [Classic](#tab/yaml/)
+# [YAML](#tab/yaml/)
1. Sign in to your Azure DevOps organization and navigate to your project.
app-service How To Migrate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/how-to-migrate.md
Title: Use the migration feature to migrate App Service Environment v2 to App Service Environment v3
-description: Learn how to migrate your App Service Environment v2 to App Service Environment v3 using the migration feature
+ Title: Use the migration feature to migrate your App Service Environment to App Service Environment v3
+description: Learn how to migrate your App Service Environment to App Service Environment v3 using the migration feature
-Previously updated : 4/27/2022
+Last updated : 9/15/2022
zone_pivot_groups: app-service-cli-portal
-# Use the migration feature to migrate App Service Environment v2 to App Service Environment v3
+# Use the migration feature to migrate App Service Environment v1 and v2 to App Service Environment v3
-An App Service Environment v2 can be automatically migrated to an [App Service Environment v3](overview.md) using the migration feature. To learn more about the migration process and to see if your App Service Environment supports migration at this time, see the [Migration to App Service Environment v3 Overview](migrate.md).
+App Service Environment v1 and v2 can be automatically migrated to an [App Service Environment v3](overview.md) using the migration feature. To learn more about the migration process and to see if your App Service Environment supports migration at this time, see the [Migration to App Service Environment v3 Overview](migrate.md).
> [!IMPORTANT]
> It is recommended to use this feature for dev environments first before migrating any production environments to ensure there are no unexpected issues. Please provide any feedback related to this article or the feature using the buttons at the bottom of the page.
For this guide, [install the Azure CLI](/cli/azure/install-azure-cli) or use the
## 1. Get your App Service Environment ID
-Run these commands to get your App Service Environment ID and store it as an environment variable. Replace the placeholders for name and resource group with your values for the App Service Environment you want to migrate.
+Run these commands to get your App Service Environment ID and store it as an environment variable. Replace the placeholders for name and resource groups with your values for the App Service Environment you want to migrate. "ASE_RG" and "VNET_RG" will be the same if your virtual network and App Service Environment are in the same resource group.
```azurecli
ASE_NAME=<Your-App-Service-Environment-name>
-ASE_RG=<Your-Resource-Group>
+ASE_RG=<Your-ASE-Resource-Group>
+VNET_RG=<Your-VNet-Resource-Group>
ASE_ID=$(az appservice ase show --name $ASE_NAME --resource-group $ASE_RG --query id --output tsv)
```
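If the lookup fails (for example, a wrong name or resource group), the command substitution leaves `ASE_ID` empty, and later `az rest` calls will fail with confusing errors. A quick hypothetical sanity check (the ID shown is an illustrative example, not a real resource):

```shell
# Hypothetical sanity check: fail fast if the ASE lookup returned nothing.
ASE_ID="/subscriptions/11111111-1111-1111-1111-111111111111/resourceGroups/my-ase-rg/providers/Microsoft.Web/hostingEnvironments/my-ase"  # example value from az appservice ase show
if [ -z "$ASE_ID" ]; then
  echo "App Service Environment not found; check ASE_NAME and ASE_RG" >&2
  exit 1
fi
echo "Using ASE: $ASE_ID"
```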
The following command will check whether your App Service Environment is support
```azurecli
az rest --method post --uri "${ASE_ID}/migrate?api-version=2021-02-01&phase=validation"
```
-If there are no errors, your migration is supported and you can continue to the next step.
+If there are no errors, your migration is supported, and you can continue to the next step.
## 3. Generate IP addresses for your new App Service Environment v3
Run the following command to check the status of this step.
```azurecli
az rest --method get --uri "${ASE_ID}?api-version=2021-02-01" --query properties.status
```
-If it's in progress, you'll get a status of "Migrating". Once you get a status of "Ready", run the following command to get your new IPs. If you don't see the new IPs immediately, wait a few minutes and try again.
+If it's in progress, you'll get a status of "Migrating". Once you get a status of "Ready", run the following command to view your new IPs. If you don't see the new IPs immediately, wait a few minutes and try again.
```azurecli
az rest --method get --uri "${ASE_ID}/configurations/networking?api-version=2021-02-01"
```
az rest --method get --uri "${ASE_ID}/configurations/networking?api-version=2021
## 4. Update dependent resources with new IPs
-Don't move on to migration immediately after completing the previous step. Using the new IPs, update any resources and networking components to ensure your new environment functions as intended once migration is complete. It's your responsibility to make any necessary updates.
+Using the new IPs, update any of your resources or networking components to ensure your new environment functions as intended once migration is complete. It's your responsibility to make any necessary updates. Don't migrate until you've completed this step.
## 5. Delegate your App Service Environment subnet

App Service Environment v3 requires the subnet it's in to have a single delegation of `Microsoft.Web/hostingEnvironments`. Previous versions didn't require this delegation. You'll need to confirm your subnet is delegated properly and update the delegation if needed before migrating. You can update the delegation either by running the following command or by navigating to the subnet in the [Azure portal](https://portal.azure.com).

```azurecli
-az network vnet subnet update -g $ASE_RG -n <subnet-name> --vnet-name <vnet-name> --delegations Microsoft.Web/hostingEnvironments
+az network vnet subnet update --resource-group $VNET_RG --name <subnet-name> --vnet-name <vnet-name> --delegations Microsoft.Web/hostingEnvironments
```
-## 6. Migrate to App Service Environment v3
+## 6. Prepare your configurations
+
+You can make your new App Service Environment v3 zone redundant if your existing environment is in a [region that supports zone redundancy](./overview.md#regions). This can be done by setting the `zoneRedundant` property to "true". Zone redundancy is an optional configuration. This configuration can only be set during the creation of your new App Service Environment v3 and can't be removed at a later time. For more information, see [Choose your App Service Environment v3 configurations](./migrate.md#choose-your-app-service-environment-v3-configurations). If you don't want to configure zone redundancy, don't include the `zoneRedundant` parameter.
+
+If your existing App Service Environment uses a custom domain suffix, you'll need to [configure one for your new App Service Environment v3 during the migration process](./migrate.md#choose-your-app-service-environment-v3-configurations). Migration will fail if you don't configure a custom domain suffix and are using one currently. Migration will also fail if you attempt to add a custom domain suffix during migration to an environment that doesn't have one configured currently. For more information on App Service Environment v3 custom domain suffix including requirements, step-by-step instructions, and best practices, see [Configure custom domain suffix for App Service Environment](./how-to-custom-domain-suffix.md).
+
+If your migration doesn't include a custom domain suffix and you aren't enabling zone redundancy, you can move on to migration.
+
+In order to set these configurations, create a file called "parameters.json" with the following details based on your scenario. Don't include the custom domain suffix properties if this feature doesn't apply to your migration. Be sure to pay attention to the value of the `zoneRedundant` property as this configuration is irreversible after migration. Ensure the value of the `kind` property is set based on your existing App Service Environment version. Accepted values for the `kind` property are "ASEV1" and "ASEV2".
+
+If you're migrating without a custom domain suffix and are enabling zone redundancy:
+
+```json
+{
+ "type": "Microsoft.Web/hostingEnvironments",
+ "name": "sample-ase-migration",
+ "kind": "ASEV2",
+ "location": "westcentralus",
+ "properties": {
+ "zoneRedundant": true
+ }
+}
+```
+
+If you're using a user assigned managed identity for your custom domain suffix configuration and **are enabling zone redundancy**:
+
+```json
+{
+ "type": "Microsoft.Web/hostingEnvironments",
+ "name": "sample-ase-migration",
+ "kind": "ASEV2",
+ "location": "westcentralus",
+ "properties": {
+ "zoneRedundant": true,
+ "customDnsSuffixConfiguration": {
+ "dnsSuffix": "internal-contoso.com",
+ "certificateUrl": "https://contoso.vault.azure.net/secrets/myCertificate",
+ "keyVaultReferenceIdentity": "/subscriptions/00000000-0000-0000-0000-000000000000/resourcegroups/asev3-migration/providers/Microsoft.ManagedIdentity/userAssignedIdentities/ase-managed-identity"
+ }
+ }
+}
+```
+
+If you're using a system assigned managed identity for your custom domain suffix configuration and **aren't enabling zone redundancy**:
+
+```json
+{
+ "type": "Microsoft.Web/hostingEnvironments",
+ "name": "sample-ase-migration",
+ "kind": "ASEV2",
+ "location": "westcentralus",
+ "properties": {
+ "customDnsSuffixConfiguration": {
+ "dnsSuffix": "internal-contoso.com",
+ "certificateUrl": "https://contoso.vault.azure.net/secrets/myCertificate",
+ "keyVaultReferenceIdentity": "SystemAssigned"
+ }
+ }
+}
+```
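Because the `zoneRedundant` setting is irreversible after migration, it can be worth sanity-checking `parameters.json` before passing it to the migration command. The following is a minimal sketch (not part of the official guide) that writes the first sample file from above and verifies the two values migration is sensitive to; the file name matches this article, everything else is illustrative.

```shell
# Write the sample parameters.json from the scenario above.
cat > parameters.json <<'EOF'
{
  "type": "Microsoft.Web/hostingEnvironments",
  "name": "sample-ase-migration",
  "kind": "ASEV2",
  "location": "westcentralus",
  "properties": {
    "zoneRedundant": true
  }
}
EOF

# kind must match your existing environment: "ASEV1" or "ASEV2".
kind=$(python3 -c "import json; print(json.load(open('parameters.json'))['kind'])")
case "$kind" in
  ASEV1|ASEV2) echo "kind OK: $kind" ;;
  *) echo "unexpected kind: $kind" >&2; exit 1 ;;
esac

# zoneRedundant can't be changed after migration, so confirm it deliberately.
python3 -c "import json; print('zoneRedundant:', json.load(open('parameters.json'))['properties'].get('zoneRedundant', False))"
```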
+
+## 7. Migrate to App Service Environment v3
+
+Only start this step once you've completed all pre-migration actions listed previously and understand the [implications of migration](migrate.md#migrate-to-app-service-environment-v3) including what will happen during this time. This step takes up to three hours for v2 to v3 migrations and up to six hours for v1 to v3 migrations depending on environment size. During that time, there will be about one hour of application downtime. Scaling, deployments, and modifications to your existing App Service Environment will be blocked during this step.
-Only start this step once you've completed all pre-migration actions listed previously and understand the [implications of migration](migrate.md#migrate-to-app-service-environment-v3) including what will happen during this time. This step takes up to three hours and during that time there will be about one hour of application downtime. Scaling and modifications to your existing App Service Environment will be blocked during this step.
+Only include the "body" parameter in the command if you're enabling zone redundancy and/or are configuring a custom domain suffix. If neither of those configurations apply to your migration, you can remove the parameter from the command.
```azurecli
-az rest --method post --uri "${ASE_ID}/migrate?api-version=2021-02-01&phase=fullmigration"
+az rest --method post --uri "${ASE_ID}/migrate?api-version=2021-02-01&phase=fullmigration" --body @parameters.json
```

Run the following command to check the status of your migration. The status will show as "Migrating" while in progress.

```azurecli
az rest --method get --uri "${ASE_ID}?api-version=2021-02-01" --query properties.status
```
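Since a migration can run for hours, the status check above can be wrapped in a simple polling loop. This is a sketch, not an official command: `get_status` assumes `ASE_ID` is set as earlier in this article, and `wait_until_ready` takes the status command as a parameter so the loop can be exercised without a live environment; the interval is an arbitrary choice.

```shell
# Fetch the current migration status (assumes ASE_ID is set, as above).
get_status() {
  az rest --method get --uri "${ASE_ID}?api-version=2021-02-01" \
    --query properties.status --output tsv
}

# Poll the given status command until it reports "Ready".
wait_until_ready() {
  status_cmd=$1
  while true; do
    status=$("$status_cmd")
    echo "status: ${status}"
    [ "$status" = "Ready" ] && break
    sleep 300  # migration can take up to three (v2) or six (v1) hours
  done
}

# During a real migration: wait_until_ready get_status
```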
-Once you get a status of "Ready", migration is done and you have an App Service Environment v3. Your apps will now be running in your new environment.
+Once you get a status of "Ready", migration is done, and you have an App Service Environment v3. Your apps will now be running in your new environment.
Get the details of your new environment by running the following command or by navigating to the [Azure portal](https://portal.azure.com).
```azurecli
az appservice ase show --name $ASE_NAME --resource-group $ASE_RG
```
From the [Azure portal](https://portal.azure.com), navigate to the **Migration** page for the App Service Environment you'll be migrating. You can do this by clicking on the banner at the top of the **Overview** page for your App Service Environment or by clicking the **Migration** item on the left-hand side.
-![migration access points](./media/migration/portal-overview.png)
:::image type="content" source="./media/migration/portal-overview.png" alt-text="Migration access points.":::

On the migration page, the platform will validate if migration is supported for your App Service Environment. If your environment isn't supported for migration, a banner will appear at the top of the page and include an error message with a reason. See the [troubleshooting](migrate.md#troubleshooting) section for descriptions of the error messages you may see if you aren't eligible for migration. If your App Service Environment isn't supported for migration at this time or your environment is in an unhealthy or suspended state, you won't be able to use the migration feature. If your environment [won't be supported for migration with the migration feature](migrate.md#supported-scenarios) or you want to migrate to App Service Environment v3 without using the migration feature, see the [manual migration options](migration-alternatives.md).
If migration is supported for your App Service Environment, you'll be able to pr
## 2. Generate IP addresses for your new App Service Environment v3
-Under **Get new IP addresses**, confirm you understand the implications and start the process. This step will take about 15 minutes to complete. You won't be able to scale or make changes to your existing App Service Environment during this time. If after 15 minutes you don't see your new IP addresses, select refresh as shown in the sample to allow your new IP addresses to appear.
-
+Under **Get new IP addresses**, confirm you understand the implications and start the process. This step will take about 15 minutes to complete. You won't be able to scale or make changes to your existing App Service Environment during this time.
## 3. Update dependent resources with new IPs
App Service Environment v3 requires the subnet it's in to have a single delegati
:::image type="content" source="./media/migration/subnet-delegation-ux.png" alt-text="Subnet delegation using the portal.":::
-## 5. Migrate to App Service Environment v3
+## 5. Choose your configurations
+
+You can make your new App Service Environment v3 zone redundant if your existing environment is in a [region that supports zone redundancy](./overview.md#regions). Zone redundancy is an optional configuration. This configuration can only be set during the creation of your new App Service Environment v3 and can't be removed at a later time. For more information, see [Choose your App Service Environment v3 configurations](./migrate.md#choose-your-app-service-environment-v3-configurations). Select **Enabled** if you'd like to configure zone redundancy.
++
+If your environment is in a region that doesn't support zone redundancy, the checkbox will be disabled. If you need a zone redundant App Service Environment v3, use one of the manual migration options and create your new App Service Environment v3 in one of the regions that supports zone redundancy.
+
+If your existing App Service Environment uses a [custom domain suffix](./migrate.md#choose-your-app-service-environment-v3-configurations), you'll be required to configure one for your new App Service Environment v3. You'll be shown the custom domain suffix configuration options if this situation applies to you. You won't be able to migrate until you provide the required information. If you'd like to use a custom domain suffix but don't currently have one configured, you can configure one once migration is complete. For more information on App Service Environment v3 custom domain suffix including requirements, step-by-step instructions, and best practices, see [Configure custom domain suffix for App Service Environment](./how-to-custom-domain-suffix.md).
++
+After you add your custom domain suffix details, the "Migrate" button will be enabled.
-Once you've completed all of the above steps, you can start migration. Make sure you understand the [implications of migration](migrate.md#migrate-to-app-service-environment-v3) including what will happen during this time. This step takes up to three hours and during that time there will be about one hour of application downtime. Scaling and modifications to your existing App Service Environment will be blocked during this step.
-When migration is complete, you'll have an App Service Environment v3 and all of your apps will be running in your new environment. You can confirm the environment's version by checking the **Configuration** page for your App Service Environment.
+## 6. Migrate to App Service Environment v3
+
+Once you've completed all of the above steps, you can start migration. Make sure you understand the [implications of migration](migrate.md#migrate-to-app-service-environment-v3) including what will happen during this time. This step takes up to three hours for v2 to v3 migrations and up to six hours for v1 to v3 migrations depending on environment size. Scaling and modifications to your existing App Service Environment will be blocked during this step.
+
+When migration is complete, you'll have an App Service Environment v3, and all of your apps will be running in your new environment. You can confirm the environment's version by checking the **Configuration** page for your App Service Environment.
+
+If your migration included a custom domain suffix, for App Service Environment v3, the custom domain will no longer be shown in the **Essentials** section of the **Overview** page of the portal as it is for App Service Environment v1/v2. Instead, for App Service Environment v3, go to the **Custom domain suffix** page where you can confirm your custom domain suffix is configured correctly. You can also remove the configuration if you no longer need it or configure one if you didn't have one previously.
+ ::: zone-end
When migration is complete, you'll have an App Service Environment v3 and all of
> [!div class="nextstepaction"]
> [App Service Environment v3 Networking](networking.md)
+
+> [!div class="nextstepaction"]
+> [Custom domain suffix](./how-to-custom-domain-suffix.md)
app-service Migrate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/migrate.md
Title: Migrate to App Service Environment v3 by using the migration feature
description: Overview of the migration feature for migration to App Service Environment v3
Previously updated : 7/29/2022
Last updated : 9/15/2022

# Migration to App Service Environment v3 using the migration feature
-App Service can now automate migration of your App Service Environment v2 to an [App Service Environment v3](overview.md). If you want to migrate an App Service Environment v1 to an App Service Environment v3, see the [manual migration options documentation](migration-alternatives.md). App Service Environment v3 provides [advantages and feature differences](overview.md#feature-differences) over earlier versions. Make sure to review the [supported features](overview.md#feature-differences) of App Service Environment v3 before migrating to reduce the risk of an unexpected application issue.
+App Service can now automate migration of your App Service Environment v1 and v2 to an [App Service Environment v3](overview.md). App Service Environment v3 provides [advantages and feature differences](overview.md#feature-differences) over earlier versions. Make sure to review the [supported features](overview.md#feature-differences) of App Service Environment v3 before migrating to reduce the risk of an unexpected application issue.
> [!IMPORTANT]
> It is recommended to use this feature for dev environments first before migrating any production environments to ensure there are no unexpected issues. Please provide any feedback related to this article or the feature using the buttons at the bottom of the page.
App Service can now automate migration of your App Service Environment v2 to an
## Supported scenarios
-At this time, App Service Environment migrations to v3 using the migration feature support both [Internal Load Balancer (ILB)](create-ilb-ase.md) and [external (internet facing with public IP)](create-external-ase.md) App Service Environment v2 in the following regions:
+At this time, App Service Environment migrations to v3 using the migration feature are supported in the following regions:
- Australia East
- Australia Central
At this time, App Service Environment migrations to v3 using the migration featu
- West US
- West US 3
+The following App Service Environment configurations can be migrated using the migration feature. The table gives the App Service Environment v3 configuration you'll end up with when using the migration feature based on your existing App Service Environment. All supported App Service Environments can be migrated to a [zone redundant App Service Environment v3](../../availability-zones/migrate-app-service-environment.md) using the migration feature as long as the environment is [in a region that supports zone redundancy](./overview.md#regions). You can [configure zone redundancy](#choose-your-app-service-environment-v3-configurations) during the migration process.
+
+|Configuration |App Service Environment v3 Configuration |
+||--|
+|[Internal Load Balancer (ILB)](create-ilb-ase.md) App Service Environment v2 |ILB App Service Environment v3 |
+|[External (ELB/internet facing with public IP)](create-external-ase.md) App Service Environment v2 |ELB App Service Environment v3 |
+|ILB App Service Environment v2 with a custom domain suffix |ILB App Service Environment v3 with a custom domain suffix |
+|ILB App Service Environment v1 |ILB App Service Environment v3 |
+|ELB App Service Environment v1 |ELB App Service Environment v3 |
+|ILB App Service Environment v1 with a custom domain suffix |ILB App Service Environment v3 with a custom domain suffix |
+
+If you want your new App Service Environment v3 to use a custom domain suffix and you aren't using one currently, a custom domain suffix can be configured at any time once migration is complete. For more information, see [Configure custom domain suffix for App Service Environment](./how-to-custom-domain-suffix.md).
+
You can find the version of your App Service Environment by navigating to your App Service Environment in the [Azure portal](https://portal.azure.com) and selecting **Configuration** under **Settings** on the left-hand side. You can also use [Azure Resource Explorer](https://resources.azure.com/) and review the value of the `kind` property for your App Service Environment.

## Migration feature limitations
-With the current version of the migration feature, your new App Service Environment will be placed in the existing subnet that was used for your old environment. Internet facing App Service Environment can't be migrated to ILB App Service Environment v3 and vice versa.
+The following are limitations when using the migration feature:
+
+- Your new App Service Environment v3 will be placed in the existing subnet that was used for your old environment.
+- You can't change the region your App Service Environment is located in.
+- ELB App Service Environment can't be migrated to ILB App Service Environment v3 and vice versa.
+- If your existing App Service Environment uses a custom domain suffix, you'll have to configure custom domain suffix for your App Service Environment v3 during the migration process.
+ - If you no longer want to use a custom domain suffix, you can remove it once the migration is complete.
-Note that App Service Environment v3 doesn't currently support the following features that you may be using with your current App Service Environment. If you require any of these features, don't migrate until they're supported.
+App Service Environment v3 doesn't currently support the following features that you may be using with your current App Service Environment. If you require any of these features, don't migrate until they're supported.
-- Sending SMTP traffic. You can still have email triggered alerts but your app can't send outbound traffic on port 25.
- Monitoring your traffic with Network Watcher or NSG Flow.
- Configuring an IP-based TLS/SSL binding with your apps.
-The following scenarios aren't supported in this version of the feature:
+The following scenarios aren't supported by the migration feature. See the [manual migration options](migration-alternatives.md) if your App Service Environment falls into one of these categories.
-- App Service Environment v2 -> Zone Redundant App Service Environment v3
-- App Service Environment v1
-- App Service Environment v1 -> Zone Redundant App Service Environment v3
-- ILB App Service Environment v2 with a custom domain suffix
-- ILB App Service Environment v1 with a custom domain suffix
-- Internet facing App Service Environment v2 with IP SSL addresses
-- Internet facing App Service Environment v1 with IP SSL addresses
+- App Service Environment v1 in a [Classic VNet](/previous-versions/azure/virtual-network/create-virtual-network-classic)
+- ELB App Service Environment v2 with IP SSL addresses
+- ELB App Service Environment v1 with IP SSL addresses
- [Zone pinned](zone-redundancy.md) App Service Environment v2
- App Service Environment in a region not listed in the supported regions
-The migration feature doesn't plan on supporting App Service Environment v1 within a Classic VNet. See the [manual migration options](migration-alternatives.md) if your App Service Environment falls into this category.
-
The App Service platform will review your App Service Environment to confirm migration support. If your scenario doesn't pass all validation checks, you won't be able to migrate at this time using the migration feature. If your environment is in an unhealthy or suspended state, you won't be able to migrate until you make the needed updates.

### Troubleshooting
If your App Service Environment doesn't pass the validation checks or you try to
|Migrate can only be called on an ASE in ARM VNET and this ASE is in Classic VNET. |App Service Environments in Classic VNets can't migrate using the migration feature. |Migrate using one of the [manual migration options](migration-alternatives.md). |
|ASEv3 Migration is not yet ready. |The underlying infrastructure isn't ready to support App Service Environment v3. |Migrate using one of the [manual migration options](migration-alternatives.md) if you want to migrate immediately. Otherwise, wait for the migration feature to be available in your region. |
|Migration cannot be called on this ASE, please contact support for help migrating. |Support will need to be engaged for migrating this App Service Environment. This is potentially due to custom settings used by this environment. |Engage support to resolve your issue. |
-|Migrate cannot be called on Zone Pinned ASEs. |App Service Environment v2s that are zone pinned can't be migrated using the migration feature at this time. |Migrate using one of the [manual migration options](migration-alternatives.md) if you want to migrate immediately. Otherwise, wait for the migration feature to support this App Service Environment configuration. |
-|Migrate cannot be called if IP SSL is enabled on any of the sites|App Service Environments that have sites with IP SSL enabled can't be migrated using the migration feature at this time. |Migrate using one of the [manual migration options](migration-alternatives.md) if you want to migrate immediately. Otherwise, wait for the migration feature to support this App Service Environment configuration. |
-|Migrate is not available for this kind|App Service Environment v1 can't be migrated using the migration feature at this time. |Migrate using one of the [manual migration options](migration-alternatives.md) if you want to migrate immediately. Otherwise, wait for the migration feature to support this App Service Environment configuration. |
-|Full migration cannot be called before IP addresses are generated|You'll see this error if you attempt to migrate before finishing the pre-migration steps. |Ensure you've completed all pre-migration steps before you attempt to migrate. See the [step-by-step guide for migrating](how-to-migrate.md). |
-|Migration to ASEv3 is not allowed for this ASE|You won't be able to migrate using the migration feature. |Migrate using one of the [manual migration options](migration-alternatives.md). |
+|Migrate cannot be called on Zone Pinned ASEs. |App Service Environment v2 that is zone pinned can't be migrated using the migration feature at this time. |Migrate using one of the [manual migration options](migration-alternatives.md) if you want to migrate immediately. |
+|Migrate cannot be called if IP SSL is enabled on any of the sites.|App Service Environments that have sites with IP SSL enabled can't be migrated using the migration feature at this time. |Migrate using one of the [manual migration options](migration-alternatives.md) if you want to migrate immediately. |
+|Full migration cannot be called before IP addresses are generated. |You'll see this error if you attempt to migrate before finishing the pre-migration steps. |Ensure you've completed all pre-migration steps before you attempt to migrate. See the [step-by-step guide for migrating](how-to-migrate.md). |
+|Migration to ASEv3 is not allowed for this ASE. |You won't be able to migrate using the migration feature. |Migrate using one of the [manual migration options](migration-alternatives.md). |
|Subscription has too many App Service Environments. Please remove some before trying to create more.|The App Service Environment [quota for your subscription](../../azure-resource-manager/management/azure-subscription-service-limits.md#app-service-limits) has been met. |Remove unneeded environments or contact support to review your options. |
-|`<ZoneRedundant><DedicatedHosts><ASEv3/ASE>` is not available in this location|You'll see this error if you're trying to migrate an App Service Environment in a region that doesn't support one of your requested features. |Migrate using one of the [manual migration options](migration-alternatives.md) if you want to migrate immediately. Otherwise, wait for the migration feature to support this App Service Environment configuration. |
+|`<ZoneRedundant><DedicatedHosts><ASEv3/ASE>` is not available in this location. |You'll see this error if you're trying to migrate an App Service Environment in a region that doesn't support one of your requested features. |Migrate using one of the [manual migration options](migration-alternatives.md) if you want to migrate immediately. Otherwise, wait for the migration feature to support this App Service Environment configuration. |
|Migrate cannot be called on this ASE until the active upgrade has finished. |App Service Environments can't be migrated during platform upgrades. You can set your [upgrade preference](using-an-ase.md#upgrade-preference) from the Azure portal. |Wait until the upgrade finishes and then migrate. |
+|App Service Environment management operation in progress. |Your App Service Environment is undergoing a management operation. These operations can include activities such as deployments or upgrades. Migration is blocked until these operations are complete. |You'll be able to migrate once these operations are complete. |
## Overview of the migration process using the migration feature
Migration consists of a series of steps that must be followed in order. Key poin
### Generate IP addresses for your new App Service Environment v3
-The platform will create the [new inbound IP (if you're migrating an internet facing App Service Environment) and the new outbound IP](networking.md#addresses) addresses. While these IPs are getting created, activity with your existing App Service Environment won't be interrupted, however, you won't be able to scale or make changes to your existing environment. This process will take about 15 minutes to complete.
+The platform will create the [new inbound IP (if you're migrating an ELB App Service Environment) and the new outbound IP](networking.md#addresses) addresses. While these IPs are getting created, activity with your existing App Service Environment won't be interrupted; however, you won't be able to scale or make changes to your existing environment. This process will take about 15 minutes to complete.
When completed, you'll be given the new IPs that will be used by your future App Service Environment v3. These new IPs have no effect on your existing environment. The IPs used by your existing environment will continue to be used up until your existing environment is shut down during the migration step.

### Update dependent resources with new IPs
-Once the new IPs are created, you'll have the new default outbound to the internet public addresses so you can adjust any external firewalls, DNS routing, network security groups, and so on, in preparation for the migration. For public internet facing App Service Environment, you'll also have the new inbound IP address that you can use to set up new endpoints with services like [Traffic Manager](../../traffic-manager/traffic-manager-overview.md) or [Azure Front Door](../../frontdoor/front-door-overview.md). **It's your responsibility to update any and all resources that will be impacted by the IP address change associated with the new App Service Environment v3. Don't move on to the next step until you've made all required updates.**
+Once the new IPs are created, you'll have the new default outbound to the internet public addresses so you can adjust any external firewalls, DNS routing, network security groups, and any other resources that rely on these IPs, in preparation for the migration. For ELB App Service Environment, you'll also have the new inbound IP address that you can use to set up new endpoints with services like [Traffic Manager](../../traffic-manager/traffic-manager-overview.md) or [Azure Front Door](../../frontdoor/front-door-overview.md). **It's your responsibility to update any and all resources that will be impacted by the IP address change associated with the new App Service Environment v3. Don't move on to the next step until you've made all required updates.**
### Delegate your App Service Environment subnet

App Service Environment v3 requires the subnet it's in to have a single delegation of `Microsoft.Web/hostingEnvironments`. Migration won't succeed if the App Service Environment's subnet isn't delegated or it's delegated to a different resource.
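The delegation itself can be set with the Azure CLI. The following is a sketch with hypothetical resource names (`my-ase-rg`, `my-ase-vnet`, `my-ase-subnet`); the helper prints the `az` command when `DRY_RUN=1` so it can be reviewed before running for real.

```shell
# Delegate the App Service Environment's subnet to Microsoft.Web/hostingEnvironments.
# Args: resource group, VNet name, subnet name (all hypothetical here).
delegate_subnet() {
  set -- az network vnet subnet update \
    --resource-group "$1" --vnet-name "$2" --name "$3" \
    --delegations Microsoft.Web/hostingEnvironments
  if [ "${DRY_RUN:-0}" = "1" ]; then
    echo "$@"   # show the command instead of running it
  else
    "$@"
  fi
}

DRY_RUN=1 delegate_subnet my-ase-rg my-ase-vnet my-ase-subnet
```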
+### Choose your App Service Environment v3 configurations
+
+Your App Service Environment v3 can be deployed across availability zones in the regions that support it. This architecture is known as [zone redundancy](../../availability-zones/migrate-app-service-environment.md). Zone redundancy can only be configured during App Service Environment creation. If you want your new App Service Environment v3 to be zone redundant, enable the configuration during the migration process. Any App Service Environment that is using the migration feature to migrate can be configured as zone redundant as long as you're using a [region that supports zone redundancy for App Service Environment v3](./overview.md#regions). If your existing environment is in a region that doesn't support zone redundancy, the configuration option will be disabled and you won't be able to configure it. The migration feature doesn't support changing regions. If you'd like to use a different region, use one of the [manual migration options](migration-alternatives.md).
+
+> [!NOTE]
+> Enabling zone redundancy can lead to additional charges. Review the [zone redundancy pricing model](../../availability-zones/migrate-app-service-environment.md#pricing) for more information.
+>
+
+If your existing App Service Environment uses a custom domain suffix, you'll be prompted to configure a custom domain suffix for your new App Service Environment v3. You'll need to provide the custom domain name, managed identity, and certificate. For more information on App Service Environment v3 custom domain suffix including requirements, step-by-step instructions, and best practices, see [Configure custom domain suffix for App Service Environment](./how-to-custom-domain-suffix.md). You must configure a custom domain suffix for your new environment even if you no longer want to use it. Once migration is complete, you can remove the custom domain suffix configuration if needed.
+
+If your migration includes a custom domain suffix, for App Service Environment v3, the custom domain will no longer be shown in the **Essentials** section of the **Overview** page of the portal as it is for App Service Environment v1/v2. Instead, for App Service Environment v3, go to the **Custom domain suffix** page where you can confirm your custom domain suffix is configured correctly.
+
### Migrate to App Service Environment v3
-After updating all dependent resources with your new IPs and properly delegating your subnet, you should continue with migration as soon as possible.
+After completing the previous steps, you should continue with migration as soon as possible.
-During migration, which requires up to a three hour service window, the following events will occur:
+During migration, which requires a service window of up to three hours for v2 to v3 migrations or up to six hours for v1 to v3 migrations depending on environment size, scaling and environment configurations are blocked, and the following events will occur:
- The existing App Service Environment is shut down and replaced by the new App Service Environment v3.
-- All App Service plans in the App Service Environment are converted from Isolated to Isolated v2.
-- All of the apps that are on your App Service Environment are temporarily down. You should expect about one hour of downtime during this period.
+- All App Service plans in the App Service Environment are converted from the Isolated to Isolated v2 SKU.
+- All of the apps that are on your App Service Environment are temporarily down. **You should expect about one hour of downtime during this period**.
- If you can't support downtime, see [migration-alternatives](migration-alternatives.md#guidance-for-manual-migration).
-- The public addresses that are used by the App Service Environment will change to the IPs identified during the previous step.
+- The public addresses that are used by the App Service Environment will change to the IPs generated during the IP generation step.
-As in the IP generation step, you won't be able to scale or modify your App Service Environment or deploy apps to it during this process. When migration is complete, the apps that were on the old App Service Environment will be running on the new App Service Environment v3.
+As in the IP generation step, you won't be able to scale, modify your App Service Environment, or deploy apps to it during this process. When migration is complete, the apps that were on the old App Service Environment will be running on the new App Service Environment v3.
> [!NOTE]
> Due to the conversion of App Service plans from Isolated to Isolated v2, your apps may be over-provisioned after the migration since the Isolated v2 tier has more memory and CPU per corresponding instance size. You'll have the opportunity to [scale your environment](../manage-scale-up.md) as needed once migration is complete. For more information, review the [SKU details](https://azure.microsoft.com/pricing/details/app-service/windows/).
There's no cost to migrate your App Service Environment. You'll stop being charg
- **What if migrating my App Service Environment is not currently supported?**
  You won't be able to migrate using the migration feature at this time. If you have an unsupported environment and want to migrate immediately, see the [manual migration options](migration-alternatives.md). This doc will be updated as additional regions and supported scenarios become available.
- **Will I experience downtime during the migration?**
- Yes, you should expect about one hour of downtime during the three hour service window during the migration step so plan accordingly. If downtime isn't an option for you, see the [manual migration options](migration-alternatives.md).
+ Yes, you should expect about one hour of downtime during the three to six hour service window during the migration step, so plan accordingly. If downtime isn't an option for you, see the [manual migration options](migration-alternatives.md).
- **Will I need to do anything to my apps after the migration to get them running on the new App Service Environment?**
  No, all of your apps running on the old environment will be automatically migrated to the new environment and run like before. No user input is needed.
- **What if my App Service Environment has a custom domain suffix?**
- You won't be able migrate using the migration feature at this time. If you have an unsupported environment and want to migrate immediately, see the [manual migration options](migration-alternatives.md).
+ The migration feature supports this [migration scenario](#supported-scenarios). You can migrate using a manual method if you don't want to use the migration feature. You can configure your [custom domain suffix](./how-to-custom-domain-suffix.md) when creating your App Service Environment v3 or any time after.
- **What if my App Service Environment is zone pinned?**
- Zone pinned App Service Environment is currently not a supported scenario for migration using the migration feature. When supported, zone pinned App Service Environments will be migrated to zone redundant App Service Environment v3.
+ Zone pinned App Service Environment is currently not a supported scenario for migration using the migration feature. App Service Environment v3 doesn't support zone pinning. To migrate to App Service Environment v3, see the [manual migration options](migration-alternatives.md).
- **What properties of my App Service Environment will change?**
- You'll now be on App Service Environment v3 so be sure to review the [features and feature differences](overview.md#feature-differences) compared to previous versions. For ILB App Service Environment, you'll keep the same ILB IP address. For internet facing App Service Environment, the public IP address and the outbound IP address will change. Note for internet facing App Service Environment, previously there was a single IP for both inbound and outbound. For App Service Environment v3, they're separate. For more information, see [App Service Environment v3 networking](networking.md#addresses).
+ You'll now be on App Service Environment v3 so be sure to review the [features and feature differences](overview.md#feature-differences) compared to previous versions. For ILB App Service Environment, you'll keep the same ILB IP address. For internet facing App Service Environment, the public IP address and the outbound IP address will change. Note for ELB App Service Environment, previously there was a single IP for both inbound and outbound. For App Service Environment v3, they're separate. For more information, see [App Service Environment v3 networking](networking.md#addresses).
- **What happens if migration fails or there is an unexpected issue during the migration?** If there's an unexpected issue, support teams will be on hand. It's recommended to migrate dev environments before touching any production environments. - **What happens to my old App Service Environment?**
- If you decide to migrate an App Service Environment, the old environment gets shut down and deleted and all of your apps are migrated to a new environment. Your old environment will no longer be accessible. A rollback to the old environment will not be possible.
+ If you decide to migrate an App Service Environment using the migration feature, the old environment gets shut down, deleted, and all of your apps are migrated to a new environment. Your old environment will no longer be accessible. A rollback to the old environment won't be possible.
- **What will happen to my App Service Environment v1/v2 resources after 31 August 2024?** After 31 August 2024, if you haven't migrated to App Service Environment v3, your App Service Environment v1/v2s and the apps deployed in them will no longer be available. App Service Environment v1/v2 is hosted on App Service scale units running on [Cloud Services (classic)](../../cloud-services/cloud-services-choose-me.md) architecture that will be [retired on 31 August 2024](https://azure.microsoft.com/updates/cloud-services-retirement-announcement/). Because of this, [App Service Environment v1/v2 will no longer be available after that date](https://azure.microsoft.com/updates/app-service-environment-v1-and-v2-retirement-announcement/). Migrate to App Service Environment v3 to keep your apps running or save or back up any resources or data that you need to maintain. ## Next steps > [!div class="nextstepaction"]
-> [Migrate App Service Environment v2 to App Service Environment v3](how-to-migrate.md)
+> [Migrate your App Service Environment to App Service Environment v3](how-to-migrate.md)
> [!div class="nextstepaction"] > [Manually migrate to App Service Environment v3](migration-alternatives.md)
There's no cost to migrate your App Service Environment. You'll stop being charg
> [!div class="nextstepaction"] > [Using an App Service Environment v3](using.md)+
+> [!div class="nextstepaction"]
+> [Custom domain suffix](./how-to-custom-domain-suffix.md)
app-service Migration Alternatives https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/migration-alternatives.md
Title: Migrate to App Service Environment v3
description: How to migrate your applications to App Service Environment v3 Previously updated : 5/4/2022 Last updated : 9/15/2022 # Migrate to App Service Environment v3
App Service Environment v3 uses Isolated v2 App Service plans that are priced an
The [back up and restore](../manage-backup.md) feature allows you to keep your app configuration, file content, and database connected to your app when migrating to your new environment. Make sure you review the [details](../manage-backup.md#automatic-vs-custom-backups) of this feature.
-The step-by-step instructions in the current documentation for [backup and restore](../manage-backup.md) should be sufficient to allow you to use this feature. When restoring, the **Storage** option lets you select any backup ZIP file from any existing Azure Storage account container in your subscription. A sample of a restore configuration is given in the following screenshot.
+The step-by-step instructions in the current documentation for [backup and restore](../manage-backup.md) should be sufficient to allow you to use this feature. You can select a backup and use that to restore the app to an App Service in your App Service Environment v3.
-![back up and restore sample](./media/migration/back-up-restore-sample.png)
|Benefits |Limitations | |||
To clone an app using the [Azure portal](https://www.portal.azure.com), navigate
1. You can use an existing Windows **App Service plan** from your new environment if you created one already, or create a new one. The available Windows App Service plans in your new App Service Environment v3, if any, will be listed in the dropdown. 1. Modify **SKU and size** as needed using one of the Isolated v2 options if creating a new App Service plan. Note App Service Environment v3 uses Isolated v2 plans, which have more memory and CPU per corresponding instance size compared to the Isolated plan. For more information, see [App Service Environment v3 SKU details](overview.md#pricing).
-![clone sample](./media/migration/portal-clone-sample.png)
|Benefits |Limitations | |||
To clone an app using the [Azure portal](https://www.portal.azure.com), navigate
## Manually create your apps on an App Service Environment v3
-If the above features don't support your apps or you're looking to take a more manual route, you have the option of deploying your apps following the same process you used for your existing App Service Environment. You don't need to make updates when you deploy your apps to your new environment unless you want to make changes or take advantage of App Service Environment v3's dedicated features.
+If the above features don't support your apps or you're looking to take a more manual route, you have the option of deploying your apps following the same process you used for your existing App Service Environment. You don't need to make updates when you deploy your apps to your new environment.
-You can export [Azure Resource Manager (ARM) templates](../../azure-resource-manager/templates/overview.md) of your existing apps, App Service plans, and any other supported resources and deploy them in or with your new environment. To export a template for just your app, head over to your App Service and go to **Export template** under **Automation**.
+You can export [Azure Resource Manager (ARM) templates](../../azure-resource-manager/templates/overview.md) of your existing apps, App Service plans, and any other supported resources and deploy them in or with your new environment. To export a template for just your app, navigate to your App Service and go to **Export template** under **Automation**.
-![export from toc](./media/migration/export-toc.png)
You can also export templates for multiple resources directly from your resource group by going to your resource group, selecting the resources you want a template for, and then selecting **Export template**.
-![export template sample](./media/migration/export-template-sample.png)
The following initial changes to your Azure Resource Manager templates are required to get your apps onto your App Service Environment v3: -- Update SKU parameters for App Service plan to an Isolated v2 plan as shown below
+- Update SKU parameters for App Service plan to an Isolated v2 plan:
```json "type": "Microsoft.Web/serverfarms",
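For illustration, a fuller sketch of an App Service plan resource updated to an Isolated v2 SKU might look like the following. This is a hedged example, not the article's exact template: the resource names, the `apiVersion`, and the `I1v2` size are placeholders to adapt to your own template.

```json
{
  "type": "Microsoft.Web/serverfarms",
  "apiVersion": "2022-03-01",
  "name": "my-asev3-plan",
  "location": "[resourceGroup().location]",
  "sku": {
    "name": "I1v2",
    "tier": "IsolatedV2"
  },
  "properties": {
    "hostingEnvironmentProfile": {
      "id": "[resourceId('Microsoft.Web/hostingEnvironments', 'my-asev3')]"
    }
  }
}
```

The `hostingEnvironmentProfile` ties the plan to your App Service Environment v3; the other Isolated v2 sizes (`I2v2`, `I3v2`) follow the same pattern.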
Once your migration and any testing with your new environment is complete, delet
- **Do I need to change anything about my apps to get them to run on App Service Environment v3?** No, apps that run on App Service Environment v1 and v2 shouldn't need any modifications to run on App Service Environment v3. - **What if my App Service Environment has a custom domain suffix?**
- The migration feature doesn't support migration of App Service Environments with custom domain suffixes at this time. You won't be able to migrate until it's supported.
+ The migration feature supports this [migration scenario](./migrate.md#supported-scenarios). You can migrate using a manual method if you don't want to use the migration feature. You can configure your [custom domain suffix](./how-to-custom-domain-suffix.md) when creating your App Service Environment v3 or any time after that.
- **What if my App Service Environment is zone pinned?**
- Zone pinning isn't a supported feature on App Service Environment v3. Use [zone redundancy](overview-zone-redundancy.md) instead.
+ Zone pinning isn't a supported feature on App Service Environment v3.
- **What properties of my App Service Environment will change?** You'll now be on App Service Environment v3 so be sure to review the [features and feature differences](overview.md#feature-differences) compared to previous versions. For ILB App Service Environment, you'll keep the same ILB IP address. For internet facing App Service Environment, the public IP address and the outbound IP address will change. Note for internet facing App Service Environment, previously there was a single IP for both inbound and outbound. For App Service Environment v3, they're separate. For more information, see [App Service Environment v3 networking](networking.md#addresses). - **What will happen to my App Service Environment v1/v2 resources after 31 August 2024?**
Once your migration and any testing with your new environment is complete, delet
> [Integrate your ILB App Service Environment with the Azure Application Gateway](integrate-with-application-gateway.md) > [!div class="nextstepaction"]
-> [Migrate to App Service Environment v3 by using the migration feature](migrate.md)
+> [Migrate to App Service Environment v3 using the migration feature](migrate.md)
+
+> [!div class="nextstepaction"]
+> [Custom domain suffix](./how-to-custom-domain-suffix.md)
app-service Manage Backup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/manage-backup.md
In [Azure App Service](overview.md), you can easily restore app backups. You can also make on-demand custom backups or configure scheduled custom backups. You can restore a backup by overwriting an existing app, or by restoring to a new app or slot. This article shows you how to restore a backup and make custom backups.
-Back up and restore **Standard**, **Premium**, **Isolated**. For more information about scaling your App Service plan to use a higher tier, see [Scale up an app in Azure](manage-scale-up.md).
+Backup and restore are supported in **Basic**, **Standard**, **Premium**, and **Isolated** tiers. For **Basic** tier, only the production slot can be backed up and restored. For more information about scaling your App Service plan to use a higher tier, see [Scale up an app in Azure](manage-scale-up.md).
> [!NOTE]
-> Support for custom and automatic backups in **Basic** tier (production slot only) and in App Service environments (ASE) V2 and V3 is in preview. For App Service environments:
+> Support in App Service environments (ASE) V2 and V3 is in preview. For App Service environments:
> > - Backups can be restored to a target app within the ASE itself, not in another ASE. > - Backups can be restored to a target app in another App Service plan in the ASE.
There are two types of backups in App Service. Automatic backups made for your a
| Linked database | Not backed up. | The following linked databases can be backed up: [SQL Database](/azure/azure-sql/database/), [Azure Database for MySQL](../mysql/index.yml), [Azure Database for PostgreSQL](../postgresql/index.yml), [MySQL in-app](https://azure.microsoft.com/blog/mysql-in-app-preview-app-service/). | | [Storage account](../storage/index.yml) required | No. | Yes. | | Backup frequency | Hourly, not configurable. | Configurable. |
-| Retention | 30 days, not configurable. | 0-30 days or indefinite. |
+| Retention | 30 days, not configurable. <br>- Days 1-3: hourly backups retained.<br>- Days 4-14: every third hourly backup retained.<br>- Days 15-30: every sixth hourly backup retained. | 0-30 days or indefinite. |
| Downloadable | No. | Yes, as Azure Storage blobs. | | Partial backups | Not supported. | Supported. |
app-service Manage Custom Dns Buy Domain https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/manage-custom-dns-buy-domain.md
To test the custom domain, navigate to it in the browser.
## Renew the domain
-The App Service domain you bought is valid for one year from the time of purchase. By default, the domain is configured to renew automatically by charging your payment method for the next year. You can manually renew your domain name.
+The App Service domain you bought is valid for one year from the time of purchase. You can configure your domain to renew automatically, which charges your payment method when the domain renews the following year. You can also manually renew your domain name.
-If you want to turn off automatic renewal, or if you want to manually renew your domain, follow the steps here.
+If you want to configure automatic renewal, or if you want to manually renew your domain, follow the steps here.
1. In the search bar, search for and select **App Service Domains**.
If you want to turn off automatic renewal, or if you want to manually renew your
1. In the **App Service Domains** section, select the domain you want to configure.
-1. From the left navigation of the domain, select **Domain renewal**. To stop renewing your domain automatically, select **Off**. The setting takes effect immediately.
+1. From the left navigation of the domain, select **Domain renewal**. To start renewing your domain automatically, select **On**; otherwise, select **Off**. The setting takes effect immediately. If automatic renewal is enabled, Azure attempts to bill you for the domain name renewal on the day after your domain expiration date.
![Screenshot that shows the option to automatically renew your domain.](./media/custom-dns-web-site-buydomains-web-app/dncmntask-cname-buydomains-autorenew.png)
If you want to turn off automatic renewal, or if you want to manually renew your
> When navigating away from the page, disregard the "Your unsaved edits will be discarded" error by clicking **OK**. >
-To manually renew your domain, select **Renew domain**. However, this button is not active until [90 days before the domain's expiration](#when-domain-expires).
+To manually renew your domain, select **Renew domain**. However, this button is not active until 90 days before the domain's expiration date.
-If your domain renewal is successful, you receive an email notification within 24 hours.
-
-## When domain expires
-
-Azure deals with expiring or expired App Service domains as follows:
-
-* If automatic renewal is disabled: 90 days before domain expiration, a renewal notification email is sent to you and the **Renew domain** button is activated in the portal.
-* If automatic renewal is enabled: On the day after your domain expiration date, Azure attempts to bill you for the domain name renewal.
-* If an error occurs during automatic renewal (for example, your card on file is expired), or if automatic renewal is disabled and you allow the domain to expire, Azure notifies you of the domain expiration and parks your domain name. You can [manually renew](#renew-the-domain) your domain.
-* On the 4th and 12th days day after expiration, Azure sends you additional notification emails. You can [manually renew](#renew-the-domain) your domain. On the 5th day after expiration, DNS resolution stops for the expired domain.
-* On the 19th day after expiration, your domain remains on hold but becomes subject to a redemption fee. You can call customer support to renew your domain name, subject to any applicable renewal and redemption fees.
-* On the 25th day after expiration, Azure puts your domain up for auction with a domain name industry auction service. You can call customer support to renew your domain name, subject to any applicable renewal and redemption fees.
-* On the 30th day after expiration, you're no longer able to redeem your domain.
+If your domain renewal is successful, you receive an email notification within 24 hours.
<a name="custom"></a>
availability-zones Migrate Workload Aks Mysql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/availability-zones/migrate-workload-aks-mysql.md
+
+ Title: Migrate Azure Kubernetes Service and MySQL Flexible Server workloads to availability zone support
+description: Learn how to migrate Azure Kubernetes Service and MySQL Flexible Server workloads to availability zone support.
+++ Last updated : 08/29/2022++++
+
+# Migrate Azure Kubernetes Service (AKS) and MySQL Flexible Server workloads to availability zone support
+
+This guide describes how to migrate an Azure Kubernetes Service and MySQL Flexible Server workload to complete availability zone support across all dependent services. For a complete list of all workload dependencies, see [Workload service dependencies](#workload-service-dependencies).
+
+Availability zone support for this workload must be enabled during the creation of your AKS cluster or MySQL Flexible Server. If you want availability zone support for an existing AKS cluster and MySQL Flexible Server, you'll need to redeploy those resources.
+
+This migration guidance focuses mainly on the infrastructure and availability considerations of running the following architecture on Azure:
++++
+## Workload service dependencies
+
+To provide full workload support for availability zones, each service dependency in the workload must support availability zones.
+
+There are two types of availability zone support: *zonal* and *zone-redundant*.
+
+The AKS and MySQL workload architecture consists of the following component dependencies:
+
+### Azure Kubernetes Service (AKS)
+
+- *Zonal*: The system node pool and user node pools are zonal when you pre-select the zones in which the node pools are deployed during creation time. We recommend that you pre-select all three zones for better resiliency. More user node pools that support availability zones can be added to an existing AKS cluster by supplying a value for the `zones` parameter.
+
+- *Zone-redundant*: Kubernetes control plane components such as *etcd*, *API server*, *Scheduler*, and *Controller Manager* are automatically replicated or distributed across zones.
+
+ >[!NOTE]
+ >To enable zone-redundancy of the AKS cluster control plane components, you must define your default system node pool with zones when you create an AKS cluster. Adding more zonal node pools to an existing non-zonal AKS cluster won't make the AKS cluster zone-redundant, because that action doesn't distribute the control plane components across zones after-the-fact.
+
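As a sketch of the zonal node pool configuration described above, an ARM template might declare the system node pool's zones roughly as follows. This example is abridged and illustrative: the names, `apiVersion`, and VM size are placeholders, and a real `managedClusters` resource requires additional properties (such as `dnsPrefix` and an identity) that are omitted here.

```json
{
  "type": "Microsoft.ContainerService/managedClusters",
  "apiVersion": "2022-07-01",
  "name": "my-aks-cluster",
  "location": "[resourceGroup().location]",
  "properties": {
    "agentPoolProfiles": [
      {
        "name": "systempool",
        "mode": "System",
        "count": 3,
        "vmSize": "Standard_DS2_v2",
        "availabilityZones": [ "1", "2", "3" ]
      }
    ]
  }
}
```

Setting `availabilityZones` on the default system node pool at creation time is what distributes the control plane components across zones, per the note above.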
+### Azure Database for MySQL Flexible Server
+
+- *Zonal*: The zonal availability mode means that a standby server is always available within the same zone as the primary server. While this option reduces failover time and network latency, it's less resilient because a single zone outage impacts both the primary and standby servers.
+
+- *Zone-redundant*: The zone-redundant availability mode means that a standby server is always available within another zone in the same region as the primary server. Two zones will be enabled for zone redundancy for the primary and standby servers. We recommend this configuration for better resiliency.
++
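A zone-redundant server of this kind might be declared roughly as follows in an ARM template. This is a hedged sketch assuming the `Microsoft.DBforMySQL/flexibleServers` schema; the server name, zones, `apiVersion`, and property names should be checked against your template's schema version.

```json
{
  "type": "Microsoft.DBforMySQL/flexibleServers",
  "apiVersion": "2021-05-01",
  "name": "my-mysql-flex",
  "location": "[resourceGroup().location]",
  "properties": {
    "availabilityZone": "1",
    "highAvailability": {
      "mode": "ZoneRedundant",
      "standbyAvailabilityZone": "2"
    }
  }
}
```

Placing the primary in zone 1 and the standby in zone 2 mirrors the co-location guidance with zonal AKS node pools discussed later in this article.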
+### Azure Standard Load Balancer or Azure Application Gateway
+
+#### Standard Load Balancer
+To understand considerations related to Standard Load Balancer resources, see [Load Balancer and Availability Zones](../load-balancer/load-balancer-standard-availability-zones.md).
+
+- *Zone-redundant*: Choosing zone-redundancy is the recommended way to configure your Frontend IP with your existing Load Balancer. The zone-redundant front-end corresponds with the AKS cluster back-end pool, which is distributed across multiple zones.
+
+- *Zonal*: If you're pinning your node pools to specific zones such as zone 1 and 2, you can pre-select zone 1 and 2 for your Frontend IP in the existing Load Balancer. The reason why you may want to pin your node pools to specific zones could be due to the availability of specialized VM SKU series such as M-series.
+
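One common way to get a zone-redundant frontend is to attach a Standard SKU public IP that spans all three zones. The following ARM sketch is illustrative (names and `apiVersion` are placeholders):

```json
{
  "type": "Microsoft.Network/publicIPAddresses",
  "apiVersion": "2021-05-01",
  "name": "my-lb-frontend-ip",
  "location": "eastus",
  "sku": { "name": "Standard" },
  "zones": [ "1", "2", "3" ],
  "properties": {
    "publicIPAllocationMethod": "Static"
  }
}
```

For the zonal option, restrict the `zones` array to the zones your node pools are pinned to, such as `[ "1", "2" ]`.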
+#### Azure Application Gateway
+
+Using the Application Gateway Ingress Controller add-on with your AKS cluster is supported only on Application Gateway v2 SKUs (Standard and WAF). To understand further considerations related to Azure Application Gateway, see [Scaling Application Gateway v2 and WAF v2](../application-gateway/application-gateway-autoscaling-zone-redundant.md).
+
+*Zonal*: To use the benefits of availability zones, we recommend that the Application Gateway resource be created in multiple zones, such as zone 1, 2, and 3. Select all three zones for the best intra-region resiliency. However, to correspond to your backend node pools, you may pin your node pools to specific zones by pre-selecting zone 1 and 2 during the creation of your Application Gateway resource. The reason why you may want to pin your node pools to specific zones could be the availability of specialized VM SKU series such as `M-series`.
+
+#### Zone Redundant Storage (ZRS)
+
+- We recommend that your AKS cluster is configured with managed ZRS disks because they're zone-redundant resources. Volumes can be scheduled on all zones.
+
+- Kubernetes is aware of Azure availability zones since version 1.12. You can deploy a `PersistentVolumeClaim` object referencing an Azure Managed Disk in a multi-zone AKS cluster. Kubernetes will take care of scheduling any pod that claims this PVC in the correct availability zone.
+
+- For Azure Database for MySQL, we recommend that the data and log files are hosted in zone-redundant storage (ZRS). These files are replicated to the standby server via the storage-level replication available with ZRS.
+
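One way to sketch ZRS-backed volumes, assuming the Azure Disk CSI driver that ships with recent AKS versions, is a custom storage class using the `Premium_ZRS` disk SKU and a claim against it. The manifest is shown in JSON form for consistency with the other examples here; the names and size are illustrative.

```json
{
  "apiVersion": "v1",
  "kind": "List",
  "items": [
    {
      "apiVersion": "storage.k8s.io/v1",
      "kind": "StorageClass",
      "metadata": { "name": "managed-zrs" },
      "provisioner": "disk.csi.azure.com",
      "parameters": { "skuName": "Premium_ZRS" },
      "reclaimPolicy": "Delete",
      "volumeBindingMode": "WaitForFirstConsumer"
    },
    {
      "apiVersion": "v1",
      "kind": "PersistentVolumeClaim",
      "metadata": { "name": "myapp-data" },
      "spec": {
        "accessModes": [ "ReadWriteOnce" ],
        "storageClassName": "managed-zrs",
        "resources": { "requests": { "storage": "128Gi" } }
      }
    }
  ]
}
```

Because the disk is zone-redundant, Kubernetes can schedule pods that claim `myapp-data` in any of the cluster's zones.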
+#### Azure Firewall
+
+*Zonal*: To use the benefits of availability zones, we recommend that the Azure Firewall resource be created in multiple zones, such as zone 1, 2, and 3. We recommend that you select all three zones for the best intra-region resiliency.
+
+#### Azure Bastion
+
+*Regional*: Azure Bastion is deployed within VNets or peered VNets and is associated to an Azure region. For more information, see [Bastion FAQ](../bastion/bastion-faq.md#dr).
+
+#### Azure Container Registry (ACR)
+
+*Zone-redundant*: We recommend that you create a zone-redundant registry in the Premium service tier. You can also create a zone-redundant registry replica by setting the `zoneRedundancy` property for the replica. To learn how to enable zone redundancy for your ACR, see [Enable zone redundancy in Azure Container Registry for resiliency and high availability](../container-registry/zone-redundancy.md).
+
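Using the `zoneRedundancy` property mentioned above, a zone-redundant Premium registry might be declared roughly as follows in an ARM template (registry name, location, and `apiVersion` are illustrative):

```json
{
  "type": "Microsoft.ContainerRegistry/registries",
  "apiVersion": "2021-09-01",
  "name": "myzrregistry",
  "location": "eastus",
  "sku": { "name": "Premium" },
  "properties": {
    "zoneRedundancy": "Enabled"
  }
}
```

The same property applies on a `registries/replications` child resource when you want a zone-redundant replica in another region.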
+#### Azure Cache for Redis
+
+*Zone-redundant*: Azure Cache for Redis supports zone-redundant configurations in the Premium and Enterprise tiers. A zone-redundant cache places its nodes across different availability zones in the same region.
+
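As an illustrative sketch, a zone-redundant Premium cache can be pinned to zones in an ARM template via the top-level `zones` property (the name, `apiVersion`, capacity, and replica count are placeholders):

```json
{
  "type": "Microsoft.Cache/redis",
  "apiVersion": "2021-06-01",
  "name": "my-zr-cache",
  "location": "eastus",
  "zones": [ "1", "2", "3" ],
  "properties": {
    "sku": { "name": "Premium", "family": "P", "capacity": 1 },
    "replicasPerMaster": 2
  }
}
```

With two replicas plus the primary, the cache's three nodes can be distributed across the three selected zones, matching the replica guidance later in this article.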
+#### Azure Active Directory (AD)
+
+*Global*: Azure AD is a global service with multiple levels of internal redundancy and automatic recoverability. Azure AD is deployed in over 30 datacenters around the world that provide availability zones where present. This number is growing rapidly as more regions are deployed.
+
+#### Azure Key Vault
+
+*Regional*: Azure Key Vault is deployed in a region. To maintain high durability of your keys and secrets, the contents of your key vault are replicated within the region and to a secondary region within the same geography.
+
+*Zone-redundant*: For Azure regions with availability zones and no region pair, Key Vault uses zone-redundant storage (ZRS) to replicate the contents of your key vault three times within the single location/region.
+
+## Workload considerations
+
+### Azure Kubernetes Service (AKS)
+
+- Pods can communicate with other pods regardless of the node or availability zone on which the pod lands. Your application may experience higher response time if the pods are located in different availability zones. While the extra round-trip latencies between pods are expected to fall within an acceptable range for most applications, there are application scenarios that require low latency, especially for a chatty communication pattern between pods.
+
+- We recommend that you test your application to ensure it performs well across availability zones.
+
+- For performance reasons such as low latency, pods can be co-located in the same data center within the same availability zone. To co-locate pods in the same data center within the same availability zone, you can create user node pools with a unique zone and proximity placement group. You can add a proximity placement group (PPG) to an existing AKS cluster by creating a new agent node pool and specifying the PPG. Use Pod Topology Spread Constraints to control how pods are spread in your AKS cluster across availability zones, nodes, and regions.
+
+- After pods that require low latency communication are co-located in the same availability zone, communications between the pods aren't direct. Instead, pod communications are channeled through a service that defines a logical set of pods in your AKS cluster. Pods can be configured to talk to the service, and the communication is automatically load-balanced to all the pods that are members of the service.
+
+- To take advantage of availability zones, node pools contain underlying VMs that are zonal resources. To support applications that have different compute or storage demands, you can create user node pools with specific VM sizes when you create the user node pool.
+
+ For example, you may decide to use the `Standard_M32ms` under the `M-series` for your user nodes because the microservices in your application require high throughput, low latency, and memory optimized VM sizes that provide high vCPU counts and large amounts of memory. Depending on the deployment region, when you select the VM size in the Azure portal, you may see that this VM size is supported only in zone 1 and 2. You can accept this resiliency configuration as a trade-off for high performance.
+
+- You can't change the VM size of a node pool after you create it. For more information on node pool limitations, see [Limitations](../aks/use-multiple-node-pools.md#limitations).
+
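The Pod Topology Spread Constraints mentioned above might be sketched on a pod spec as follows (shown in JSON manifest form for consistency with the other examples; the labels, names, and image are illustrative):

```json
{
  "apiVersion": "v1",
  "kind": "Pod",
  "metadata": { "name": "myapp", "labels": { "app": "myapp" } },
  "spec": {
    "topologySpreadConstraints": [
      {
        "maxSkew": 1,
        "topologyKey": "topology.kubernetes.io/zone",
        "whenUnsatisfiable": "DoNotSchedule",
        "labelSelector": { "matchLabels": { "app": "myapp" } }
      }
    ],
    "containers": [
      { "name": "myapp", "image": "mcr.microsoft.com/oss/nginx/nginx:1.21" }
    ]
  }
}
```

`maxSkew: 1` with the zone topology key keeps the number of matching pods per zone within one of each other; use `ScheduleAnyway` instead of `DoNotSchedule` if you prefer a soft constraint.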
+### Azure Database for MySQL Flexible Server
+
+The implication of deploying your node pools in specific zones, such as zone 1 and 2, is that all service dependencies of your AKS cluster must also support zone 1 and 2. In this workload architecture, your AKS cluster has a service dependency on Azure Database for MySQL Flexible Servers with zone resiliency. You would select zone 1 for your primary server and zone 2 for your standby server to be co-located with your AKS user node pools.
+++
+### Azure Cache for Redis
+
+- Azure Cache for Redis distributes nodes in a zone-redundant cache in a round-robin manner over the availability zones that you've selected.
+
+- You can't update an existing Premium cache to use zone redundancy. To use zone redundancy, you must recreate the Azure Cache for Redis.
+
+- To achieve optimal resiliency, we recommend that you create your Azure Cache for Redis with three or more replicas so that you can distribute the replicas across three availability zones.
+++
+## Disaster recovery considerations
+
+*Availability zones* are used for better resiliency to achieve high availability of your workload within the primary region of your deployment.
+
+*Disaster Recovery* consists of recovery operations and practices defined in your business continuity plan. Your business continuity plan addresses both how your workload recovers during a disruptive event and how it fully recovers after the event. Consider extending your deployment to an alternative region.
+++
+For your application tier, please review the business continuity and disaster recovery considerations for AKS in this article.
+
+- Consider running multiple AKS clusters in alternative regions. The alternative region can use a secondary paired region. Or, where there's no region pairing for your primary region, you can select an alternative region based on your consideration for available services, capacity, geographical proximity, and data sovereignty. Please review the [Azure regions decision guide](/azure/cloud-adoption-framework/migrate/azure-best-practices/multiple-regions). Also review the [deployment stamp pattern](/azure/architecture/patterns/deployment-stamp).
+
+- You have the option of configuring active-active, active-standby, or active-passive modes for your AKS clusters.
+
+- For your database tier, disaster recovery features include geo-redundant backups with the ability to initiate geo-restore and deploying read replicas in a different region.
+
+- During an outage, you'll need to decide whether to initiate a recovery. You'll need to initiate recovery operations only when the outage is likely to last longer than your workload's recovery time objective (RTO). Otherwise, you'll wait for service recovery by checking the service status on the Azure Service Health Dashboard. On the Service Health blade of the Azure portal, you can view any notifications associated with your subscription.
+
+- When you do initiate recovery with the geo-restore feature in Azure Database for MySQL, a new database server is created using backup data that is replicated from another region.
++
+## Next steps
+
+Learn more about:
+
+> [!div class="nextstepaction"]
+> [Regions and Availability Zones in Azure](az-overview.md)
+
+> [!div class="nextstepaction"]
+> [Azure Services that support Availability Zones](az-region.md)
availability-zones Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/availability-zones/overview.md
Last updated 02/08/2022 -+ # Resiliency in Azure **Resiliency** is a system's ability to recover from failures and continue to function. It's not only about avoiding failures but also involves responding to failures in a way that minimizes downtime or data loss. Because failures can occur at various levels, it's important to have protection for all types based on your service availability requirements. Resiliency in Azure supports and advances capabilities that respond to outages in real time to ensure continuous service and data protection assurance for mission-critical applications that require near-zero downtime and high customer confidence.
-Azure includes built-in resiliency services that you can leverage and manage based on your business needs. Whether it's a single hardware node failure, a rack level failure, a datacenter outage, or a large-scale regional outage, Azure provides solutions that improve resiliency. For example, availability sets ensure that the virtual machines deployed on Azure are distributed across multiple isolated hardware nodes in a cluster. Availability zones protect customers' applications and data from datacenter failures across multiple physical locations within a region. **Regions** and **availability zones** are central to your application design and resiliency strategy and are discussed in greater detail later in this article.
+Azure includes built-in resiliency services that you can use and manage based on your business needs. Whether it's a single hardware node failure, a rack level failure, a datacenter outage, or a large-scale regional outage, Azure provides solutions that improve resiliency. For example, availability sets ensure that the virtual machines deployed on Azure are distributed across multiple isolated hardware nodes in a cluster. Availability zones protect customers' applications and data from datacenter failures across multiple physical locations within a region. **Regions** and **availability zones** are central to your application design and resiliency strategy and are discussed in greater detail later in this article.
## Resiliency requirements The required level of resilience for any Azure solution depends on several considerations. Availability and latency SLA and other business requirements drive the architectural choices and resiliency level and should be considered first. Availability requirements range from how much downtime is acceptable – and how much it costs your business – to the amount of money and time that you can realistically invest in making an application highly available.
-Building resilient systems on Azure is a **shared responsibility**. Microsoft is responsible for the reliability of the cloud platform, including its global network and data centers. Azure customers and partners are responsible for the resilience of their cloud applications, using architectural best practices based on the requirements of each workload. While Azure continually strives for highest possible resiliency in SLA for the cloud platform, you must define your own target SLAs for each workload in your solution. An SLA makes it possible to evaluate whether the architecture meets the business requirements. As you strive for higher percentages of SLA guaranteed uptime, the cost and complexity to achieve that level of availability grows. An uptime of 99.99 percent translates to about five minutes of total downtime per month. Is it worth the additional complexity and cost to reach that percentage? The answer depends on the individual business requirements. While deciding final SLA commitments, understand Microsoft's supported SLAs. Each Azure service has its own SLA.
+Building resilient systems on Azure is a **shared responsibility**. Microsoft is responsible for the reliability of the cloud platform, including its global network and data centers. Azure customers and partners are responsible for the resilience of their cloud applications, using architectural best practices based on the requirements of each workload. While Azure continually strives for the highest possible resiliency in SLA for the cloud platform, you must define your own target SLAs for each workload in your solution. An SLA makes it possible to evaluate whether the architecture meets the business requirements. As you strive for higher percentages of SLA guaranteed uptime, the cost and complexity to achieve that level of availability grows. An uptime of 99.99 percent translates to about five minutes of total downtime per month. Is it worth the added complexity and cost to reach that percentage? The answer depends on the individual business requirements. While deciding final SLA commitments, understand Microsoft's supported SLAs. Each Azure service has its own SLA.
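+The uptime arithmetic above can be sketched with a back-of-the-envelope calculation (illustrative only; it assumes a 30-day month, and real commitments are defined per service in Microsoft's SLA documents):
+
+```python
+# Back-of-the-envelope SLA arithmetic (illustrative; assumes a 30-day month).
+
+MINUTES_PER_MONTH = 30 * 24 * 60  # 43,200
+
+def downtime_minutes_per_month(sla_percent: float) -> float:
+    """Maximum downtime per month permitted by an uptime SLA."""
+    return (1 - sla_percent / 100) * MINUTES_PER_MONTH
+
+def composite_sla_percent(*sla_percents: float) -> float:
+    """Composite SLA when every listed service must be up (serial dependency)."""
+    availability = 1.0
+    for p in sla_percents:
+        availability *= p / 100
+    return availability * 100
+
+print(round(downtime_minutes_per_month(99.99), 2))    # about five minutes: 4.32
+print(round(composite_sla_percent(99.95, 99.99), 2))  # lower than either: 99.94
+```
+
+Note how composing services lowers the effective SLA below that of any single component, which is one reason higher targets get disproportionately expensive.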
## Building resiliency
-You should define your application's availability requirements at the beginning of planning. Many applications do not need 100% high availability; being aware of this can help to optimize costs during non-critical periods. Identify the type of failures an application can experience as well as the potential effect of each failure. A recovery plan should cover all critical services by finalizing recovery strategy at the individual component and the overall application level. Design your recovery strategy to protect against zonal, regional, and application-level failure. And perform testing of the end-to-end application environment to measure application resiliency and recovery against unexpected failure.
+You should define your application's availability requirements at the beginning of planning. If you know which applications don't need 100% high availability during certain periods of time, you can optimize costs during those non-critical periods. Identify the type of failures an application can experience, and the potential effect of each failure. A recovery plan should cover all critical services by finalizing recovery strategy at the individual component and the overall application level. Design your recovery strategy to protect against zonal, regional, and application-level failure. And perform testing of the end-to-end application environment to measure application resiliency and recovery against unexpected failure.
The following checklist covers the scope of resiliency planning.
## Regions and availability zones
-Regions and Availability Zones are a big part of the resiliency equation. Regions feature multiple, physically separate Availability Zones, connected by a high-performance network featuring less than 2ms latency between physical zones to help your data stay synchronized and accessible when things go wrong. You can leverage this infrastructure strategically as you architect applications and data infrastructure that automatically replicate and deliver uninterrupted services between zones and across regions. Choose the best region for your needs based on technical and regulatory considerations—service capabilities, data residency, compliance requirements, latency—and begin advancing your resiliency strategy.
+Regions and Availability Zones are a big part of the resiliency equation. Regions feature multiple, physically separate availability zones. These availability zones are connected by a high-performance network featuring less than 2ms latency between physical zones. Low latency helps your data stay synchronized and accessible when things go wrong. You can use this infrastructure strategically as you architect applications and data infrastructure that automatically replicate and deliver uninterrupted services between zones and across regions. Choose the best region for your needs based on technical and regulatory considerations—service capabilities, data residency, compliance requirements, latency—and begin advancing your resiliency strategy.
-Microsoft Azure services support availability zones and are enabled to drive your cloud operations at optimum high availability while supporting your disaster recovery and business continuity strategy needs. Choose the best region for your needs based on technical and regulatory considerations—service capabilities, data residency, compliance requirements, latency—and begin advancing your resiliency strategy. See [Azure regions and availability zones](az-overview.md) for more information.
+Microsoft Azure services support availability zones and are enabled to drive your cloud operations at optimum high availability while supporting your disaster recovery and business continuity strategy needs. Choose the best region for your needs based on technical and regulatory considerations—service capabilities, data residency, compliance requirements, latency—and begin advancing your resiliency strategy. For more information, see [Azure regions and availability zones](az-overview.md).
## Shared responsibility
-Building resilient systems on Azure is a shared responsibility. Microsoft is responsible for the reliability of the cloud platform, which includes its global network and datacenters. Azure customers and partners are responsible for the resilience of their cloud applications, using architectural best practices based on the requirements of each workload. See [Business continuity management program in Azure](business-continuity-management-program.md) for more information.
+Building resilient systems on Azure is a shared responsibility. Microsoft is responsible for the reliability of the cloud platform, which includes its global network and datacenters. Azure customers and partners are responsible for the resilience of their cloud applications, using architectural best practices based on the requirements of each workload. For more information, see [Business continuity management program in Azure](business-continuity-management-program.md).
## Azure service dependencies
azure-app-configuration Enable Dynamic Configuration Azure Functions Csharp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/enable-dynamic-configuration-azure-functions-csharp.md
ms.devlang: csharp Previously updated : 11/17/2019 Last updated : 09/14/2022 Azure Functions
Azure Functions support running [in-process](../azure-functions/functions-dotnet
> [!TIP] > When you are updating multiple key-values in App Configuration, you normally don't want your application to reload configuration before all changes are made. You can register a *sentinel key* and update it only when all other configuration changes are completed. This helps to ensure the consistency of configuration in your application. >
- > You may also do following to minimize the risk of inconsistencies:
+ > You may also do the following to minimize the risk of inconsistencies:
> > * Design your application to be tolerant of transient configuration inconsistency > * Warm up your application before bringing it online (serving requests)
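The sentinel-key pattern in the tip above can be sketched as follows. This is a minimal illustration of the idea, not App Configuration SDK code: the dict-backed store and the `SentinelRefresher` class are hypothetical stand-ins for your configuration client.

```python
# Sketch of the sentinel-key pattern: reload the full set of key-values only
# when a dedicated sentinel key changes, so the application never observes a
# half-finished batch of updates. The plain dict "store" stands in for an
# App Configuration client (hypothetical, for illustration only).

class SentinelRefresher:
    def __init__(self, store, sentinel_key="Sentinel"):
        self._store = store
        self._sentinel_key = sentinel_key
        self._last_sentinel = store.get(sentinel_key)
        self.config = dict(store)  # initial consistent snapshot

    def refresh(self):
        """Take a new snapshot of all settings only if the sentinel changed."""
        current = self._store.get(self._sentinel_key)
        if current == self._last_sentinel:
            return False  # updates may still be in progress; keep old snapshot
        self.config = dict(self._store)  # swap in the complete new snapshot
        self._last_sentinel = current
        return True

store = {"Color": "red", "Sentinel": "v1"}
refresher = SentinelRefresher(store)

store["Color"] = "blue"       # partial update: not visible to the app yet
assert refresher.refresh() is False
assert refresher.config["Color"] == "red"

store["Sentinel"] = "v2"      # signals that all related updates are complete
assert refresher.refresh() is True
assert refresher.config["Color"] == "blue"
```

Updating the sentinel key last is what makes the batch of changes appear atomic to readers.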
azure-arc Plan Azure Arc Data Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/plan-azure-arc-data-services.md
In order to experience Azure Arc-enabled data services, you'll need to complete
The details in this article will guide your plan. 1. [Install client tools](install-client-tools.md).+
+1. Register the Microsoft.AzureArcData provider for the subscription where the Azure Arc-enabled data services will be deployed, as follows:
+ ```console
+ az provider register --namespace Microsoft.AzureArcData
+ ```
+ 1. Access a Kubernetes cluster. For demonstration, testing, and validation purposes, you can use an Azure Kubernetes Service cluster. To create a cluster, follow the instructions at [Quickstart: Deploy Azure Arc-enabled data services - directly connected mode - Azure portal](create-complete-managed-instance-directly-connected.md) to walk through the entire process.
Verify that:
kubectl cluster-info ``` - You have an Azure subscription that resources such as an Azure Arc data controller, Azure Arc-enabled SQL managed instance, or Azure Arc-enabled PostgreSQL server will be projected and billed to.
+- The Microsoft.AzureArcData provider is registered for the subscription where the Azure Arc-enabled data services will be deployed.
After you've prepared the infrastructure, deploy Azure Arc-enabled data services in the following way: 1. Create an Azure Arc-enabled data controller on one of the validated distributions of a Kubernetes cluster.
azure-arc Private Link https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/private-link.md
The Private Endpoint on your virtual network allows it to reach Azure Arc-enable
1. Let validation pass. 1. Select **Create**.
- :::image type="content" source="media/private-link/create-private-endpoint-2.png" alt-text="Screenshot of the Configuration step to create a private endpoint in the Azure portal.":::
-
- > [!NOTE]
- > If you choose **No** and prefer to manage DNS records manually, first complete setting up your Private Link, including this private endpoint and the Private Scope configuration. Next, configure your DNS according to the instructions in [Azure Private Endpoint DNS configuration](/azure/private-link/private-endpoint-dns). Make sure not to create empty records as preparation for your Private Link setup. The DNS records you create can override existing settings and impact your connectivity with Arc-enabled Kubernetes clusters.
## Configure on-premises DNS forwarding
azure-arc Day2 Operations Resource Bridge https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/vmware-vsphere/day2-operations-resource-bridge.md
Title: Perform ongoing administration for Arc-enabled VMware vSphere description: Learn how to perform day 2 administrator operations related to Azure Arc-enabled VMware vSphere Previously updated : 08/25/2022 Last updated : 09/15/2022
There are two different sets of credentials stored on the Arc resource bridge. Y
- **Account for Arc resource bridge**. This account is used for deploying the Arc resource bridge VM and will be used for upgrade. - **Account for VMware cluster extension**. This account is used to discover inventory and perform all VM operations through Azure Arc-enabled VMware vSphere
-To update the credentials of the account for Arc resource bridge, use the Azure CLI command [`az arcappliance update-infracredentials vmware`](/cli/azure/arcappliance/update-infracredentials#az-arcappliance-update-infracredentials-vmware). Run the command from a workstation that can access cluster configuration IP address of the Arc resource bridge locally:
+To update the credentials of the account for Arc resource bridge, run the following Azure CLI commands. Run the commands from a workstation that can access the cluster configuration IP address of the Arc resource bridge locally:
```azurecli
-az arcappliance update-infracredentials vmware --kubeconfig <kubeconfig>
+az account set -s <subscription id>
+az arcappliance get-credentials -n <name of the appliance> -g <resource group name>
+az arcappliance update-infracredentials vmware --kubeconfig kubeconfig
```
+For more details on the commands, see [`az arcappliance get-credentials`](/cli/azure/arcappliance/get-credentials#az-arcappliance-get-credentials) and [`az arcappliance update-infracredentials vmware`](/cli/azure/arcappliance/update-infracredentials#az-arcappliance-update-infracredentials-vmware).
+
 To update the credentials used by the VMware cluster extension on the resource bridge, use the following command. It can be run from anywhere with the `connectedvmware` CLI extension installed.
az connectedvmware vcenter connect --custom-location <name of the custom locatio
For any issues encountered with the Azure Arc resource bridge, you can collect logs for further investigation. To collect the logs, use the Azure CLI [`az arcappliance logs`](/cli/azure/arcappliance/logs#az-arcappliance-logs-vmware) command.
-The `az arcappliance log` command must be run from a workstation that can communicate with the Arc resource bridge either via the cluster configuration IP address or the IP address of the Arc resource bridge VM.
-
-To save the logs to a destination folder, run the following command. This command requires connectivity to cluster configuration IP address.
+To save the logs to a destination folder, run the following commands. These commands require connectivity to the cluster configuration IP address.
```azurecli
-az arcappliance logs <provider> --kubeconfig <path to kubeconfig> --out-dir <path to specified output directory>
+az account set -s <subscription id>
+az arcappliance get-credentials -n <name of the appliance> -g <resource group name>
+az arcappliance logs vmware --kubeconfig kubeconfig --out-dir <path to specified output directory>
```
-If the Kubernetes cluster on the resource bridge isn't in functional state, you can use the following command. This command requires connectivity to IP address of the Azure Arc resource bridge VM via SSH
+If the Kubernetes cluster on the resource bridge isn't in a functional state, you can use the following commands. These commands require connectivity to the IP address of the Azure Arc resource bridge VM via SSH.
```azurecli
-az arcappliance logs <provider> --out-dir <path to specified output directory> --ip XXX.XXX.XXX.XXX
+az account set -s <subscription id>
+az arcappliance get-credentials -n <name of the appliance> -g <resource group name>
+az arcappliance logs vmware --out-dir <path to specified output directory> --ip XXX.XXX.XXX.XXX
```
-During initial onboarding, SSH keys are saved to the workstation. If you're running this command from the workstation that was used for onboarding, no other steps are required.
-
-If you're running this command from a different workstation, make sure the following files are copied to the new workstation in the same location.
--- For a Windows workstation, `C:\ProgramData\kva\.ssh\logkey` and `C:\ProgramData\kva\.ssh\logkey.pub`
-
-- For a Linux workstation, `$HOME\.KVA\.ssh\logkey` and `$HOME\.KVA\.ssh\logkey.pub`- ## Next steps - [Troubleshoot common issues related to resource bridge](../resource-bridge/troubleshoot-resource-bridge.md)
azure-arc Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/vmware-vsphere/overview.md
Title: What is Azure Arc-enabled VMware vSphere (preview)? description: Azure Arc-enabled VMware vSphere (preview) extends Azure governance and management capabilities to VMware vSphere infrastructure and delivers a consistent management experience across both platforms. Previously updated : 11/10/2021 Last updated : 09/15/2022
To deliver this experience, you need to deploy the [Azure Arc resource bridge](.
Azure Arc-enabled VMware vSphere (preview) works with VMware vSphere version 6.7 and 7. > [!NOTE]
-> Azure Arc-enabled VMware vSphere (preview) supports vCenters with a maximum of 2500 VMs. If your vCenter has more than 2500 VMs, it is not recommended to use Arc-enabled VMware vSphere with it at this point.
+> Azure Arc-enabled VMware vSphere (preview) supports vCenters with a maximum of 9500 VMs. If your vCenter has more than 9500 VMs, it is not recommended to use Arc-enabled VMware vSphere with it at this point.
## Supported scenarios
You can use Azure Arc-enabled VMware vSphere (preview) in these supported region
- West Europe
+- Australia East
+
+- Canada Central
+ ## Next steps - [Connect VMware vCenter to Azure Arc using the helper script](quick-start-connect-vcenter-to-arc-using-script.md)
azure-arc Quick Start Connect Vcenter To Arc Using Script https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/vmware-vsphere/quick-start-connect-vcenter-to-arc-using-script.md
Title: Connect VMware vCenter Server to Azure Arc by using the helper script
description: In this quickstart, you'll learn how to use the helper script to connect your VMware vCenter Server instance to Azure Arc. Previously updated : 11/10/2021 Last updated : 09/05/2022 # Customer intent: As a VI admin, I want to connect my vCenter Server instance to Azure to enable self-service through Azure Arc.
First, the script deploys a virtual appliance called [Azure Arc resource bridge
- A datastore with a minimum of 100 GB of free disk space available through the resource pool or cluster. > [!NOTE]
-> Azure Arc-enabled VMware vSphere (preview) supports vCenter Server instances with a maximum of 2,500 virtual machines (VMs). If your vCenter Server instance has more than 2,500 VMs, we don't recommend that you use Azure Arc-enabled VMware vSphere with it at this point.
+> Azure Arc-enabled VMware vSphere (preview) supports vCenter Server instances with a maximum of 9,500 virtual machines (VMs). If your vCenter Server instance has more than 9,500 VMs, we don't recommend that you use Azure Arc-enabled VMware vSphere with it at this point.
### vSphere account
Use the following instructions to run the script, depending on which operating s
1. Open a PowerShell window as an Administrator and go to the folder where you've downloaded the PowerShell script.
-> [!NOTE]
-> On Windows workstations, the script must be run in PowerShell window and not in PowerShell Integrated Script Editor (ISE) as PowerShell ISE doesn't display the input prompts from Azure CLI commands. If the script is run on PowerShell ISE, it could appear as though the script is stuck while it is waiting for input.
+ > [!NOTE]
+ > On Windows workstations, the script must be run in PowerShell window and not in PowerShell Integrated Script Editor (ISE) as PowerShell ISE doesn't display the input prompts from Azure CLI commands. If the script is run on PowerShell ISE, it could appear as though the script is stuck while it is waiting for input.
2. Run the following command to allow the script to run, because it's an unsigned script. (If you close the session before you complete all the steps, run this command again for the new session.)
A typical onboarding that uses the script takes 30 to 60 minutes. During the pro
After the command finishes running, your setup is complete. You can now use the capabilities of Azure Arc-enabled VMware vSphere.
-## Save SSH keys and kubeconfig
-
-> [!IMPORTANT]
-> Performing [day 2 operations on the Arc resource bridge](day2-operations-resource-bridge.md) will require the SSH key to the resource bridge VM and kubeconfig to the Kubernetes cluster on it. It is important to store them to a secure location as it is not possible to retrieve them if the workstation used for the onboarding is deleted.
-
-You will find the kubeconfig file with the name `kubeconfig` in the folder where the onboarding script is downloaded and run.
-
-The SSH key pair will be available in the following location.
--- If you used a Windows workstation, `C:\ProgramData\kva\.ssh\logkey` and `C:\ProgramData\kva\.ssh\logkey.pub`
-
-- If you used a Linux workstation, `$HOME\.KVA\.ssh\logkey` and `$HOME\.KVA\.ssh\logkey.pub`- ## Next steps - [Browse and enable VMware vCenter resources in Azure](browse-and-enable-vcenter-resources-in-azure.md)
azure-functions Functions Create Serverless Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-create-serverless-api.md
# Customize an HTTP endpoint in Azure Functions
-In this article, you learn how Azure Functions allows you to build highly scalable APIs. Azure Functions comes with a collection of built-in HTTP triggers and bindings, which make it easy to author an endpoint in a variety of languages, including Node.js, C#, and more. In this article, you'll customize an HTTP trigger to handle specific actions in your API design. You'll also prepare for growing your API by integrating it with Azure Functions Proxies and setting up mock APIs. These tasks are accomplished on top of the Functions serverless compute environment, so you don't have to worry about scaling resources - you can just focus on your API logic.
+In this article, you learn how Azure Functions allows you to build highly scalable APIs. Azure Functions comes with a collection of built-in HTTP triggers and bindings, which make it easy to author an endpoint in various languages, including Node.js, C#, and more. In this article, you'll customize an HTTP trigger to handle specific actions in your API design. You'll also prepare for growing your API by integrating it with Azure Functions Proxies and setting up mock APIs. These tasks are accomplished on top of the Functions serverless compute environment, so you don't have to worry about scaling resources - you can just focus on your API logic.
+ ## Prerequisites
In this section, you create a new proxy, which serves as a frontend to your over
### Setting up the frontend environment
-Repeat the steps to [Create a function app](./functions-get-started.md) to create a new function app in which you will create your proxy. This new app's URL serves as the frontend for our API, and the function app you were previously editing serves as a backend.
+Repeat the steps to [Create a function app](./functions-get-started.md) to create a new function app in which you'll create your proxy. This new app's URL serves as the frontend for our API, and the function app you were previously editing serves as a backend.
1. Navigate to your new frontend function app in the portal. 1. Select **Configuration** and choose **Application Settings**.
Next, you'll use a proxy to create a mock API for your solution. This proxy allo
To create this mock API, we'll create a new proxy, this time using the [App Service Editor](https://github.com/projectkudu/kudu/wiki/App-Service-Editor). To get started, navigate to your function app in the portal. Select **Platform features**, and under **Development Tools** find **App Service Editor**. The App Service Editor opens in a new tab.
-Select `proxies.json` in the left navigation. This file stores the configuration for all of your proxies. If you use one of the [Functions deployment methods](./functions-continuous-deployment.md), you maintain this file in source control. To learn more about this file, see [Proxies advanced configuration](./functions-proxies.md#advanced-configuration).
+Select `proxies.json` in the left navigation. This file stores the configuration for all of your proxies. If you use one of the [Functions deployment methods](./functions-continuous-deployment.md), you maintain this file in source control. To learn more about this file, see [Proxies advanced configuration](./legacy-proxies.md#advanced-configuration).
If you've followed along so far, your proxies.json should look like as follows:
Next, you'll add your mock API. Replace your proxies.json file with the followin
} ```
-This code adds a new proxy, `GetUserByName`, without the `backendUri` property. Instead of calling another resource, it modifies the default response from Proxies using a response override. Request and response overrides can also be used in conjunction with a backend URL. This technique is particularly useful when proxying to a legacy system, where you might need to modify headers, query parameters, and so on. To learn more about request and response overrides, see [Modifying requests and responses in Proxies](./functions-proxies.md).
+This code adds a new proxy, `GetUserByName`, without the `backendUri` property. Instead of calling another resource, it modifies the default response from Proxies using a response override. Request and response overrides can also be used with a backend URL. This technique is useful when proxying to a legacy system, where you might need to modify headers, query parameters, and so on. To learn more about request and response overrides, see [Modifying requests and responses in Proxies](./legacy-proxies.md).
Test your mock API by calling the `<YourProxyApp>.azurewebsites.net/api/users/{username}` endpoint using a browser or your favorite REST client. Be sure to replace _{username}_ with a string value representing a username.
The following references may be helpful as you develop your API further:
[Create your first function]: ./functions-get-started.md
-[Working with Azure Functions Proxies]: ./functions-proxies.md
+[Working with Azure Functions Proxies]: ./legacy-proxies.md
azure-functions Functions Proxies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-proxies.md
Title: Work with proxies in Azure Functions
-description: Overview of how to use Azure Functions Proxies
-
+ Title: Create serverless APIs using Azure Functions
+description: Describes how to use Azure Functions as the basis of a cohesive set of serverless APIs.
Previously updated : 11/08/2021 Last updated : 09/14/2022
-# Work with Azure Functions Proxies
-
-This article explains how to configure and work with Azure Functions Proxies. With this feature, you can specify endpoints on your function app that are implemented by another resource. You can use these proxies to break a large API into multiple function apps (as in a microservice architecture), while still presenting a single API surface for clients.
-
-Standard Functions billing applies to proxy executions. For more information, see [Azure Functions pricing](https://azure.microsoft.com/pricing/details/functions/).
-
-> [!NOTE]
-> Proxies is available in Azure Functions [versions](./functions-versions.md) 1.x to 3.x.
->
-> You should also consider using [Azure API Management](../api-management/api-management-key-concepts.md) for your application. It provides the same capabilities as Functions Proxies as well as other tools for building and maintaining APIs, such as OpenAPI integration, rate limiting, and advanced policies.
-
-## <a name="create"></a>Create a proxy
-
-This section shows you how to create a proxy in the Functions portal.
-
-> [!NOTE]
-> Not all languages and operating system combinations support in-portal editing. If you're unable to create a proxy in the portal, you can instead manually create a _proxies.json_ file in the root of your function app project folder. To learn more about portal editing support, see [Language support details](functions-create-function-app-portal.md#language-support-details).
-
-1. Open the [Azure portal], and then go to your function app.
-2. In the left pane, select **Proxies** and then select **+Add**.
-3. Provide a name for your proxy.
-4. Configure the endpoint that's exposed on this function app by specifying the **route template** and **HTTP methods**. These parameters behave according to the rules for [HTTP triggers].
-5. Set the **backend URL** to another endpoint. This endpoint could be a function in another function app, or it could be any other API. The value does not need to be static, and it can reference [application settings] and [parameters from the original client request].
-6. Click **Create**.
-
-Your proxy now exists as a new endpoint on your function app. From a client perspective, it is equivalent to an HttpTrigger in Azure Functions. You can try out your new proxy by copying the Proxy URL and testing it with your favorite HTTP client.
-
-## <a name="modify-requests-responses"></a>Modify requests and responses
-
-With Azure Functions Proxies, you can modify requests to and responses from the back-end. These transformations can use variables as defined in [Use variables].
-
-### <a name="modify-backend-request"></a>Modify the back-end request
-
-By default, the back-end request is initialized as a copy of the original request. In addition to setting the back-end URL, you can make changes to the HTTP method, headers, and query string parameters. The modified values can reference [application settings] and [parameters from the original client request].
-
-Back-end requests can be modified in the portal by expanding the *request override* section of the proxy detail page.
-
-### <a name="modify-response"></a>Modify the response
-
-By default, the client response is initialized as a copy of the back-end response. You can make changes to the response's status code, reason phrase, headers, and body. The modified values can reference [application settings], [parameters from the original client request], and [parameters from the back-end response].
-
-Back-end responses can be modified in the portal by expanding the *response override* section of the proxy detail page.
-
-## <a name="using-variables"></a>Use variables
-
-The configuration for a proxy does not need to be static. You can condition it to use variables from the original client request, the back-end response, or application settings.
-
-### <a name="reference-localhost"></a>Reference local functions
-You can use `localhost` to reference a function inside the same function app directly, without a roundtrip proxy request.
-
-`"backendUri": "https://localhost/api/httptriggerC#1"` will reference a local HTTP triggered function at the route `/api/httptriggerC#1`
-
-
->[!Note]
->If your function uses *function, admin or sys* authorization levels, you will need to provide the code and clientId, as per the original function URL. In this case the reference would look like: `"backendUri": "https://localhost/api/httptriggerC#1?code=<keyvalue>&clientId=<keyname>"` We recommend storing these keys in [application settings] and referencing those in your proxies. This avoids storing secrets in your source code.
-
-### <a name="request-parameters"></a>Reference request parameters
-
-You can use request parameters as inputs to the back-end URL property or as part of modifying requests and responses. Some parameters can be bound from the route template that's specified in the base proxy configuration, and others can come from properties of the incoming request.
-
-#### Route template parameters
-Parameters that are used in the route template are available to be referenced by name. The parameter names are enclosed in braces ({}).
-
-For example, if a proxy has a route template, such as `/pets/{petId}`, the back-end URL can include the value of `{petId}`, as in `https://<AnotherApp>.azurewebsites.net/api/pets/{petId}`. If the route template terminates in a wildcard, such as `/api/{*restOfPath}`, the value `{restOfPath}` is a string representation of the remaining path segments from the incoming request.
-
-#### Additional request parameters
-In addition to the route template parameters, the following values can be used in config values:
-
-* **{request.method}**: The HTTP method that's used on the original request.
-* **{request.headers.\<HeaderName\>}**: A header that can be read from the original request. Replace *\<HeaderName\>* with the name of the header that you want to read. If the header is not included on the request, the value will be the empty string.
-* **{request.querystring.\<ParameterName\>}**: A query string parameter that can be read from the original request. Replace *\<ParameterName\>* with the name of the parameter that you want to read. If the parameter is not included on the request, the value will be the empty string.
-
-### <a name="response-parameters"></a>Reference back-end response parameters
-
-Response parameters can be used as part of modifying the response to the client. The following values can be used in config values:
-
-* **{backend.response.statusCode}**: The HTTP status code that's returned on the back-end response.
-* **{backend.response.statusReason}**: The HTTP reason phrase that's returned on the back-end response.
-* **{backend.response.headers.\<HeaderName\>}**: A header that can be read from the back-end response. Replace *\<HeaderName\>* with the name of the header you want to read. If the header is not included on the response, the value will be the empty string.
-
-### <a name="use-appsettings"></a>Reference application settings
-
-You can also reference [application settings defined for the function app](./functions-how-to-use-azure-function-app-settings.md) by surrounding the setting name with percent signs (%).
-
-For example, a back-end URL of *https://%ORDER_PROCESSING_HOST%/api/orders* would have "%ORDER_PROCESSING_HOST%" replaced with the value of the ORDER_PROCESSING_HOST setting.
-
-> [!TIP]
-> Use application settings for back-end hosts when you have multiple deployments or test environments. That way, you can make sure that you are always talking to the right back-end for that environment.
+# Serverless REST APIs using Azure Functions
-## <a name="debugProxies"></a>Troubleshoot Proxies
+Azure Functions is an essential compute service that you use to build serverless REST-based APIs. HTTP triggers expose REST endpoints that can be called by your clients, such as browsers, mobile apps, and other back-end services. With [native support for routes](functions-bindings-http-webhook-trigger.md#customize-the-http-endpoint), a single HTTP triggered function can expose a highly functional REST API. Functions also provides its own basic key-based authorization scheme to help limit access to specific clients. For more information, see [Azure Functions HTTP trigger](functions-bindings-http-webhook-trigger.md).
-By adding the flag `"debug":true` to any proxy in your `proxies.json` you will enable debug logging. Logs are stored in `D:\home\LogFiles\Application\Proxies\DetailedTrace` and accessible through the advanced tools (kudu). Any HTTP responses will also contain a `Proxy-Trace-Location` header with a URL to access the log file.
+In some scenarios, you may need your API to support a more complex set of REST behaviors. For example, you may need to combine multiple HTTP function endpoints into a single API, or pass requests through to one or more back-end REST-based services. Your APIs might also require a higher degree of security, or you might want to monetize their use.
-You can debug a proxy from the client side by adding a `Proxy-Trace-Enabled` header set to `true`. This will also log a trace to the file system, and return the trace URL as a header in the response.
+Today, the recommended approach to build more complex and robust APIs based on your functions is to use the comprehensive API services provided by [Azure API Management](../api-management/api-management-key-concepts.md).
+API Management uses a policy-based model to let you control routing, security, and OpenAPI integration. It also supports advanced policies like rate limiting and monetization. Previous versions of the Functions runtime instead used the legacy Functions Proxies feature.
-### Block proxy traces
-For security reasons you may not want to allow anyone calling your service to generate a trace. They will not be able to access the trace contents without your login credentials, but generating the trace consumes resources and exposes that you are using Function Proxies.
+## <a name="migration"></a>Moving from Functions Proxies to API Management
-Disable traces altogether by adding `"debug":false` to any particular proxy in your `proxies.json`.
+When moving from Functions Proxies to API Management, you integrate your function app with an API Management instance, and then configure the API Management instance to behave like the previous proxy. The following sections link to the articles that help you use API Management with Azure Functions.
-## Advanced configuration
+If you have challenges moving from Proxies or if Azure API Management doesn't address your specific scenarios, create an issue in the [Azure Functions repository](https://github.com/Azure/Azure-Functions). Make sure to tag the issue with the label `proxy-deprecation`.
-The proxies that you configure are stored in a *proxies.json* file, which is located in the root of a function app directory. You can manually edit this file and deploy it as part of your app when you use any of the [deployment methods](./functions-continuous-deployment.md) that Functions supports.
+## API Management integration
-> [!TIP]
-> If you have not set up one of the deployment methods, you can also work with the *proxies.json* file in the portal. Go to your function app, select **Platform features**, and then select **App Service Editor**. By doing so, you can view the entire file structure of your function app and then make changes.
+API Management lets you import an existing function app. After import, each HTTP triggered function endpoint becomes an API that you can modify and manage, and you can use API Management to generate an OpenAPI definition file for your APIs. During import, any endpoints with an `admin` [authorization level](functions-bindings-http-webhook-trigger.md#http-auth) are ignored. For more information about using API Management with Functions, see the following articles:
-*Proxies.json* is defined by a proxies object, which is composed of named proxies and their definitions. Optionally, if your editor supports it, you can reference a [JSON schema](http://json.schemastore.org/proxies) for code completion. An example file might look like the following:
+| Article | Description |
+| | |
+| [Expose serverless APIs from HTTP endpoints using Azure API Management](functions-openapi-definition.md) | Shows how to create a new API Management instance from an existing function app in the Azure portal. Supports all languages. |
+| [Create serverless APIs in Visual Studio using Azure Functions and API Management integration](openapi-apim-integrate-visual-studio.md) | Shows how to use Visual Studio to create a C# project that uses the [OpenAPI extension](https://github.com/Azure/azure-functions-openapi-extension). The OpenAPI extension lets you define your .NET APIs by applying attributes directly to your C# code. |
+| [Quickstart: Create a new Azure API Management service instance by using the Azure portal](../api-management/get-started-create-service-instance.md) | Create a new API Management instance in the portal. After you create an API Management instance, you can connect it to your function app. Other non-portal creation methods are supported. |
+| [Import an Azure function app as an API in Azure API Management](../api-management/import-function-app-as-api.md) | Shows how to import an existing function app to expose existing HTTP trigger endpoints as a managed API. This article supports both creating a new API and adding the endpoints to an existing managed API. |
-```json
-{
- "$schema": "http://json.schemastore.org/proxies",
- "proxies": {
- "proxy1": {
- "matchCondition": {
- "methods": [ "GET" ],
- "route": "/api/{test}"
- },
- "backendUri": "https://<AnotherApp>.azurewebsites.net/api/<FunctionName>"
- }
- }
-}
-```
+After you have your function app endpoints exposed by using API Management, the following articles provide general information about how to manage your Functions-based APIs in the API Management instance.
-Each proxy has a friendly name, such as *proxy1* in the preceding example. The corresponding proxy definition object is defined by the following properties:
+| Article | Description |
+| | |
+| [Edit an API](../api-management/edit-api.md) | Shows you how to work with an existing API hosted in API Management. |
+| [Policies in Azure API Management](../api-management/api-management-howto-policies.md) | In API Management, publishers can change API behavior through configuration using policies. Policies are a collection of statements that are run sequentially on the request or response of an API. |
+| [API Management policy reference](../api-management/api-management-policies.md) | Reference that details all supported API Management policies. |
+| [API Management policy samples](/azure/api-management/policies/) | Helpful collection of samples using API Management policies in key scenarios. |
-* **matchCondition**: Required--an object defining the requests that trigger the execution of this proxy. It contains two properties that are shared with [HTTP triggers]:
- * _methods_: An array of the HTTP methods that the proxy responds to. If it is not specified, the proxy responds to all HTTP methods on the route.
- * _route_: Required--defines the route template, controlling which request URLs your proxy responds to. Unlike in HTTP triggers, there is no default value.
-* **backendUri**: The URL of the back-end resource to which the request should be proxied. This value can reference application settings and parameters from the original client request. If this property is not included, Azure Functions responds with an HTTP 200 OK.
-* **requestOverrides**: An object that defines transformations to the back-end request. See [Define a requestOverrides object].
-* **responseOverrides**: An object that defines transformations to the client response. See [Define a responseOverrides object].
+## Legacy Functions Proxies
-> [!NOTE]
-> The *route* property in Azure Functions Proxies does not honor the *routePrefix* property of the Function App host configuration. If you want to include a prefix such as `/api`, it must be included in the *route* property.
+The legacy [Functions Proxies feature](legacy-proxies.md) also provides a set of basic API functionality for version 3.x and older versions of the Functions runtime.
-### <a name="disableProxies"></a> Disable individual proxies
-You can disable individual proxies by adding `"disabled": true` to the proxy in the `proxies.json` file. This will cause any requests meeting the matchCondition to return 404.
-```json
-{
- "$schema": "http://json.schemastore.org/proxies",
- "proxies": {
- "Root": {
- "disabled":true,
- "matchCondition": {
- "route": "/example"
- },
- "backendUri": "https://<AnotherApp>.azurewebsites.net/api/<FunctionName>"
- }
- }
-}
-```
+Some basic hints for how to perform equivalent tasks using API Management have been added to the [Functions Proxies article](legacy-proxies.md). We don't currently have documentation or tools to help you migrate an existing Functions Proxies implementation to API Management.
-### <a name="applicationSettings"></a> Application Settings
+## Next steps
-The proxy behavior can be controlled by several app settings. They are all outlined in the [Functions App Settings reference](./functions-app-settings.md)
-
-* [AZURE_FUNCTION_PROXY_DISABLE_LOCAL_CALL](./functions-app-settings.md#azure_function_proxy_disable_local_call)
-* [AZURE_FUNCTION_PROXY_BACKEND_URL_DECODE_SLASHES](./functions-app-settings.md#azure_function_proxy_backend_url_decode_slashes)
-
-### <a name="reservedChars"></a> Reserved Characters (string formatting)
-
-Proxies read all strings out of a JSON file, using \ as an escape symbol. Proxies also interpret curly braces. See a full set of examples below.
-
-|Character|Escaped Character|Example|
-|-|-|-|
-|{ or }|{{ or }}|`{{ example }}` --> `{ example }`
-| \ | \\\\ | `example.com\\text.html` --> `example.com\text.html`
-|"|\\\"| `\"example\"` --> `"example"`
-
-### <a name="requestOverrides"></a>Define a requestOverrides object
-
-The requestOverrides object defines changes made to the request when the back-end resource is called. The object is defined by the following properties:
-
-* **backend.request.method**: The HTTP method that's used to call the back-end.
-* **backend.request.querystring.\<ParameterName\>**: A query string parameter that can be set for the call to the back-end. Replace *\<ParameterName\>* with the name of the parameter that you want to set. Note that if an empty string is provided, the parameter is still included on the back-end request.
-* **backend.request.headers.\<HeaderName\>**: A header that can be set for the call to the back-end. Replace *\<HeaderName\>* with the name of the header that you want to set. Note that if an empty string is provided, the parameter is still included on the back-end request.
-
-Values can reference application settings and parameters from the original client request.
-
-An example configuration might look like the following:
-
-```json
-{
- "$schema": "http://json.schemastore.org/proxies",
- "proxies": {
- "proxy1": {
- "matchCondition": {
- "methods": [ "GET" ],
- "route": "/api/{test}"
- },
- "backendUri": "https://<AnotherApp>.azurewebsites.net/api/<FunctionName>",
- "requestOverrides": {
- "backend.request.headers.Accept": "application/xml",
- "backend.request.headers.x-functions-key": "%ANOTHERAPP_API_KEY%"
- }
- }
- }
-}
-```
-
-### <a name="responseOverrides"></a>Define a responseOverrides object
-
-The requestOverrides object defines changes that are made to the response that's passed back to the client. The object is defined by the following properties:
-
-* **response.statusCode**: The HTTP status code to be returned to the client.
-* **response.statusReason**: The HTTP reason phrase to be returned to the client.
-* **response.body**: The string representation of the body to be returned to the client.
-* **response.headers.\<HeaderName\>**: A header that can be set for the response to the client. Replace *\<HeaderName\>* with the name of the header that you want to set. If you provide the empty string, the header is not included on the response.
-
-Values can reference application settings, parameters from the original client request, and parameters from the back-end response.
-
-An example configuration might look like the following:
-
-```json
-{
- "$schema": "http://json.schemastore.org/proxies",
- "proxies": {
- "proxy1": {
- "matchCondition": {
- "methods": [ "GET" ],
- "route": "/api/{test}"
- },
- "responseOverrides": {
- "response.body": "Hello, {test}",
- "response.headers.Content-Type": "text/plain"
- }
- }
- }
-}
-```
-> [!NOTE]
-> In this example, the response body is set directly, so no `backendUri` property is needed. The example shows how you might use Azure Functions Proxies for mocking APIs.
-
-[Azure portal]: https://portal.azure.com
-[HTTP triggers]: ./functions-bindings-http-webhook.md
-[Modify the back-end request]: #modify-backend-request
-[Modify the response]: #modify-response
-[Define a requestOverrides object]: #requestOverrides
-[Define a responseOverrides object]: #responseOverrides
-[application settings]: #use-appsettings
-[Use variables]: #using-variables
-[parameters from the original client request]: #request-parameters
-[parameters from the back-end response]: #response-parameters
+> [!div class="nextstepaction"]
+> [Expose serverless APIs from HTTP endpoints using Azure API Management](functions-openapi-definition.md)
azure-functions Legacy Proxies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/legacy-proxies.md
+
+ Title: Work with legacy Azure Functions Proxies
+description: Overview of how to use the legacy Proxies feature in Azure Functions
+ Last updated : 09/14/2022 ++
+# Work with legacy proxies
+
+> To help make it easier to migrate from existing proxy implementations, this article links to equivalent API Management content, when available.
+
+This article explains how to configure and work with Azure Functions Proxies. With this feature, you can specify endpoints on your function app that are implemented by another resource. You can use these proxies to break a large API into multiple function apps (as in a microservice architecture), while still presenting a single API surface for clients.
+
+Standard Functions billing applies to proxy executions. For more information, see [Azure Functions pricing](https://azure.microsoft.com/pricing/details/functions/).
+
+## <a name="create"></a>Create a proxy
+
+> [!IMPORTANT]
+> For equivalent content using API Management, see [Expose serverless APIs from HTTP endpoints using Azure API Management](functions-openapi-definition.md).
+
+Proxies are defined in the _proxies.json_ file in the root of your function app. The steps in this section show you how to use the Azure portal to create this file in your function app. Not all languages and operating system combinations support in-portal editing. If you can't modify your function app files in the portal, you can instead create and deploy the equivalent `proxies.json` file from the root of your local project folder. To learn more about portal editing support, see [Language support details](functions-create-function-app-portal.md#language-support-details).
+
+1. Open the [Azure portal], and then go to your function app.
+1. In the left pane, select **Proxies** and then select **+Add**.
+1. Provide a name for your proxy.
+1. Configure the endpoint that's exposed on this function app by specifying the **route template** and **HTTP methods**. These parameters behave according to the rules for [HTTP triggers].
+1. Set the **backend URL** to another endpoint. This endpoint could be a function in another function app, or it could be any other API. The value doesn't need to be static, and it can reference [application settings] and [parameters from the original client request].
+1. Select **Create**.
+
+Your proxy now exists as a new endpoint on your function app. From a client perspective, it's the same as an HttpTrigger in Functions. You can try out your new proxy by copying the **Proxy URL** and testing it with your favorite HTTP client.
+
+## <a name="modify-requests-responses"></a>Modify requests and responses
+
+> [!IMPORTANT]
+> API Management lets you change API behavior through configuration using policies. Policies are a collection of statements that are run sequentially on the request or response of an API. For more information about API Management policies, see [Policies in Azure API Management](../api-management/api-management-howto-policies.md).
+
+With proxies, you can modify requests to and responses from the back-end. These transformations can use variables as defined in [Use variables].
+
+### <a name="modify-backend-request"></a>Modify the back-end request
+
+By default, the back-end request is initialized as a copy of the original request. In addition to setting the back-end URL, you can make changes to the HTTP method, headers, and query string parameters. The modified values can reference [application settings] and [parameters from the original client request].
+
+Back-end requests can be modified in the portal by expanding the *request override* section of the proxy detail page.
+
+### <a name="modify-response"></a>Modify the response
+
+By default, the client response is initialized as a copy of the back-end response. You can make changes to the response's status code, reason phrase, headers, and body. The modified values can reference [application settings], [parameters from the original client request], and [parameters from the back-end response].
+
+Back-end responses can be modified in the portal by expanding the *response override* section of the proxy detail page.
+
+## <a name="using-variables"></a>Use variables
+
+The configuration for a proxy doesn't need to be static. You can condition it to use variables from the original client request, the back-end response, or application settings.
+
+### <a name="reference-localhost"></a>Reference local functions
+You can use `localhost` to reference a function inside the same function app directly, without a roundtrip proxy request.
+
+`"backendUri": "https://localhost/api/httptriggerCSharp1"` references a local HTTP triggered function at the route `/api/httptriggerCSharp1`.
+
+>[!Note]
+>If your function uses the *function*, *admin*, or *sys* authorization level, you need to provide the code and clientId, as in the original function URL. In this case, the reference would look like: `"backendUri": "https://localhost/api/httptriggerCSharp1?code=<keyvalue>&clientId=<keyname>"`. We recommend storing these keys in [application settings] and referencing them in your proxies, which avoids storing secrets in your source code.
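+As a hypothetical sketch (the function name `<FunctionName>` and the setting name `FUNCTION_KEY` are placeholders, not taken from this article), a proxy that calls a key-protected local function while keeping the key in application settings might look like this in *proxies.json*:
+
+```json
+{
+  "$schema": "http://json.schemastore.org/proxies",
+  "proxies": {
+    "localFunction": {
+      "matchCondition": {
+        "route": "/api/local"
+      },
+      "backendUri": "https://localhost/api/<FunctionName>?code=%FUNCTION_KEY%"
+    }
+  }
+}
+```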
+
+### <a name="request-parameters"></a>Reference request parameters
+
+You can use request parameters as inputs to the back-end URL property or as part of modifying requests and responses. Some parameters can be bound from the route template that's specified in the base proxy configuration, and others can come from properties of the incoming request.
+
+#### Route template parameters
+Parameters that are used in the route template are available to be referenced by name. The parameter names are enclosed in braces ({}).
+
+For example, if a proxy has a route template, such as `/pets/{petId}`, the back-end URL can include the value of `{petId}`, as in `https://<AnotherApp>.azurewebsites.net/api/pets/{petId}`. If the route template terminates in a wildcard, such as `/api/{*restOfPath}`, the value `{restOfPath}` is a string representation of the remaining path segments from the incoming request.
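+For example, the `/pets/{petId}` route described above could be configured as follows (the proxy name and back-end app name are placeholders):
+
+```json
+{
+  "$schema": "http://json.schemastore.org/proxies",
+  "proxies": {
+    "petById": {
+      "matchCondition": {
+        "methods": [ "GET" ],
+        "route": "/pets/{petId}"
+      },
+      "backendUri": "https://<AnotherApp>.azurewebsites.net/api/pets/{petId}"
+    }
+  }
+}
+```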
+
+#### Additional request parameters
+In addition to the route template parameters, the following values can be used in config values:
+
+* **{request.method}**: The HTTP method that's used on the original request.
+* **{request.headers.\<HeaderName\>}**: A header that can be read from the original request. Replace *\<HeaderName\>* with the name of the header that you want to read. If the header isn't included on the request, the value will be the empty string.
+* **{request.querystring.\<ParameterName\>}**: A query string parameter that can be read from the original request. Replace *\<ParameterName\>* with the name of the parameter that you want to read. If the parameter isn't included on the request, the value will be the empty string.
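+As an illustrative sketch (the header names and back-end app name are placeholders), these values can be used in a request override to forward details of the original request to the back-end:
+
+```json
+{
+  "$schema": "http://json.schemastore.org/proxies",
+  "proxies": {
+    "forwardHeader": {
+      "matchCondition": {
+        "route": "/api/items"
+      },
+      "backendUri": "https://<AnotherApp>.azurewebsites.net/api/items",
+      "requestOverrides": {
+        "backend.request.headers.x-original-method": "{request.method}",
+        "backend.request.headers.x-correlation-id": "{request.headers.x-correlation-id}"
+      }
+    }
+  }
+}
+```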
+
+### <a name="response-parameters"></a>Reference back-end response parameters
+
+Response parameters can be used as part of modifying the response to the client. The following values can be used in config values:
+
+* **{backend.response.statusCode}**: The HTTP status code that's returned on the back-end response.
+* **{backend.response.statusReason}**: The HTTP reason phrase that's returned on the back-end response.
+* **{backend.response.headers.\<HeaderName\>}**: A header that can be read from the back-end response. Replace *\<HeaderName\>* with the name of the header you want to read. If the header isn't included on the response, the value will be the empty string.
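+For instance, a response override could surface the back-end status code to the client in a custom header (a sketch; the header name and back-end app name are placeholders):
+
+```json
+{
+  "$schema": "http://json.schemastore.org/proxies",
+  "proxies": {
+    "statusEcho": {
+      "matchCondition": {
+        "route": "/api/status"
+      },
+      "backendUri": "https://<AnotherApp>.azurewebsites.net/api/status",
+      "responseOverrides": {
+        "response.headers.x-backend-status": "{backend.response.statusCode}"
+      }
+    }
+  }
+}
+```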
+
+### <a name="use-appsettings"></a>Reference application settings
+
+You can also reference [application settings defined for the function app](./functions-how-to-use-azure-function-app-settings.md) by surrounding the setting name with percent signs (%).
+
+For example, a back-end URL of *https://%ORDER_PROCESSING_HOST%/api/orders* would have "%ORDER_PROCESSING_HOST%" replaced with the value of the ORDER_PROCESSING_HOST setting.
+
+> [!TIP]
+> Use application settings for back-end hosts when you have multiple deployments or test environments. That way, you can make sure that you are always talking to the right back-end for that environment.
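+Using the `ORDER_PROCESSING_HOST` setting from the example above, a complete proxy entry might look like the following sketch (the proxy name is a placeholder):
+
+```json
+{
+  "$schema": "http://json.schemastore.org/proxies",
+  "proxies": {
+    "orders": {
+      "matchCondition": {
+        "route": "/api/orders"
+      },
+      "backendUri": "https://%ORDER_PROCESSING_HOST%/api/orders"
+    }
+  }
+}
+```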
+
+## <a name="debugProxies"></a>Troubleshoot Proxies
+
+By adding the flag `"debug":true` to any proxy in your `proxies.json`, you enable debug logging. Logs are stored in `D:\home\LogFiles\Application\Proxies\DetailedTrace` and are accessible through the advanced tools (Kudu). Any HTTP responses also contain a `Proxy-Trace-Location` header with a URL to access the log file.
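+A proxy entry with debug logging enabled might look like the following sketch (the proxy, app, and function names are placeholders):
+
+```json
+{
+  "$schema": "http://json.schemastore.org/proxies",
+  "proxies": {
+    "proxy1": {
+      "debug": true,
+      "matchCondition": {
+        "route": "/api/{test}"
+      },
+      "backendUri": "https://<AnotherApp>.azurewebsites.net/api/<FunctionName>"
+    }
+  }
+}
+```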
+
+You can debug a proxy from the client side by adding a `Proxy-Trace-Enabled` header set to `true`. This will also log a trace to the file system, and return the trace URL as a header in the response.
+
+### Block proxy traces
+
+For security reasons, you may not want to allow anyone calling your service to generate a trace. Callers can't access the trace contents without your sign-in credentials, but generating the trace consumes resources and exposes the fact that you're using Functions Proxies.
+
+Disable traces altogether by adding `"debug":false` to any particular proxy in your `proxies.json`.
+
+## Advanced configuration
+
+The proxies that you configure are stored in a *proxies.json* file, which is located in the root of a function app directory. You can manually edit this file and deploy it as part of your app when you use any of the [deployment methods](./functions-continuous-deployment.md) that Functions supports.
+
+> [!TIP]
+> If you have not set up one of the deployment methods, you can also work with the *proxies.json* file in the portal. Go to your function app, select **Platform features**, and then select **App Service Editor**. By doing so, you can view the entire file structure of your function app and then make changes.
+
+*Proxies.json* is defined by a proxies object, which is composed of named proxies and their definitions. Optionally, if your editor supports it, you can reference a [JSON schema](http://json.schemastore.org/proxies) for code completion. An example file might look like the following:
+
+```json
+{
+ "$schema": "http://json.schemastore.org/proxies",
+ "proxies": {
+ "proxy1": {
+ "matchCondition": {
+ "methods": [ "GET" ],
+ "route": "/api/{test}"
+ },
+ "backendUri": "https://<AnotherApp>.azurewebsites.net/api/<FunctionName>"
+ }
+ }
+}
+```
+
+Each proxy has a friendly name, such as *proxy1* in the preceding example. The corresponding proxy definition object is defined by the following properties:
+
+* **matchCondition**: Required--an object defining the requests that trigger the execution of this proxy. It contains two properties that are shared with [HTTP triggers]:
+ * _methods_: An array of the HTTP methods that the proxy responds to. If it isn't specified, the proxy responds to all HTTP methods on the route.
+ * _route_: Required--defines the route template, controlling which request URLs your proxy responds to. Unlike in HTTP triggers, there's no default value.
+* **backendUri**: The URL of the back-end resource to which the request should be proxied. This value can reference application settings and parameters from the original client request. If this property isn't included, Azure Functions responds with an HTTP 200 OK.
+* **requestOverrides**: An object that defines transformations to the back-end request. See [Define a requestOverrides object].
+* **responseOverrides**: An object that defines transformations to the client response. See [Define a responseOverrides object].
+
+> [!NOTE]
+> The *route* property in Azure Functions Proxies does not honor the *routePrefix* property of the Function App host configuration. If you want to include a prefix such as `/api`, it must be included in the *route* property.
+
+### <a name="disableProxies"></a> Disable individual proxies
+
+You can disable individual proxies by adding `"disabled": true` to the proxy in the `proxies.json` file. This will cause any requests meeting the matchCondition to return 404.
+```json
+{
+ "$schema": "http://json.schemastore.org/proxies",
+ "proxies": {
+ "Root": {
+ "disabled":true,
+ "matchCondition": {
+ "route": "/example"
+ },
+ "backendUri": "https://<AnotherApp>.azurewebsites.net/api/<FunctionName>"
+ }
+ }
+}
+```
+
+### <a name="applicationSettings"></a> Application Settings
+
+The proxy behavior can be controlled by several app settings. They're all outlined in the [Functions App Settings reference](./functions-app-settings.md):
+
+* [AZURE_FUNCTION_PROXY_DISABLE_LOCAL_CALL](./functions-app-settings.md#azure_function_proxy_disable_local_call)
+* [AZURE_FUNCTION_PROXY_BACKEND_URL_DECODE_SLASHES](./functions-app-settings.md#azure_function_proxy_backend_url_decode_slashes)
+
+### <a name="reservedChars"></a> Reserved Characters (string formatting)
+
+Proxies read all strings out of a JSON file, using `\` as an escape symbol. Proxies also interpret curly braces. The following table shows the full set of examples.
+
+|Character|Escaped Character|Example|
+|-|-|-|
+|{ or }|{{ or }}|`{{ example }}` --> `{ example }`
+| \ | \\\\ | `example.com\\text.html` --> `example.com\text.html`
+|"|\\\"| `\"example\"` --> `"example"`
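+Putting these rules together, a response body that contains literal braces and quotation marks could be written as follows (a sketch, not taken from this article; `{test}` is a route template parameter):
+
+```json
+{
+  "$schema": "http://json.schemastore.org/proxies",
+  "proxies": {
+    "escaped": {
+      "matchCondition": {
+        "route": "/api/{test}"
+      },
+      "responseOverrides": {
+        "response.body": "{{ \"greeting\": \"Hello, {test}\" }}",
+        "response.headers.Content-Type": "application/json"
+      }
+    }
+  }
+}
+```
+
+Here, `{{` and `}}` produce literal braces in the response, while `{test}` is replaced with the matched route value.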
+
+### <a name="requestOverrides"></a>Define a requestOverrides object
+
+The requestOverrides object defines changes made to the request when the back-end resource is called. The object is defined by the following properties:
+
+* **backend.request.method**: The HTTP method that's used to call the back-end.
+* **backend.request.querystring.\<ParameterName\>**: A query string parameter that can be set for the call to the back-end. Replace *\<ParameterName\>* with the name of the parameter that you want to set. If an empty string is provided, the parameter is still included on the back-end request.
+* **backend.request.headers.\<HeaderName\>**: A header that can be set for the call to the back-end. Replace *\<HeaderName\>* with the name of the header that you want to set. If an empty string is provided, the parameter is still included on the back-end request.
+
+Values can reference application settings and parameters from the original client request.
+
+An example configuration might look like the following:
+
+```json
+{
+ "$schema": "http://json.schemastore.org/proxies",
+ "proxies": {
+ "proxy1": {
+ "matchCondition": {
+ "methods": [ "GET" ],
+ "route": "/api/{test}"
+ },
+ "backendUri": "https://<AnotherApp>.azurewebsites.net/api/<FunctionName>",
+ "requestOverrides": {
+ "backend.request.headers.Accept": "application/xml",
+ "backend.request.headers.x-functions-key": "%ANOTHERAPP_API_KEY%"
+ }
+ }
+ }
+}
+```
+
+### <a name="responseOverrides"></a>Define a responseOverrides object
+
+The responseOverrides object defines changes that are made to the response that's passed back to the client. The object is defined by the following properties:
+
+* **response.statusCode**: The HTTP status code to be returned to the client.
+* **response.statusReason**: The HTTP reason phrase to be returned to the client.
+* **response.body**: The string representation of the body to be returned to the client.
+* **response.headers.\<HeaderName\>**: A header that can be set for the response to the client. Replace *\<HeaderName\>* with the name of the header that you want to set. If you provide the empty string, the header isn't included on the response.
+
+Values can reference application settings, parameters from the original client request, and parameters from the back-end response.
+
+An example configuration might look like the following:
+
+```json
+{
+ "$schema": "http://json.schemastore.org/proxies",
+ "proxies": {
+ "proxy1": {
+ "matchCondition": {
+ "methods": [ "GET" ],
+ "route": "/api/{test}"
+ },
+ "responseOverrides": {
+ "response.body": "Hello, {test}",
+ "response.headers.Content-Type": "text/plain"
+ }
+ }
+ }
+}
+```
+> [!NOTE]
+> In this example, the response body is set directly, so no `backendUri` property is needed. The example shows how you might use Azure Functions Proxies for mocking APIs.
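
For a GET to `/api/world`, the route template `/api/{test}` captures `test = "world"`, and the override produces the body `Hello, world`. A minimal sketch of that substitution (illustrative only; `render_response` is a hypothetical helper, not part of the Functions runtime):

```python
def render_response(template: str, route_params: dict) -> str:
    # Substitute each captured route parameter, e.g. {test}, into the
    # overridden response body template.
    out = template
    for name, value in route_params.items():
        out = out.replace("{" + name + "}", value)
    return out

# GET /api/world matches route /api/{test}, so {test} = "world"
print(render_response("Hello, {test}", {"test": "world"}))  # Hello, world
```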
+
+[Azure portal]: https://portal.azure.com
+[HTTP triggers]: ./functions-bindings-http-webhook.md
+[Modify the back-end request]: #modify-backend-request
+[Modify the response]: #modify-response
+[Define a requestOverrides object]: #requestOverrides
+[Define a responseOverrides object]: #responseOverrides
+[application settings]: #use-appsettings
+[Use variables]: #using-variables
+[parameters from the original client request]: #request-parameters
+[parameters from the back-end response]: #response-parameters
azure-government Azure Services In Fedramp Auditscope https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/compliance/azure-services-in-fedramp-auditscope.md
This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and
| [Azure Active Directory Provisioning Service](../../active-directory/app-provisioning/how-provisioning-works.md)| &#x2705; | &#x2705; | | [Azure AD Multi-Factor Authentication](../../active-directory/authentication/concept-mfa-howitworks.md) | &#x2705; | &#x2705; | | [Azure API for FHIR](../../healthcare-apis/azure-api-for-fhir/index.yml) | &#x2705; | &#x2705; |
+| [Azure App Service](../../app-service/index.yml) | &#x2705; | &#x2705; |
| [Azure Arc-enabled servers](../../azure-arc/servers/index.yml) | &#x2705; | &#x2705; | | **Service** | **FedRAMP High** | **DoD IL2** | | [Azure Cache for Redis](../../azure-cache-for-redis/index.yml) | &#x2705; | &#x2705; |
This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and
| [Virtual WAN](../../virtual-wan/index.yml) | &#x2705; | &#x2705; | | [VPN Gateway](../../vpn-gateway/index.yml) | &#x2705; | &#x2705; | | [Web Application Firewall](../../web-application-firewall/index.yml) | &#x2705; | &#x2705; |
-| [Web Apps (App Service)](../../app-service/index.yml) | &#x2705; | &#x2705; |
| [Windows 10 IoT Core Services](/windows-hardware/manufacture/iot/iotcoreservicesoverview) | &#x2705; | &#x2705; | **&ast;** FedRAMP High authorization for edge devices (such as Azure Data Box and Azure Stack Edge) applies only to Azure services that support on-premises, customer-managed devices. For example, FedRAMP High authorization for Azure Data Box covers datacenter infrastructure services and Data Box pod and disk service, which are the online software components supporting your Data Box hardware appliance. You are wholly responsible for the authorization package that covers the physical devices. For assistance with accelerating your onboarding and authorization of devices, contact your Microsoft account representative.
This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and
| [Azure Active Directory Domain Services](../../active-directory-domain-services/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | | [Azure AD Multi-Factor Authentication](../../active-directory/authentication/concept-mfa-howitworks.md) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | [Azure API for FHIR](../../healthcare-apis/azure-api-for-fhir/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
+| [Azure App Service](../../app-service/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
| [Azure Arc-enabled Kubernetes](../../azure-arc/kubernetes/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | | [Azure Arc-enabled servers](../../azure-arc/servers/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | | [Azure Cache for Redis](../../azure-cache-for-redis/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and
| [Azure Information Protection](/azure/information-protection/) **&ast;&ast;** | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | [Azure Kubernetes Service (AKS)](../../aks/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | [Azure Maps](../../azure-maps/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
-| [Azure Monitor](../../azure-monitor/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
-| Azure Monitor [Application Insights](../../azure-monitor/app/app-insights-overview.md) | | | | | &#x2705; |
-| Azure Monitor [Log Analytics](../../azure-monitor/logs/data-platform-logs.md) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
+| [Azure Monitor](../../azure-monitor/index.yml) (incl. [Application Insights](../../azure-monitor/app/app-insights-overview.md), [Log Analytics](../../azure-monitor/logs/data-platform-logs.md)) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
| [Azure NetApp Files](../../azure-netapp-files/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | | [Azure Policy](../../governance/policy/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | [Azure Policy's guest configuration](../../governance/machine-configuration/overview.md) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and
| [Virtual WAN](../../virtual-wan/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | [VPN Gateway](../../vpn-gateway/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | [Web Application Firewall](../../web-application-firewall/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
-| [Web Apps (App Service)](../../app-service/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
**&ast;** Authorizations for edge devices (such as Azure Data Box and Azure Stack Edge) apply only to Azure services that support on-premises, customer-managed devices. You are wholly responsible for the authorization package that covers the physical devices. For assistance with accelerating your onboarding and authorization of devices, contact your Microsoft account representative.
azure-monitor Azure Monitor Agent Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-migration.md
Previously updated : 08/04/2022 Last updated : 9/14/2022
-# Customer intent: As an IT manager, I want to understand if and when I should move from using legacy agents to Azure Monitor Agent.
+# Customer intent: As an IT manager, I want to understand how I should move from using legacy agents to Azure Monitor Agent.
# Migrate to Azure Monitor Agent from Log Analytics agent
-[Azure Monitor Agent (AMA)](./agents-overview.md) replaces the Log Analytics agent (also known as MMA and OMS) for both Windows and Linux machines Azure Monitor and introduces a simplified, flexible method of configuring collection configuration called [Data Collection Rules (DCRs)](../essentials/data-collection-rule-overview.md). This article outlines the benefits of migrating to Azure Monitor Agent (AMA) and provides guidance on how to implement a successful migration.
+[Azure Monitor Agent (AMA)](./agents-overview.md) replaces the Log Analytics agent (also known as MMA and OMS) for both Windows and Linux machines, in Azure and on premises. It introduces a simplified, flexible method of configuring collection configuration called [Data Collection Rules (DCRs)](../essentials/data-collection-rule-overview.md). This article outlines the benefits of migrating to Azure Monitor Agent (AMA) and provides guidance on how to implement a successful migration.
> [!IMPORTANT] > The Log Analytics agent will be [retired on **31 August, 2024**](https://azure.microsoft.com/updates/were-retiring-the-log-analytics-agent-in-azure-monitor-on-31-august-2024/). If you are currently using the Log Analytics agent with Azure Monitor or other supported features and services, you should start planning your migration to Azure Monitor Agent using the information in this article.
Azure Monitor Agent provides the following benefits over legacy agents:
Your migration plan to the Azure Monitor Agent should take into account: -- **Current and new feature requirements:** Review [Azure Monitor Agent's supported services and features](agents-overview.md#supported-services-and-features) to ensure that Azure Monitor Agent has the features you require. If you currently use unsupported features you can temporarily do without, consider migrating to benefit from other important features in the new agent. Use the [AMA Migration Helper](./azure-monitor-agent-migration-tools.md#using-ama-migration-helper-preview) to discover what solutions and features you're using the legacy agent for.
+- **Current and new feature requirements:** Review [Azure Monitor Agent's supported services and features](agents-overview.md#supported-services-and-features) to ensure that Azure Monitor Agent has the features you require. If you currently use unsupported features you can temporarily do without, consider migrating to the new agent to benefit from added security and reduced cost immediately. Use the [AMA Migration Helper](./azure-monitor-agent-migration-tools.md#using-ama-migration-helper-preview) to **discover what solutions and features you're using today that depend on the legacy agent**.
If you use Microsoft Sentinel, see [Gap analysis for Microsoft Sentinel](../../sentinel/ama-migrate.md#gap-analysis-between-agents) for a comparison of the extra data collected by Microsoft Sentinel. -- **Installing Azure Monitor Agent alongside a legacy agent:** If you're setting up a new environment with resources, such as deployment scripts and onboarding templates, and you still need a legacy agent, assess the effort of migrating to Azure Monitor Agent later. If the setup will take a significant amount of rework, install Azure Monitor Agent together with a legacy agent in your new environment to decrease the migration effort.
+- **Installing Azure Monitor Agent alongside a legacy agent:** If you're setting up a **new environment** with resources, such as deployment scripts and onboarding templates, and you still need a legacy agent, assess the effort of migrating to Azure Monitor Agent later. If the setup will take a significant amount of rework, install Azure Monitor Agent together with a legacy agent in your new environment to decrease the migration effort.
- Azure Monitor Agent can run alongside the legacy Log Analytics agents on the same machine so that you can continue to use existing functionality during evaluation or migration. While this allows you to begin the transition given the limitations:
+ Azure Monitor Agent can run alongside the legacy Log Analytics agents on the same machine so that you can continue to use existing functionality during evaluation or migration. While this allows you to begin the transition, ensure you understand the limitations:
- Be careful in collecting duplicate data from the same machine, which could skew query results and affect downstream features like alerts, dashboards or workbooks. For example, VM Insights uses the Log Analytics agent to send performance data to a Log Analytics workspace. You might also have configured the workspace to collect Windows events and Syslog events from agents.
-
If you install Azure Monitor Agent and create a data collection rule for these events and performance data, you'll collect duplicate data. If you're using both agents to collect the same type of data, make sure the agents are **collecting data from different machines** or **sending the data to different destinations**. Collecting duplicate data also generates more charges for data ingestion and retention. - Running two telemetry agents on the same machine consumes double the resources, including, but not limited to CPU, memory, storage space, and network bandwidth.
+## Prerequisites
+Review the [prerequisites](./azure-monitor-agent-manage.md#prerequisites) for using Azure Monitor Agent. For on-premises servers or servers managed by other clouds, [installing the Azure Arc agent](/azure/azure-arc/servers/agent-overview) is an important prerequisite that then helps to install the agent extension and other required extensions. Using Azure Arc for this purpose comes at no added cost, and it's not mandatory to use Azure Arc for server management overall (that is, you can continue using your existing on-premises management solutions). Once the Arc agent is installed, you can follow the same guidance below across Azure and on-premises machines for migration.
+## Migration testing
+To ensure safe deployment during migration, begin testing with a few resources running Azure Monitor Agent in your nonproduction environment. After you validate the data collected on these test resources, roll out to production by following the same steps.
-See [create new data collection rules](./data-collection-rule-azure-monitor-agent.md#create-data-collection-rule-and-association) to start collecting some of the existing data types. After you validate that data is flowing as expected with Azure Monitor Agent, check the `Category` column in the [Heartbeat](/azure/azure-monitor/reference/tables/heartbeat) table for the value *Azure Monitor Agent* for AMA collected data. Ensure it matches data flowing through the existing Log Analytics agent.
+See [create new data collection rules](./data-collection-rule-azure-monitor-agent.md#create-data-collection-rule-and-association) to start collecting some of the existing data types. Alternatively you can use the [DCR Config Generator](./azure-monitor-agent-migration-tools.md#installing-and-using-dcr-config-generator-preview) to convert existing legacy agent configuration into data collection rules.
+After you **validate** that data is flowing as expected with Azure Monitor Agent, check the `Category` column in the [Heartbeat](/azure/azure-monitor/reference/tables/heartbeat) table for the value *Azure Monitor Agent* for AMA collected data. Ensure it matches data flowing through the existing Log Analytics agent.
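
A side-by-side comparison query like the following can help with that check. This query is illustrative; the legacy Log Analytics agent reports other `Category` values (such as *Direct Agent*), so a row per agent type confirms both are reporting.

````KQL
// Compare heartbeat coverage by agent type over the last day (illustrative)
Heartbeat
| where TimeGenerated > ago(1d)
| summarize Computers = dcount(Computer) by Category
````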
## At-scale migration using Azure Policy

We recommend using [Azure Policy](../../governance/policy/overview.md) to migrate a large number of agents. Start by analyzing your current monitoring setup with the Log Analytics agent using the [AMA Migration Helper](./azure-monitor-agent-migration-tools.md#using-ama-migration-helper-preview) to find sources, such as virtual machines, virtual machine scale sets, and on-premises servers.
azure-monitor Proactive Email Notification https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/proactive-email-notification.md
# Smart Detection e-mail notification change >[!NOTE]
->You can migrate your Application Insight resources to alerts-bases smart detection (preview). The migration creates alert rules for the different smart detection modules. Once created, you can manage and configure these rules just like any other Azure Monitor alert rules. You can also configure action groups for these rules, thus enabling multiple methods of taking actions or triggering notification on new detections.
+>You can migrate your Application Insight resources to alerts-based smart detection (preview). The migration creates alert rules for the different smart detection modules. Once created, you can manage and configure these rules just like any other Azure Monitor alert rules. You can also configure action groups for these rules, thus enabling multiple methods of taking actions or triggering notification on new detections.
> > See [Smart Detection Alerts migration](./alerts-smart-detections-migration.md) for more details on the migration process and the behavior of smart detection after the migration.
azure-monitor Console https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/console.md
You need a subscription with [Microsoft Azure](https://azure.com). Sign in with a Microsoft account, which you might have for Windows, Xbox Live, or other Microsoft cloud services. Your team might have an organizational subscription to Azure: ask the owner to add you to it using your Microsoft account. > [!NOTE]
-> It is *highly recommended* to use the [Microsoft.ApplicationInsights.WorkerService](https://www.nuget.org/packages/Microsoft.ApplicationInsights.WorkerService) package and associated instructions from [here](./worker-service.md) for any Console Applications. This package is compatible with [Long Term Support (LTS) versions](https://dotnet.microsoft.com/platform/support/policy/dotnet-core) of .NET Core and .NET Framework or higher.
+> It is *highly recommended* to use the newer [Microsoft.ApplicationInsights.WorkerService](https://www.nuget.org/packages/Microsoft.ApplicationInsights.WorkerService) package and associated instructions from [here](./worker-service.md) for any Console Applications. This package is compatible with [Long Term Support (LTS) versions](https://dotnet.microsoft.com/platform/support/policy/dotnet-core) of .NET Core and .NET Framework or higher.
[!INCLUDE [azure-monitor-log-analytics-rebrand](../../../includes/azure-monitor-instrumentation-key-deprecation.md)]
You may initialize and configure Application Insights from the code or using `Ap
> [!NOTE] > Instructions referring to **ApplicationInsights.config** are only applicable to apps that are targeting the .NET Framework, and do not apply to .NET Core applications.
-### Using config file
+### Using config file
-By default, Application Insights SDK looks for `ApplicationInsights.config` file in the working directory when `TelemetryConfiguration` is being created
+For .NET Framework-based applications, by default, the Application Insights SDK looks for the `ApplicationInsights.config` file in the working directory when `TelemetryConfiguration` is created. Reading the config file isn't supported on .NET Core.
```csharp TelemetryConfiguration config = TelemetryConfiguration.Active; // Reads ApplicationInsights.config file if present
You may get a full example of the config file by installing latest version of [M
### Configuring telemetry collection from code > [!NOTE]
-> Reading config file is not supported on .NET Core. You may consider using [Application Insights SDK for ASP.NET Core](./asp-net-core.md)
+> Reading config file is not supported on .NET Core.
* During application start-up, create and configure `DependencyTrackingTelemetryModule` instance - it must be singleton and must be preserved for application lifetime.
azure-monitor Autoscale Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/autoscale/autoscale-best-practices.md
Title: Best practices for autoscale
-description: Autoscale patterns in Azure for Web Apps, Virtual Machine Scale sets, and Cloud Services
+description: Autoscale patterns in Azure for Web Apps, virtual machine scale sets, and Cloud Services
Previously updated : 04/22/2022 Last updated : 09/13/2022
In this example, you can have a situation in which the memory usage is over 90%
### Choose the appropriate statistic for your diagnostics metric

For diagnostics metrics, you can choose among *Average*, *Minimum*, *Maximum*, and *Total* as a metric to scale by. The most common statistic is *Average*.
-### Choose the thresholds carefully for all metric types
-We recommend carefully choosing different thresholds for scale-out and scale-in based on practical situations.
-We *do not recommend* autoscale settings like the examples below with the same or similar threshold values for out and in conditions:
-
-* Increase instances by 1 count when Thread Count >= 600
-* Decrease instances by 1 count when Thread Count <= 600
-
-Let's look at an example of what can lead to a behavior that may seem confusing. Consider the following sequence.
-
-1. Assume there are two instances to begin with and then the average number of threads per instance grows to 625.
-2. Autoscale scales out adding a third instance.
-3. Next, assume that the average thread count across instance falls to 575.
-4. Before scaling down, autoscale tries to estimate what the final state will be if it scaled in. For example, 575 x 3 (current instance count) = 1,725 / 2 (final number of instances when scaled down) = 862.5 threads. This means autoscale would have to immediately scale out again even after it scaled in, if the average thread count remains the same or even falls only a small amount. However, if it scaled up again, the whole process would repeat, leading to an infinite loop.
-5. To avoid this situation (termed "flapping"), autoscale does not scale down at all. Instead, it skips and reevaluates the condition again the next time the service's job executes. The flapping state can confuse many people because autoscale wouldn't appear to work when the average thread count was 575.
-
-Estimation during a scale-in is intended to avoid "flapping" situations, where scale-in and scale-out actions continually go back and forth. Keep this behavior in mind when you choose the same thresholds for scale-out and in.
-
-We recommend choosing an adequate margin between the scale-out and in thresholds. As an example, consider the following better rule combination.
-
-* Increase instances by 1 count when CPU% >= 80
-* Decrease instances by 1 count when CPU% <= 60
-
-In this case
-
-1. Assume there are 2 instances to start with.
-2. If the average CPU% across instances goes to 80, autoscale scales out adding a third instance.
-3. Now assume that over time the CPU% falls to 60.
-4. Autoscale's scale-in rule estimates the final state if it were to scale-in. For example, 60 x 3 (current instance count) = 180 / 2 (final number of instances when scaled down) = 90. So autoscale does not scale-in because it would have to scale-out again immediately. Instead, it skips scaling down.
-5. The next time autoscale checks, the CPU continues to fall to 50. It estimates again - 50 x 3 instance = 150 / 2 instances = 75, which is below the scale-out threshold of 80, so it scales in successfully to 2 instances.
-
-> [!NOTE]
-> If the autoscale engine detects flapping could occur as a result of scaling to the target number of instances, it will also try to scale to a different number of instances between the current count and the target count. If flapping does not occur within this range, autoscale will continue the scale operation with the new target.
### Considerations for scaling threshold values for special metrics

For special metrics such as Storage or Service Bus Queue length metric, the threshold is the average number of messages available per current number of instances. Carefully choose the threshold value for this metric.
We recommend you do NOT explicitly set your agent to only use TLS 1.2 unless absolutely necessary.
## Next Steps
+- [Autoscale flapping](/azure/azure-monitor/autoscale/autoscale-flapping)
- [Create an Activity Log Alert to monitor all autoscale engine operations on your subscription.](https://github.com/Azure/azure-quickstart-templates/tree/master/demos/monitor-autoscale-alert) - [Create an Activity Log Alert to monitor all failed autoscale scale in/scale out operations on your subscription](https://github.com/Azure/azure-quickstart-templates/tree/master/demos/monitor-autoscale-failed-alert)
azure-monitor Autoscale Flapping https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/autoscale/autoscale-flapping.md
+
+ Title: Autoscale flapping
+description: "This article describes flapping in autoscale and how to avoid it."
+++++ Last updated : 09/13/2022++
+#Customer intent: As a cloud administrator, I want to understand flapping so that I can configure autoscale correctly.
++
+# Flapping in Autoscale
+
+This article describes flapping in autoscale and how to avoid it.
+
+Flapping refers to a loop condition that causes a series of opposing scale events. Flapping happens when a scale event triggers the opposite scale event.
+
+Autoscale evaluates a pending scale-in action to see if it would cause flapping. In cases where flapping could occur, autoscale may skip the scale action and reevaluate at the next run, or autoscale may scale by less than the specified number of resource instances. The autoscale evaluation process occurs each time the autoscale engine runs, which is every 30 to 60 seconds, depending on the resource type.
+
+To ensure adequate resources, autoscale doesn't check for potential flapping before scale-out events. Autoscale will only defer a scale-in event to avoid flapping.
+
+For example, let's assume the following rules:
+
+* Scale out increasing by 1 instance when average CPU usage is above 50%.
+* Scale in decreasing the instance count by 1 instance when average CPU usage is lower than 30%.
+
+ In the table below at T0, when usage is at 56%, a scale-out action is triggered and results in 56% CPU usage across 2 instances. That gives an average of 28% for the scale set. As 28% is less than the scale-in threshold, autoscale should scale back in. Scaling in would return the scale set to 56% CPU usage, which triggers a scale-out action.
+
+|Time| Instance count| CPU% |CPU% per instance| Scale event| Resulting instance count|
+|--|--|--|--|--|--|
+T0|1|56%|56%|Scale out|2|
+T1|2|56%|28%|Scale in|1|
+T2|1|56%|56%|Scale out|2|
+T3|2|56%|28%|Scale in|1|
+
+If left uncontrolled, there would be an ongoing series of scale events. However, in this situation, the autoscale engine will defer the scale-in event at *T1* and reevaluate during the next autoscale run. The scale-in will only happen once the average CPU usage is below 30%.
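
The scale-in estimate described above can be sketched as a simple projection: divide the conserved metric total by the prospective instance count and compare it against the scale-out threshold. This is a hypothetical illustration of the documented behavior, not the actual autoscale engine:

```python
def scale_in_would_flap(metric_total: float, target_instances: int,
                        scale_out_threshold: float) -> bool:
    # Project the per-instance metric after a hypothetical scale-in; if it
    # would re-trigger the scale-out rule, autoscale defers the scale-in.
    projected = metric_total / target_instances
    return projected > scale_out_threshold

# T1 from the table: 56% total CPU across 2 instances; the scale-out rule
# fires above 50%. Scaling in to 1 instance projects 56% per instance,
# so the scale-in is deferred.
print(scale_in_would_flap(56.0, target_instances=1, scale_out_threshold=50.0))  # True
```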
+
+Flapping is often caused by:
+
+* Small or no margins between thresholds
+* Scaling by more than one instance
+* Scaling in and out using different metrics
+
+## Small or no margins between thresholds
+
+To avoid flapping, keep adequate margins between scaling thresholds.
+
+For example, the following rules, which have no margin between the thresholds, cause flapping:
+
+* Scale out when thread count >=600
+* Scale in when thread count < 600
++
+The table below shows a potential outcome of these autoscale rules:
+
+|Time| Instance count| Thread count|Thread count per instance| Scale event| Resulting instance count|
+|--|--|--|--|--|--|
+T0|2|1250|625|Scale out|3|
+T1|3|1250|417|Scale in|2|
+
+* At time T0, there are two instances handling 1250 threads, or 625 threads per instance. Autoscale scales out to three instances.
+* Following the scale-out, at T1, we have the same 1250 threads, but with three instances, only 417 threads per instance. A scale-in event is triggered.
+* Before scaling in, autoscale evaluates what would happen if the scale-in event occurs. In this example, 1250 / 2 = 625, that is, 625 threads per instance. Autoscale would have to immediately scale out again after it scaled in. If it scaled out again, the process would repeat, leading to a flapping loop.
+* To avoid this situation, autoscale doesn't scale in. Autoscale skips the current scale event and reevaluates the rule in the next execution cycle.
+
+In this case, it looks like autoscale isn't working since no scale event takes place. Check the *Run history* tab on the autoscale setting page to see if there's any flapping.
++
+Setting an adequate margin between thresholds avoids the above scenario. For example:
+
+* Scale out when thread count >=600
+* Scale in when thread count < 400
++
+If the scale-in threshold is 400 threads per instance, the total thread count across three instances would have to drop below 1200 before a scale-in event would take place. See the table below.
+
+|Time| Instance count| Thread count|Thread count per instance| Scale event| Resulting instance count|
+|--|--|--|--|--|--|
+T0|2|1250|625|Scale out|3|
T1|3|1250|417|No scale event|3|
T2|3|1180|394|Scale in|2|
T3|2|1180|590|No scale event|2|
+
+## Scaling by more than one instance
+
+To avoid flapping when scaling in or out by more than one instance, autoscale may scale by less than the number of instances specified in the rule.
+
+For example, the following rules can cause flapping:
+
+* Scale out by 20 when the request count >=200 per instance.
+* OR when CPU > 70% per instance.
+* Scale in by 10 when the request count <=50 per instance.
++
+The table below shows a potential outcome of these autoscale rules:
+
+|Time|Number of instances|CPU |Request count| Scale event| Resulting instances|Comments|
+|--|--|--|--|--|--|--|
+|T0|30|65%|3000, or 100 per instance.|No scale event|30|
+|T1|30|65%|1500, or 50 per instance.| Scale in by 3 instances |27|Scaling in by 10 would cause an estimated CPU rise above 70%, leading to a scale-out event.|
+
+At time T0, the app is running with 30 instances, a total request count of 3000, and a CPU usage of 65% per instance.
+
+At T1, when the request count drops to 1500 requests, or 50 requests per instance, autoscale will try to scale in by 10 instances to 20. However, autoscale estimates that the CPU load for 20 instances will be above 70%, causing a scale-out event.
+
+To avoid flapping, the autoscale engine estimates the CPU usage for instance counts above 20 until it finds an instance count where all metrics are within the defined thresholds:
+
+* Keep the CPU below 70%.
+* Keep the number of requests per instance above 50.
+* Reduce the number of instances below 30.
+
+In this situation, autoscale may scale in by 3, from 30 to 27 instances, to satisfy the rules, even though the rule specifies a decrease of 10. A log message is written to the activity log with a description that includes *Scale down will occur with updated instance count to avoid flapping*.
+
+If autoscale can't find a suitable number of instances, it will skip the scale-in event and reevaluate during the next cycle.
+
+> [!NOTE]
+> If the autoscale engine detects that flapping could occur as a result of scaling to the target number of instances, it will also try to scale to a lower number of instances between the current count and the target count. If flapping does not occur within this range, autoscale will continue the scale operation with the new target.
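
A rough model of this search follows, under the simplifying assumption that both metrics scale linearly with instance count. The function name and numbers are illustrative (they differ from the article's example, whose engine-internal estimation isn't published):

```python
def pick_scale_in_target(req_total: float, cpu_total_pct: float,
                         current: int, step: int,
                         cpu_max: float = 70.0, req_min: float = 50.0):
    # Try the full scale-in first, then progressively smaller scale-ins, and
    # return the largest reduction whose projected per-instance metrics stay
    # within the thresholds.
    for target in range(current - step, current):
        if cpu_total_pct / target <= cpu_max and req_total / target >= req_min:
            return target
    return None  # no safe target found: skip the scale-in this cycle

# 30 instances, 1500 total requests, 1800 total CPU percentage points:
# scaling in by the full 10 would project 90% CPU per instance, so a
# smaller reduction (to 26 instances) is chosen instead.
print(pick_scale_in_target(1500, 1800, current=30, step=10))  # 26
```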
+
+## Log files
+
+Find flapping in the activity log with the following query:
+
+````KQL
+// Activity log, CategoryValue: Autoscale
+// Lists latest Autoscale operations from the activity log, with OperationNameValue == "Microsoft.Insights/AutoscaleSettings/Flapping/Action"
+AzureActivity
+|where CategoryValue =="Autoscale" and OperationNameValue =="Microsoft.Insights/AutoscaleSettings/Flapping/Action"
+|sort by TimeGenerated desc
+````
+
+Below is an example of an activity log record for flapping:
++
+````JSON
+{
+"eventCategory": "Autoscale",
+"eventName": "FlappingOccurred",
+"operationId": "ffd31c67-1438-47a5-bee4-1e3a102cf1c2",
+"eventProperties":
+ "{"Description":"Scale down will occur with updated instance count to avoid flapping.
+ Resource: '/subscriptions/d1234567-9876-a1b2-a2b1-123a567b9f8767/resourcegroups/ed-rg-001/providers/Microsoft.Web/serverFarms/ScaleableAppServicePlan'.
+ Current instance count: '6',
+ Intended new instance count: '1'.
+ Actual new instance count: '4'",
+ "ResourceName":"/subscriptions/d1234567-9876-a1b2-a2b1-123a567b9f8767/resourcegroups/ed-rg-001/providers/Microsoft.Web/serverFarms/ScaleableAppServicePlan",
+ "OldInstancesCount":6,
+ "NewInstancesCount":4,
+ "ActiveAutoscaleProfile":{"Name":"Auto created scale condition",
+ "Capacity":{"Minimum":"1","Maximum":"30","Default":"1"},
+ "Rules":[{"MetricTrigger":{"Name":"Requests","Namespace":"microsoft.web/sites","Resource":"/subscriptions/d1234567-9876-a1b2-a2b1-123a567b9f8767/resourceGroups/ed-rg-001/providers/Microsoft.Web/sites/ScaleableWebApp1","ResourceLocation":"West Central US","TimeGrain":"PT1M","Statistic":"Average","TimeWindow":"PT1M","TimeAggregation":"Maximum","Operator":"GreaterThanOrEqual","Threshold":3.0,"Source":"/subscriptions/d1234567-9876-a1b2-a2b1-123a567b9f8767/resourceGroups/ed-rg-001/providers/Microsoft.Web/sites/ScaleableWebApp1","MetricType":"MDM","Dimensions":[],"DividePerInstance":true},"ScaleAction":{"Direction":"Increase","Type":"ChangeCount","Value":"10","Cooldown":"PT1M"}},{"MetricTrigger":{"Name":"Requests","Namespace":"microsoft.web/sites","Resource":"/subscriptions/d1234567-9876-a1b2-a2b1-123a567b9f8767/resourceGroups/ed-rg-001/providers/Microsoft.Web/sites/ScaleableWebApp1","ResourceLocation":"West Central US","TimeGrain":"PT1M","Statistic":"Max","TimeWindow":"PT1M","TimeAggregation":"Maximum","Operator":"LessThan","Threshold":3.0,"Source":"/subscriptions/d1234567-9876-a1b2-a2b1-123a567b9f8767/resourceGroups/ed-rg-001/providers/Microsoft.Web/sites/ScaleableWebApp1","MetricType":"MDM","Dimensions":[],"DividePerInstance":true},"ScaleAction":{"Direction":"Decrease","Type":"ChangeCount","Value":"5","Cooldown":"PT1M"}}]}}",
+"eventDataId": "b23ae911-55d0-4881-8684-fc74227b2ddb",
+"eventSubmissionTimestamp": "2022-09-13T07:20:41.1589076Z",
+"resource": "scaleableappserviceplan",
+"resourceGroup": "ED-RG-001",
+"resourceProviderValue": "MICROSOFT.WEB",
+"subscriptionId": "D1234567-9876-A1B2-A2B1-123A567B9F876",
+"activityStatusValue": "Succeeded"
+}
+````
+
+## Next steps
+
+To learn more about autoscale, see the following resources:
+
+* [Overview of common autoscale patterns](/azure/azure-monitor/autoscale/autoscale-common-scale-patterns)
+* [Automatically scale a virtual machine scale set](/azure/virtual-machine-scale-sets/tutorial-autoscale-powershell)
+* [Use autoscale actions to send email and webhook alert notifications](/azure/azure-monitor/autoscale/autoscale-webhook-email)
azure-monitor Autoscale Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/autoscale/autoscale-get-started.md
You can make changes in JSON directly, if necessary. These changes will be refle
### Cool-down period effects
-Autoscale uses a cool-down period to prevent "flapping," which is the rapid, repetitive up-and-down scaling of instances. For more information, see [Autoscale evaluation steps](autoscale-understanding-settings.md#autoscale-evaluation). For other valuable information on flapping and understanding how to monitor the autoscale engine, see [Autoscale best practices](autoscale-best-practices.md#choose-the-thresholds-carefully-for-all-metric-types) and [Troubleshooting autoscale](autoscale-troubleshoot.md), respectively.
+Autoscale uses a cool-down period to prevent "flapping," which is the rapid, repetitive up-and-down scaling of instances. For more information, see [Autoscale evaluation steps](autoscale-understanding-settings.md#autoscale-evaluation). For other valuable information on flapping and understanding how to monitor the autoscale engine, see [Flapping in Autoscale](autoscale-flapping.md) and [Troubleshooting autoscale](autoscale-troubleshoot.md), respectively.
## Route traffic to healthy instances (App Service)
azure-monitor Continuous Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/continuous-monitoring.md
- Title: Continuous monitoring with Azure Monitor | Microsoft Docs
-description: Describes specific steps for using Azure Monitor to enable continuous monitoring throughout your workflows.
--- Previously updated : 06/07/2022----
-# Continuous monitoring with Azure Monitor
-
-Continuous monitoring refers to the process and technology required to incorporate monitoring across each phase of your DevOps and IT operations lifecycles. It helps to continuously ensure the health, performance, and reliability of your application and infrastructure as it moves from development to production. Continuous monitoring builds on the concepts of continuous integration and continuous deployment (CI/CD). CI/CD helps you develop and deliver software faster and more reliably to provide continuous value to your users.
-
-[Azure Monitor](overview.md) is the unified monitoring solution in Azure that provides full-stack observability across applications and infrastructure in the cloud and on-premises. It works seamlessly with [Visual Studio and Visual Studio Code](https://visualstudio.microsoft.com/) during development and test. It integrates with [Azure DevOps](/azure/devops/user-guide/index) for release management and work item management during deployment and operations. It even integrates across the IT system management (ITSM) and SIEM tools of your choice to help track issues and incidents within your existing IT processes.
-
-This article describes specific steps for using Azure Monitor to enable continuous monitoring throughout your workflows. Links to other documentation provide information on implementing different features.
-
-## Enable monitoring for all your applications
-
-To gain observability across your entire environment, you need to enable monitoring on all your web applications and services. This way, you can easily visualize end-to-end transactions and connections across all the components. For example:
--- [Azure DevOps projects](../devops-project/overview.md) give you a simplified experience with your existing code and Git repository. You can also choose from one of the sample applications to create a CI/CD pipeline to Azure.-- [Continuous monitoring in your DevOps release pipeline](./app/continuous-monitoring.md) allows you to gate or roll back your deployment based on monitoring data.-- [Status Monitor](./app/status-monitor-v2-overview.md) allows you to instrument a live .NET app on Windows with Application Insights, without having to modify or redeploy your code.-- If you have access to the code for your application, enable full monitoring with [Application Insights](./app/app-insights-overview.md) by installing the Azure Monitor Application Insights SDK for [.NET](./app/asp-net.md), [.NET Core](./app/asp-net-core.md), [Java](./app/java-in-process-agent.md), [Node.js](./app/nodejs-quick-start.md), or [any other programming languages](./app/platforms.md). Full monitoring allows you to specify custom events, metrics, or page views that are relevant to your application and your business.-
-## Enable monitoring for your entire infrastructure
-
-Applications are only as reliable as their underlying infrastructure. Having monitoring enabled across your entire infrastructure will help you achieve full observability and make it easier to discover a potential root cause when something fails. Azure Monitor helps you track the health and performance of your entire hybrid infrastructure including resources such as VMs, containers, storage, and network. For example, you can:
--- Get [platform metrics, activity logs, and diagnostics logs](data-sources.md) automatically from most of your Azure resources with no configuration.-- Enable deeper monitoring for VMs with [VM insights](vm/vminsights-overview.md).-- Enable deeper monitoring for Azure Kubernetes Service (AKS) clusters with [Container insights](containers/container-insights-overview.md).-- Add [monitoring solutions](./monitor-reference.md) for different applications and services in your environment.-
-[Infrastructure as code](/azure/devops/learn/what-is-infrastructure-as-code) is the management of infrastructure in a descriptive model, using the same versioning that DevOps teams use for source code. It adds reliability and scalability to your environment and allows you to use similar processes that are used to manage your applications. For example, you can:
--- Use [Azure Resource Manager templates](./logs/resource-manager-workspace.md) to enable monitoring and configure alerts over a large set of resources.-- Use [Azure Policy](../governance/policy/overview.md) to enforce different rules over your resources. Azure Policy ensures that those resources stay compliant with your corporate standards and service level agreements.-
-## Combine resources in Azure resource groups
-
-A typical application on Azure today includes multiple resources such as VMs and app services or microservices hosted on Azure Cloud Services, AKS clusters, or Azure Service Fabric. These applications frequently use dependencies like Azure Event Hubs, Azure Storage, Azure SQL, and Azure Service Bus. For example, you can:
--- Combine resources in Azure resource groups to get full visibility across all your resources that make up your different applications. [Resource group insights](./insights/resource-group-insights.md) provides a simple way to keep track of the health and performance of your entire full-stack application and enables drilling down into respective components for any investigations or debugging.-
-## Ensure quality through continuous deployment
-
-CI/CD allows you to automatically integrate and deploy code changes to your application based on the results of automated testing. It streamlines the deployment process and ensures the quality of any changes before they move into production. For example, you can:
--- Use [Azure Pipelines](/azure/devops/pipelines) to implement continuous deployment and automate your entire process from code commit to production based on your CI/CD tests.-- Use [quality gates](/azure/devops/pipelines/release/approvals/gates) to integrate monitoring into your pre-deployment or post-deployment. Quality gates ensure that you're meeting the key health and performance metrics, also known as KPIs, as your applications move from development to production. They also ensure that any differences in the infrastructure environment or scale aren't negatively affecting your KPIs.-- [Maintain separate monitoring instances](./app/separate-resources.md) between your different deployment environments, such as Dev, Test, Canary, and Prod. Separate monitoring instances ensure that collected data is relevant across the associated applications and infrastructure. If you need to correlate data across environments, you can use [multi-resource charts in metrics explorer](./essentials/metrics-charts.md) or create [cross-resource queries in Azure Monitor](logs/cross-workspace-query.md).-
-## Create actionable alerts with actions
-
-A critical aspect of monitoring is proactively notifying administrators of any current and predicted issues. For example, you can:
--- Create [alerts in Azure Monitor](./alerts/alerts-overview.md) based on logs and metrics to identify predictable failure states. You should have a goal of making all alerts actionable, which means that they represent actual critical conditions and seek to reduce false positives. Use [dynamic thresholds](alerts/alerts-dynamic-thresholds.md) to automatically calculate baselines on metric data rather than defining your own static thresholds.-- Define actions for alerts to use the most effective means of notifying your administrators. Available [actions for notification](alerts/action-groups.md#create-an-action-group-by-using-the-azure-portal) are SMS, emails, push notifications, or voice calls.-- Use more advanced actions to [connect to your ITSM tool](alerts/itsmc-overview.md) or other alert management systems through [webhooks](alerts/activity-log-alerts-webhook.md).-- Remediate situations identified in alerts as well with [Azure Automation runbooks](../automation/automation-webhooks.md) or [Azure Logic Apps](/connectors/custom-connectors/create-webhook-trigger) that can be launched from an alert by using webhooks.-- Use [autoscaling](./autoscale/tutorial-autoscale-performance-schedule.md) to dynamically increase and decrease your compute resources based on collected metrics.-
-## Prepare dashboards and workbooks
-
-Ensuring that your development and operations have access to the same telemetry and tools allows them to view patterns across your entire environment and minimize your mean time to detect and mean time to restore. For example, you can:
--- Prepare [custom dashboards](./app/tutorial-app-dashboards.md) based on common metrics and logs for the different roles in your organization. Dashboards can combine data from all Azure resources.-- Prepare [workbooks](./visualize/workbooks-overview.md) to ensure knowledge sharing between development and operations. Workbooks could be prepared as dynamic reports with metric charts and log queries. They can also be troubleshooting guides prepared by developers to help customer support or operations handle basic problems.-
-## Continuously optimize
-
- Monitoring is one of the fundamental aspects of the popular Build-Measure-Learn philosophy, which recommends continuously tracking your KPIs and user behavior metrics and then striving to optimize them through planning iterations. Azure Monitor helps you collect metrics and logs relevant to your business and add new data points in the next deployment as required. For example, you can:
--- Use tools in Application Insights to [track user behavior and engagement](./app/tutorial-users.md).-- Use [Impact analysis](./app/usage-impact.md) to help you prioritize which areas to focus on to drive to important KPIs.-
-## Next steps
--- Learn about the difference components of [Azure Monitor](overview.md).-- [Add continuous monitoring](./app/continuous-monitoring.md) to your release pipeline.
azure-monitor Vminsights Dependency Agent Maintenance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/vminsights-dependency-agent-maintenance.md
Since the Dependency agent works at the kernel level, support is also dependent
## Next steps
-If you want to stop monitoring your VMs for a while or remove VM insights entirely, see [Disable monitoring of your VMs in VM insights](../vm/vminsights-optout.md).
+If you want to stop monitoring your VMs for a while or remove VM insights entirely, see [Disable monitoring of your VMs in VM insights](../vm/vminsights-optout.md).
azure-monitor Vminsights Migrate From Service Map https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/vminsights-migrate-from-service-map.md
[Azure Monitor VM insights](../vm/vminsights-overview.md) monitors the performance and health of your virtual machines and virtual machine scale sets, including their running processes and dependencies on other resources. This article explains how to migrate from [Service Map](../vm/service-map.md) to Azure Monitor VM insights, which provides a map feature similar to Service Map, along with other benefits. > [!NOTE]
-> Service Map will be retired on 30 September 2025. Be sure to migrate to VM insights before this date to continue monitoring the communication between services.
+> Service Map will be retired on 30 September 2025. Be sure to migrate to VM insights before this date to continue monitoring processes and dependencies for your virtual machines.
The map feature of VM insights visualizes virtual machine dependencies by discovering running processes that have active network connection between servers, inbound and outbound connection latency, or ports across any TCP-connected architecture over a specified time range. For more information about the benefits of the VM insights map feature over Service Map, see [How is VM insights Map feature different from Service Map?](/azure/azure-monitor/faq#how-is-vm-insights-map-feature-different-from-service-map-).
azure-netapp-files Azure Netapp Files Network Topologies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-network-topologies.md
na Previously updated : 09/09/2022 Last updated : 09/15/2022 # Guidelines for Azure NetApp Files network planning
Azure NetApp Files Standard network features are supported for the following reg
* France Central * Germany West Central * Japan East
+* Japan West
* North Central US * North Europe * Norway East
azure-resource-manager Create Troubleshooting Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/troubleshooting/create-troubleshooting-template.md
Title: Create a troubleshooting template
description: Describes how to create a template to troubleshoot Azure resource deployed with Azure Resource Manager templates (ARM templates) or Bicep files. tags: top-support-issue Previously updated : 11/02/2021 Last updated : 09/14/2022 # Create a troubleshooting template
The following ARM template and Bicep file get information from an existing stora
"resources": [], "outputs": { "exampleOutput": {
- "value": "[reference(resourceId(parameters('storageResourceGroup'), 'Microsoft.Storage/storageAccounts', parameters('storageName')), '2021-04-01')]",
+ "value": "[reference(resourceId(parameters('storageResourceGroup'), 'Microsoft.Storage/storageAccounts', parameters('storageName')), '2022-05-01')]",
"type": "object" } }
In Bicep, use the `existing` keyword and run the deployment from the resource gr
```bicep param storageName string
-resource stg 'Microsoft.Storage/storageAccounts@2021-04-01' existing = {
+resource stg 'Microsoft.Storage/storageAccounts@2022-05-01' existing = {
name: storageName }
azure-resource-manager Enable Debug Logging https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/troubleshooting/enable-debug-logging.md
Title: Enable debug logging
description: Describes how to enable debug logging to troubleshoot Azure resources deployed with Bicep files or Azure Resource Manager templates (ARM templates). tags: top-support-issue Previously updated : 06/20/2022 Last updated : 09/14/2022
The `DeploymentDebugLogLevel` parameter is available for other deployment scopes
# [Azure CLI](#tab/azure-cli)
-You can't enable debug logging with Azure CLI but you can get debug logging data using the `request` and `response` properties.
+You can't enable debug logging with Azure CLI, but you can get the debug log's data using the `request` and `response` properties.
azure-resource-manager Find Error Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/troubleshooting/find-error-code.md
Title: Find error codes
description: Describes how to find error codes to troubleshoot Azure resources deployed with Azure Resource Manager templates (ARM templates) or Bicep files. tags: top-support-issue Previously updated : 05/16/2022 Last updated : 09/14/2022
When an Azure resource deployment fails using Azure Resource Manager templates (
There are three types of errors that are related to a deployment: -- **Validation errors** occur before a deployment begins and are caused by syntax errors in your file. Your editor can identify these errors.
+- **Validation errors** occur before a deployment begins and are caused by syntax errors in your file. A code editor like Visual Studio Code can identify these errors.
- **Preflight validation errors** occur when a deployment command is run but resources aren't deployed. These errors are found without starting the deployment. For example, if a parameter value is incorrect, the error is found in preflight validation.-- **Deployment errors** occur during the deployment process and can only be found by assessing the deployment's progress.
+- **Deployment errors** occur during the deployment process and can only be found by assessing the deployment's progress in your Azure environment.
All types of errors return an error code that you use to troubleshoot the deployment. Validation and preflight errors are shown in the activity log but don't appear in your deployment history. A Bicep file with syntax errors doesn't compile into JSON and isn't shown in the activity log.
To identify syntax errors, you can use [Visual Studio Code](https://code.visuals
## Validation errors
-Templates are validated during the deployment process and error codes are displayed. Before you run a deployment, you can run validation tests with Azure PowerShell or Azure CLI to identify validation and preflight errors.
+Templates are validated during the deployment process and error codes are displayed. Before you run a deployment, you can identify validation and preflight errors by running validation tests with Azure PowerShell or Azure CLI.
# [Portal](#tab/azure-portal)
bicep build main.bicep
unexpected new line character. ```
-There are more PowerShell cmdlets available to validate deployment templates:
+### Other scopes
-- [Test-AzDeployment](/powershell/module/az.resources/test-azdeployment) for subscription level deployments.-- [Test-AzManagementGroupDeployment](/powershell/module/az.resources/test-azmanagementgroupdeployment)-- [Test-AzTenantDeployment](/powershell/module/az.resources/test-aztenantdeployment)
+There are Azure PowerShell cmdlets to validate deployment templates for the subscription, management group, and tenant scopes.
+
+| Scope | Cmdlets |
+| - | - |
+| Subscription | [Test-AzDeployment](/powershell/module/az.resources/test-azdeployment) |
+| Management group | [Test-AzManagementGroupDeployment](/powershell/module/az.resources/test-azmanagementgroupdeployment) |
+| Tenant | [Test-AzTenantDeployment](/powershell/module/az.resources/test-aztenantdeployment) |
# [Azure CLI](#tab/azure-cli)
az deployment group validate \
unexpected new line character. ```
-There are more Azure CLI commands available to validate deployment templates:
+### Other scopes
+
+There are Azure CLI commands to validate deployment templates for the subscription, management group, and tenant scopes.
+
+| Scope | Commands |
+| - | - |
+| Subscription | [az deployment sub validate](/cli/azure/deployment/sub#az-deployment-sub-validate) |
+| Management group | [az deployment mg validate](/cli/azure/deployment/mg#az-deployment-mg-validate) |
+| Tenant | [az deployment tenant validate](/cli/azure/deployment/tenant#az-deployment-tenant-validate) |
-- [az deployment sub validate](/cli/azure/deployment/sub#az-deployment-sub-validate)-- [az deployment mg validate](/cli/azure/deployment/mg#az-deployment-mg-validate)-- [az deployment tenant validate](/cli/azure/deployment/tenant#az-deployment-tenant-validate)
Get-AzResourceGroupDeployment `
-ResourceGroupName examplegroup ```
+### Other scopes
+
+There are Azure PowerShell cmdlets to get deployment information for the subscription, management group, and tenant scopes.
+
+| Scope | Cmdlets |
+| - | - |
+| Subscription | [Get-AzDeploymentOperation](/powershell/module/az.resources/get-azdeploymentoperation) <br> [Get-AzDeployment](/powershell/module/az.resources/get-azdeployment) |
+| Management group | [Get-AzManagementGroupDeploymentOperation](/powershell/module/az.resources/get-azmanagementgroupdeploymentoperation) <br> [Get-AzManagementGroupDeployment](/powershell/module/az.resources/get-azmanagementgroupdeployment) |
+| Tenant | [Get-AzTenantDeploymentOperation](/powershell/module/az.resources/get-aztenantdeploymentoperation) <br> [Get-AzTenantDeployment](/powershell/module/az.resources/get-aztenantdeployment) |
++ # [Azure CLI](#tab/azure-cli) To see a deployment's operations messages with Azure CLI, use [az deployment operation group list](/cli/azure/deployment/operation/group#az-deployment-operation-group-list).
az deployment group show \
--name exampledeployment ```
+### Other scopes
+
+There are Azure CLI commands to get deployment information for the subscription, management group, and tenant scopes.
+
+| Scope | Commands |
+| - | - |
+| Subscription | [az deployment operation sub list](/cli/azure/deployment/operation/sub#az-deployment-operation-sub-list) <br> [az deployment sub show](/cli/azure/deployment/sub#az-deployment-sub-show) |
+| Management group | [az deployment operation mg list](/cli/azure/deployment/operation/mg#az-deployment-operation-mg-list) <br> [az deployment mg show](/cli/azure/deployment/mg#az-deployment-mg-show) |
+| Tenant | [az deployment operation tenant list](/cli/azure/deployment/operation/tenant#az-deployment-operation-tenant-list) <br> [az deployment tenant show](/cli/azure/deployment/tenant#az-deployment-tenant-show) |
+ ## Next steps
azure-resource-manager Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/troubleshooting/overview.md
Title: Overview of ARM template and Bicep file troubleshooting
-description: Describes troubleshooting for Azure resource deployment with Azure Resource Manager templates (ARM templates) and Bicep files.
+ Title: Overview of deployment troubleshooting for Bicep files and ARM templates
+description: Describes deployment troubleshooting when you use Bicep files or Azure Resource Manager templates (ARM templates) to deploy Azure resources.
Previously updated : 10/26/2021 Last updated : 09/14/2022 # What is deployment troubleshooting?
-When you deploy Bicep files or Azure Resource Manager templates (ARM templates), you may get an error. This documentation helps you find possible solutions for the error.
+When you deploy Azure resources with Bicep files or Azure Resource Manager templates (ARM templates), you may get an error. There are troubleshooting tools available to help you resolve syntax errors before deployment. You can get more information about error codes and deployment errors from the Azure portal, Azure PowerShell, and Azure CLI. This documentation helps you find solutions to troubleshoot errors.
## Error types
-There are two types of errors you can get - **validation errors** and **deployment errors**.
+Validation errors occur before a deployment begins and are caused by incorrect syntax that can be identified by a code editor like Visual Studio Code. For example, a misspelled property name or a function that's missing an argument.
-Validation errors happen before the deployment is started. These errors can be determined without interacting with your current Azure environment. For example, validation makes you aware of syntax errors or missing arguments for a function before your deployment starts.
+Preflight validation errors occur when a deployment command is run but resources aren't deployed in Azure. For example, if an incorrect parameter value is used, the deployment command returns an error message.
Deployment errors can only be determined by attempting the deployment and interacting with your Azure environment. For example, a virtual machine (VM) requires a network interface card (NIC). If the NIC doesn't exist when the VM is deployed, you get a deployment error. ## Troubleshooting tools
-To help identify syntax errors before a deployment, use the latest version of [Visual Studio Code](https://code.visualstudio.com). Install the latest version of either:
+There are several troubleshooting tools available to resolve errors.
-* [Bicep extension](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-bicep)
-* [Azure Resource Manager Tools extension](https://marketplace.visualstudio.com/items?itemName=msazurermtools.azurerm-vscode-tools)
+### Syntax errors
-To troubleshoot deployments, it's helpful to learn about a resource provider's properties or API versions. For more information, see [Define resources with Bicep and ARM templates](/azure/templates).
+To help identify syntax errors before a deployment, use the latest version of [Visual Studio Code](https://code.visualstudio.com). Install the latest version of the extension for Bicep or ARM templates.
+
+- [Bicep](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-bicep)
+- [Azure Resource Manager Tools](https://marketplace.visualstudio.com/items?itemName=msazurermtools.azurerm-vscode-tools)
+
+To follow best practices for developing your deployment templates, use the following tools:
-To follow best practices for developing your templates, use either:
+- [Bicep linter](../bicep/linter.md)
+- [ARM template test toolkit](../templates/test-toolkit.md)
+
+### Resource provider and API version
+
+To troubleshoot deployments, it's helpful to learn about a resource provider's properties or API versions. For more information, see [Define resources with Bicep and ARM templates](/azure/templates).
-* [Bicep linter](../bicep/linter.md)
-* [ARM template test toolkit](../templates/test-toolkit.md)
+### Error details
When you deploy, you can find the cause of errors from the Azure portal in a resource group's **Deployments** or **Activity log**. If you're using Azure PowerShell, use commands like [Get-AzResourceGroupDeploymentOperation](/powershell/module/az.resources/get-azresourcegroupdeploymentoperation) and [Get-AzActivityLog](/powershell/module/az.monitor/get-azactivitylog). For Azure CLI, use commands like [az deployment operation group](/cli/azure/deployment/operation/group) and [az monitor activity-log list](/cli/azure/monitor/activity-log#az-monitor-activity-log-list). ## Next steps
+- To learn more about how to find deployment error codes and troubleshoot deployment problems, see [Find error codes](find-error-code.md).
- For solutions based on the error code, see [Troubleshoot common Azure deployment errors](common-deployment-errors.md).-- For an introduction to finding the error code, see [Quickstart: Troubleshoot ARM template deployments](quickstart-troubleshoot-arm-deployment.md) or [Quickstart: Troubleshoot Bicep file deployments](quickstart-troubleshoot-bicep-deployment.md).
+- For an introduction to finding the error code, see [Quickstart: Troubleshoot ARM template JSON deployments](quickstart-troubleshoot-arm-deployment.md) or [Quickstart: Troubleshoot Bicep file deployments](quickstart-troubleshoot-bicep-deployment.md).
azure-resource-manager Quickstart Troubleshoot Arm Deployment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/troubleshooting/quickstart-troubleshoot-arm-deployment.md
Title: Troubleshoot ARM template JSON deployments description: Learn how to troubleshoot Azure Resource Manager template (ARM template) JSON deployments. Previously updated : 12/08/2021 Last updated : 09/14/2022
This quickstart describes how to troubleshoot Azure Resource Manager template (A
There are three types of errors that are related to a deployment: -- **Validation errors** occur before a deployment begins and are caused by syntax errors in your file. Your editor can identify these errors.
+- **Validation errors** occur before a deployment begins and are caused by syntax errors in your file. A code editor like Visual Studio Code can identify these errors.
- **Preflight validation errors** occur when a deployment command is run but resources aren't deployed. These errors are found without starting the deployment. For example, if a parameter value is incorrect, the error is found in preflight validation.-- **Deployment errors** occur during the deployment process and can only be found by assessing the deployment's progress.
+- **Deployment errors** occur during the deployment process and can only be found by assessing the deployment's progress in your Azure environment.
All types of errors return an error code that you use to troubleshoot the deployment. Validation and preflight errors are shown in the activity log but don't appear in your deployment history.
The template fails preflight validation and the deployment isn't run. The `prefi
Storage names must be between 3 and 24 characters and use only lowercase letters and numbers. The prefix value created an invalid storage name. For more information, see [Resolve errors for storage account names](error-storage-account-name.md). To fix the preflight error, use a prefix that's 11 characters or less and contains only lowercase letters or numbers.
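These constraints are easy to pre-check before deploying. As a rough illustration (a hypothetical helper, not part of the quickstart template), the rules can be verified locally; the 13-character suffix length is an assumption based on what the `uniqueString()` function returns:

```python
import re

# Storage account names: 3-24 characters, lowercase letters and digits only.
def is_valid_storage_name(name: str) -> bool:
    return re.fullmatch(r"[a-z0-9]{3,24}", name) is not None

# The quickstart builds the name from a prefix plus a unique string; assuming
# the unique string is 13 characters long, the prefix must be 11 characters
# or less to keep the total at 24 characters or under.
def prefix_fits(prefix: str, suffix_len: int = 13) -> bool:
    return (re.fullmatch(r"[a-z0-9]{1,11}", prefix) is not None
            and len(prefix) + suffix_len <= 24)
```

Checking the prefix locally like this avoids the round trip through preflight validation just to discover a naming error.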
-Because the deployment didn't run there's no deployment history.
+Because the deployment didn't run, there's no deployment history.
:::image type="content" source="media/quickstart-troubleshoot-arm-deployment/preflight-no-deploy.png" alt-text="Screenshot of resource group overview that shows no deployment for preflight error.":::
The deployment begins and is visible in the deployment history. The deployment f
:::image type="content" source="media/quickstart-troubleshoot-arm-deployment/deployment-failed.png" alt-text="Screenshot of resource group overview that shows a failed deployment.":::
-To fix the deployment error you would change the reference function to use a valid resource. For more information, see [Resolve resource not found errors](error-not-found.md). For this quickstart, delete the comma that precedes `vnetResult` and all of `vnetResult`. Save the file and rerun the deployment.
+To fix the deployment error, change the reference function to use a valid resource. For more information, see [Resolve resource not found errors](error-not-found.md). For this quickstart, delete the comma that precedes `vnetResult` and all of `vnetResult`. Save the file and rerun the deployment.
```json "vnetResult": {
azure-resource-manager Quickstart Troubleshoot Bicep Deployment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/troubleshooting/quickstart-troubleshoot-bicep-deployment.md
Title: Troubleshoot Bicep file deployments description: Learn how to monitor and troubleshoot Bicep file deployments. Shows activity logs and deployment history. Previously updated : 11/04/2021 Last updated : 09/14/2022
This quickstart describes how to troubleshoot Bicep file deployment errors. You'
There are three types of errors that are related to a deployment: -- **Validation errors** occur before a deployment begins and are caused by syntax errors in your file. Your editor can identify these errors.
+- **Validation errors** occur before a deployment begins and are caused by syntax errors in your file. A code editor like Visual Studio Code can identify these errors.
- **Preflight validation errors** occur when a deployment command is run but resources aren't deployed. These errors are found without starting the deployment. For example, if a parameter value is incorrect, the error is found in preflight validation.-- **Deployment errors** occur during the deployment process and can only be found by assessing the deployment's progress.
+- **Deployment errors** occur during the deployment process and can only be found by assessing the deployment's progress in your Azure environment.
All types of errors return an error code that you use to troubleshoot the deployment. Validation and preflight errors are shown in the activity log but don't appear in your deployment history. A Bicep file with syntax errors doesn't compile into JSON and isn't shown in the activity log.
When you hover over `parameter`, you see an error message.
:::image type="content" source="media/quickstart-troubleshoot-bicep-deployment/declaration-not-recognized.png" alt-text="Screenshot of error message in Visual Studio Code.":::
-The message states: "This declaration type is not recognized. Specify a parameter, variable, resource, or output declaration." If you attempt to deploy this file, you'll get the same error message from the deployment command.
+The message states: _This declaration type is not recognized. Specify a parameter, variable, resource, or output declaration._ If you attempt to deploy this file, you'll get the same error message from the deployment command.
If you look at the documentation for a [parameter declaration](../bicep/parameters.md), you'll see the keyword is actually `param`. When you change that syntax, the validation error disappears. The `@allowed` decorator was also marked as an error, but that error is also resolved by changing the parameter declaration. The decorator was marked as an error because it expects a parameter declaration after the decorator. This condition wasn't true when the declaration was incorrect.
azure-signalr Concept Service Mode https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/concept-service-mode.md
Title: Service mode in Azure SignalR Service
-description: An overview of different service modes in Azure SignalR Service, explain their differences and applicable user scenarios
+description: An overview of service modes in Azure SignalR Service.
Previously updated : 08/19/2020 Last updated : 09/01/2022 # Service mode in Azure SignalR Service
-Service mode is an important concept in Azure SignalR Service. When you create a new SignalR resource, you will be asked to specify a service mode:
+Service mode is an important concept in Azure SignalR Service. SignalR Service currently supports three service modes: *Default*, *Serverless*, and *Classic*. Your SignalR Service resource will behave differently in each mode. In this article, you'll learn how to choose the right service mode based on your scenario.
+## Setting the service mode
-You can also change it later in the settings menu:
+You'll be asked to specify a service mode when you create a new SignalR resource in the Azure portal.
++
+You can also change the service mode later in the settings menu.
:::image type="content" source="media/concept-service-mode/update.png" alt-text="Update service mode":::
-Azure SignalR Service currently supports three service modes: **default**, **serverless** and **classic**. Your SignalR resource will behave differently in different modes. In this article, you'll learn their differences and how to choose the right service mode based on your scenario.
+Use `az signalr create` and `az signalr update` to set or change the service mode by using the [Azure SignalR CLI](/cli/azure/service-page/azure%20signalr).
## Default mode
-Default mode is the default value for service mode when you create a new SignalR resource. In this mode, your application works as a typical ASP.NET Core (or ASP.NET) SignalR application, where you have a web server that hosts a hub (called hub server hereinafter) and clients can have duplex real-time communication with the hub server. The only difference is instead of connecting client and server directly, client and server both connect to SignalR service and use the service as a proxy. Below is a diagram that illustrates the typical application structure in default mode:
+As the name implies, *Default* mode is the default service mode for SignalR Service. In Default mode, your application works as a typical [ASP.NET Core SignalR](/aspnet/core/signalr/introduction) or ASP.NET SignalR (deprecated) application. You have a web server application that hosts a hub, called a *hub server*, and clients have full duplex communication with the hub server. The difference between ASP.NET Core SignalR and Azure SignalR Service is instead of connecting client and hub server directly, client and server both connect to SignalR Service and use the service as a proxy. The following diagram shows the typical application structure in Default mode.
-So if you have a SignalR application and want to integrate with SignalR service, default mode should be the right choice for most cases.
+Default mode is usually the right choice when you have a SignalR application that you want to use with SignalR Service.
-### Connection routing in default mode
+### Connection routing in Default mode
-In default mode, there will be websocket connections between hub server and SignalR service (called server connections). These connections are used to transfer messages between server and client. When a new client is connected, SignalR service will route the client to one hub server (assume you have more than one server) through existing server connections. Then the client connection will stick to the same hub server during its lifetime. When client sends messages, they always go to the same hub server. With this behavior, you can safely maintain some states for individual connections on your hub server. For example, if you want to stream something between server and client, you don't need to consider the case that data packets go to different servers.
+In Default mode, there are WebSocket connections between hub server and SignalR Service called *server connections*. These connections are used to transfer messages between a server and client. When a new client is connected, SignalR Service will route the client to one hub server (assuming you have more than one server) through existing server connections. The client connection will stick to the same hub server during its lifetime. This property is referred to as *connection stickiness*. When the client sends messages, they always go to the same hub server. With stickiness behavior, you can safely maintain some states for individual connections on your hub server. For example, if you want to stream something between server and client, you don't need to consider the case where data packets go to different servers.
> [!IMPORTANT]
-> This also means in default mode a client cannot connect without server being connected first. If all your hub servers are disconnected due to network interruption or server reboot, your client connections will get an error telling you no server is connected. So it's your responsibility to make sure at any time there is at least one hub server connected to SignalR service (for example, have multiple hub servers and make sure they won't go offline at the same time for things like maintenance).
+> In Default mode a client cannot connect without a hub server being connected to the service first. If all your hub servers are disconnected due to network interruption or server reboot, your client connections will get an error telling you no server is connected. It's your responsibility to make sure there is always at least one hub server connected to SignalR service. For example, you can design your application with multiple hub servers, and then make sure they won't all go offline at the same time.
-This routing model also means when a hub server goes offline, the connections routed that server will be dropped. So you should expect connection drop when your hub server is offline for maintenance and handle reconnect properly so that it won't have negative impact to your application.
+The default routing model also means when a hub server goes offline, the connections routed to that server will be dropped. You should expect connections to drop when your hub server is offline for maintenance, and handle reconnection to minimize the effects on your application.
+
+> [!NOTE]
+> In Default mode you can also use REST API, management SDK, and function binding to directly send messages to a client if you don't want to go through a hub server. In Default mode client connections are still handled by hub servers and upstream endpoints won't work in that mode.
## Serverless mode
-In Serverless mode, you don't have a hub server. Unlike default mode, the client doesn't require a hub server to be running. All connections are connected in a "serverless" mode and the Azure SignalR service is responsible for maintaining client connections like handling client pings (in default mode this is handled by hub servers).
+Unlike Default mode, Serverless mode doesn't require a hub server to be running, which is why this mode is named "serverless." SignalR Service is responsible for maintaining client connections. There's no guarantee of connection stickiness, and HTTP requests may be less efficient than WebSocket connections.
+
+Serverless mode works with Azure Functions to provide real time messaging capability. Clients work with [SignalR Service bindings for Azure Functions](../azure-functions/functions-bindings-signalr-service.md), called *function binding*, to send messages as an output binding.
-Also there is no server connection in this mode (if you try to use service SDK to establish server connection, you will get an error). Therefore there is also no connection routing and server-client stickiness (as described in the default mode section). But you can still have server-side application to push messages to clients. This can be done in two ways, use [REST APIs](https://github.com/Azure/azure-signalr/blob/dev/docs/rest-api.md) for one-time send, or through a websocket connection so that you can send multiple messages more efficiently (note this websocket connection is different than server connection).
+Because there's no server connection, if you try to use a server SDK to establish a server connection you'll get an error. SignalR Service will reject server connection attempts in Serverless mode.
+
+Serverless mode doesn't have connection stickiness, but you can still have a server-side application push messages to clients. There are two ways to push messages to clients in Serverless mode:
+
+- Use [REST APIs](https://github.com/Azure/azure-signalr/blob/dev/docs/rest-api.md) for a one-time send event, or
+- Use a WebSocket connection so that you can send multiple messages more efficiently. This WebSocket connection is different than a server connection.
> [!NOTE]
-> Both REST API and websocket way are supported in SignalR service [management SDK](https://github.com/Azure/azure-signalr/blob/dev/docs/management-sdk-guide.md). If you're using a language other than .NET, you can also manually invoke the REST APIs following this [spec](https://github.com/Azure/azure-signalr/blob/dev/docs/rest-api.md).
->
-> If you're using Azure Functions, you can use [SignalR service bindings for Azure Functions](../azure-functions/functions-bindings-signalr-service.md) (hereinafter called function binding) to send messages as an output binding.
+> Both REST API and WebSockets are supported in SignalR service [management SDK](https://github.com/Azure/azure-signalr/blob/dev/docs/management-sdk-guide.md). If you're using a language other than .NET, you can also manually invoke the REST APIs following this [specification](https://github.com/Azure/azure-signalr/blob/dev/docs/rest-api.md).
-It's also possible for your server application to receive messages and connection events from clients. Service will deliver messages and connection events to preconfigured endpoints (called Upstream) using webhooks. Comparing to default mode, there is no guarantee of stickiness and HTTP requests may be less efficient than websocket connections.
+It's also possible for your server application to receive messages and connection events from clients. SignalR Service will deliver messages and connection events to pre-configured endpoints (called *upstream endpoints*) using webhooks. Upstream endpoints can only be configured in Serverless mode. For more information, see [Upstream settings](concept-upstream.md).
-For more information about how to configure upstream, see this [doc](./concept-upstream.md).
-Below is a diagram that illustrates how serverless mode works:
+The following diagram shows how Serverless mode works.
-> [!NOTE]
-> Please note in default mode you can also use REST API/management SDK/function binding to directly send messages to client if you don't want to go through hub server. But in default mode client connections are still handled by hub servers and upstream won't work in that mode.
## Classic mode
-Classic is a mixed mode of default and serverless mode. In this mode, connection mode is decided by whether there is hub server connected when client connection is established. If there is hub server, client connection will be routed to a hub server. Otherwise it will enter a serverless mode where client to server message cannot be delivered to hub server. This will cause some discrepancies, for example if all hub servers are unavailable for a short time, all client connections created during that time will be in serverless mode and cannot send messages to hub server.
- > [!NOTE]
-> Classic mode is mainly for backward compatibility for those applications created before there is default and serverless mode. It's strongly recommended to not use this mode anymore. For new applications, please choose default or serverless based on your scenario. For existing applications, it's also recommended to review your use cases and choose a proper service mode.
+> Classic mode is mainly for backward compatibility for applications created before the Default and Serverless modes were introduced. Don't use Classic mode except as a last resort. Use Default or Serverless for new applications, based on your scenario. You should consider redesigning existing applications to eliminate the need for Classic mode.
+
+Classic is a mixed mode of Default and Serverless modes. In Classic mode, connection type is decided by whether there's a hub server connected when the client connection is established. If there's a hub server, the client connection will be routed to a hub server. If a hub server isn't available, the client connection will be made in a limited serverless mode where client-to-server messages can't be delivered to a hub server. Classic mode serverless connections don't support some features such as upstream endpoints.
-Classic mode also doesn't support some new features like upstream in serverless mode.
+If all your hub servers are offline for any reason, connections will be made in Serverless mode. It's your responsibility to ensure that at least one hub server is always available.
## Choose the right service mode
-Now you should understand the differences between service modes and know how to choose between them. As you already learned in the previous section, classic mode is not encouraged and you should only choose between default and serverless. Here are some more tips that can help you make the right choice for new applications and retire classic mode for existing applications.
+Now you should understand the differences between service modes and know how to choose between them. As previously discussed, Classic mode isn't recommended for new or existing applications. Here are some more tips that can help you make the right choice for service mode and help you retire Classic mode for existing applications.
-* If you're already familiar with how SignalR library works and want to move from a self-hosted SignalR to use Azure SignalR Service, choose default mode. Default mode works exactly the same way as self-hosted SignalR (and you can use the same programming model in SignalR library), SignalR service just acts as a proxy between clients and hub servers.
+- Choose Default mode if you're already familiar with how SignalR library works and want to move from a self-hosted SignalR to use Azure SignalR Service. Default mode works exactly the same way as self-hosted SignalR, and you can use the same programming model in SignalR library. SignalR Service acts as a proxy between clients and hub servers.
-* If you're creating a new application and don't want to maintain hub server and server connections, choose serverless mode. This mode usually works together with Azure Functions so you don't need to maintain any server at all. You can still have duplex communications (with REST API/management SDK/function binding + upstream) but the programming model will be different than SignalR library.
+- Choose Serverless mode if you're creating a new application and don't want to maintain hub server and server connections. Serverless mode works together with Azure Functions so that you don't need to maintain any server at all. You can still have full duplex communications with REST API, management SDK, or function binding + upstream endpoint, but the programming model will be different than SignalR library.
-* If you have both hub servers to serve client connections and backend application to directly push messages to clients (for example through REST API), you should still choose default mode. Keep in mind that the key difference between default and serverless mode is whether you have hub servers and how client connections are routed. REST API/management SDK/function binding can be used in both modes.
+- Choose Default mode if you have *both* hub servers to serve client connections and a backend application to directly push messages to clients. The key difference between Default and Serverless mode is whether you have hub servers and how client connections are routed. REST API/management SDK/function binding can be used in both modes.
-* If you really have a mixed scenario, for example, you have two different hubs on the same SignalR resource, one used as a traditional SignalR hub and the other one used with Azure Functions and doesn't have hub server, you should really consider to separate them into two SignalR resources, one in default mode and one in serverless mode.
+- If you really have a mixed scenario, you should consider separating use cases into multiple SignalR Service instances with service mode set according to use. An example of a mixed scenario that requires Classic mode is where you have two different hubs on the same SignalR resource. One hub is used as a traditional SignalR hub and the other hub is used with Azure Functions. This example should be split into two resources, with one instance in Default mode and one in Serverless mode.
## Next steps
-To learn more about how to use default and serverless mode, read the following articles:
+See the following articles to learn more about how to use Default and Serverless modes.
-* [Azure SignalR Service internals](signalr-concept-internals.md)
+- [Azure SignalR Service internals](signalr-concept-internals.md)
-* [Azure Functions development and configuration with Azure SignalR Service](signalr-concept-serverless-development-config.md)
+- [Azure Functions development and configuration with Azure SignalR Service](signalr-concept-serverless-development-config.md)
azure-vmware Azure Vmware Solution Platform Updates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/azure-vmware-solution-platform-updates.md
For more information on this vCenter version, see [VMware vCenter Server 6.7 Upd
>This is non-disruptive and should not impact Azure VMware Services or workloads. During maintenance, various VMware alerts, such as _Lost network connectivity on DVPorts_ and _Lost uplink redundancy on DVPorts_, appear in vCenter Server and clear automatically as the maintenance progresses. ## Post update
-Once complete, newer versions of VMware components appear. If you notice any issues or have any questions, contact our support team by opening a support ticket.
+Once complete, newer versions of VMware components appear. If you notice any issues or have any questions, contact our support team by opening a support ticket.
backup Backup Azure Arm Restore Vms https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-arm-restore-vms.md
Azure Backup provides several ways to restore a VM.
**Restore disk** | Restores a VM disk, which can then be used to create a new VM.<br/><br/> Azure Backup provides a template to help you customize and create a VM. <br/><br> The restore job generates a template that you can download and use to specify custom VM settings, and create a VM.<br/><br/> The disks are copied to the Resource Group you specify.<br/><br/> Alternatively, you can attach the disk to an existing VM, or create a new VM using PowerShell.<br/><br/> This option is useful if you want to customize the VM, add configuration settings that weren't there at the time of backup, or add settings that must be configured using the template or PowerShell. **Replace existing** | You can restore a disk, and use it to replace a disk on the existing VM.<br/><br/> The current VM must exist. If it's been deleted, this option can't be used.<br/><br/> Azure Backup takes a snapshot of the existing VM before replacing the disk, and stores it in the staging location you specify. Existing disks connected to the VM are replaced with the selected restore point.<br/><br/> The snapshot is copied to the vault, and retained in accordance with the retention policy. <br/><br/> After the replace disk operation, the original disk is retained in the resource group. You can choose to manually delete the original disks if they aren't needed. <br/><br/>Replace existing is supported for unencrypted managed VMs, including VMs [created using custom images](https://azure.microsoft.com/resources/videos/create-a-custom-virtual-machine-image-in-azure-resource-manager-with-powershell/). 
It's unsupported for classic VMs, unmanaged VMs, and [generalized VMs](../virtual-machines/windows/upload-generalized-managed.md).<br/><br/> If the restore point has more or less disks than the current VM, then the number of disks in the restore point will only reflect the VM configuration.<br><br> Replace existing is also supported for VMs with linked resources, like [user-assigned managed-identity](../active-directory/managed-identities-azure-resources/overview.md) or [Key Vault](../key-vault/general/overview.md). **Cross Region (secondary region)** | Cross Region restore can be used to restore Azure VMs in the secondary region, which is an [Azure paired region](../availability-zones/cross-region-replication-azure.md).<br><br> You can restore all the Azure VMs for the selected recovery point if the backup is done in the secondary region.<br><br> During the backup, snapshots aren't replicated to the secondary region. Only the data stored in the vault is replicated. So secondary region restores are only [vault tier](about-azure-vm-restore.md#concepts) restores. The restore time for the secondary region will be almost the same as the vault tier restore time for the primary region. <br><br> This feature is available for the options below:<br><br> - [Create a VM](#create-a-vm) <br> - [Restore Disks](#restore-disks) <br><br> We don't currently support the [Replace existing disks](#replace-existing-disks) option.<br><br> Permissions<br> The restore operation on secondary region can be performed by Backup Admins and App admins.
-**Cross Subscription Restore** | Allows you to restore Azure Virtual Machines or disks to any subscription (as per the Azure RBAC capabilities) from restore points. <br><br> You can trigger Cross Subscription Restore for managed virtual machines only. <br><br> Cross Subscription Restore is currently enabled only in [standard policy](backup-during-vm-creation.md#create-a-vm-with-backup-configured) from Vault tier. It's also supported for [Restore with Managed System Identities (MSI)](backup-azure-arm-restore-vms.md#restore-vms-with-managed-identities). <br><br> It's unsupported from [snapshots](backup-azure-vms-introduction.md#snapshot-creation) and [secondary region](backup-azure-arm-restore-vms.md#restore-in-secondary-region) restores. <br><br> It's unsupported for [Encrypted Azure VMs](backup-azure-vms-introduction.md#encryption-of-azure-vm-backups) and [Trusted Launch VMs](backup-support-matrix-iaas.md#tvm-backup).
+**Cross Subscription Restore** | Allows you to restore Azure Virtual Machines or disks to any subscription (as per the Azure RBAC capabilities) from restore points. <br><br> You can trigger Cross Subscription Restore for managed virtual machines only. <br><br> Cross Subscription Restore is supported for [Restore with Managed System Identities (MSI)](backup-azure-arm-restore-vms.md#restore-vms-with-managed-identities). <br><br> It's unsupported from [snapshots](backup-azure-vms-introduction.md#snapshot-creation) and [secondary region](backup-azure-arm-restore-vms.md#restore-in-secondary-region) restores. <br><br> It's unsupported for [Encrypted Azure VMs](backup-azure-vms-introduction.md#encryption-of-azure-vm-backups) and [Trusted Launch VMs](backup-support-matrix-iaas.md#tvm-backup).
>[!Tip]
batch Batch Certificate Migration Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/batch-certificate-migration-guide.md
+
+ Title: Batch Certificate Migration Guide
+description: Describes the migration steps for the batch certificates and the end of support details.
++++ Last updated : 08/15/2022+
+# Batch Certificate Migration Guide
+
+Securing applications and critical information is essential today. With growing numbers of customers and increasing demand for security, managing key information plays a significant role in protecting data. Many customers need to store secure data in their applications, and that data must be managed to avoid any leakage and accessed only by legitimate administrators or authorized users. Azure Batch offers Certificates that are created and managed by the Batch service. Azure Batch also provides a Key Vault option, which is the Azure-standard method for more controlled, secure access management.
+
+Azure Batch provides the Certificates feature at the account level. Customers must generate a certificate and upload it manually to Azure Batch via the portal. To be accessed, the certificate must be associated and installed for the 'Current User'. A certificate is usually valid for one year, and a similar procedure must be followed every year.
+
+For Azure Batch customers, secure access should be provided in a more standardized way that reduces manual intervention and limits exposure of generated keys. Therefore, we'll retire the Certificates feature on **29 February 2024** to reduce maintenance effort and to guide customers toward Azure Key Vault as the standard, more modern method with advanced security. After the feature is retired, Certificate functionality may cease working properly, and pool creation and resize requests that reference certificates may be rejected.
+
+## Retirement alternatives
+
+Azure Key Vault is the service provided by Microsoft Azure to store and manage secrets, certificates, tokens, keys, and other configuration values that authenticated users and applications use to access services. The original idea was to remove hard-coded secrets and keys from application code.
+
+Azure Key Vault provides security at the transport layer by ensuring that any data flow between the key vault and the client application is encrypted. Azure Key Vault stores secrets and keys with strong encryption, so that even Microsoft can't see your keys or secrets.
+
+Azure Key Vault provides a secure way to store information and to define fine-grained access control. All secrets can be managed from one dashboard. Azure Key Vault can store keys as software-protected keys or as keys protected by hardware security modules (HSMs). In addition, it has a mechanism to auto-renew Key Vault certificates.
+
+## Migration steps
+
+Azure Key Vault can be created in three ways:
+
+1. Using the Azure portal
+
+2. Using Azure PowerShell
+
+3. Using the Azure CLI
+
+**Create an Azure Key Vault by using the Azure portal:**
+
+__Prerequisite__: A valid Azure subscription and Owner/Contributor access to the Key Vault service.
+
+ 1. Sign in to the Azure portal.
+
+ 2. In the top-level search box, look for **Key Vaults**.
+
+ 3. In the Key Vault dashboard, select **Create** and provide the details: subscription, resource group, key vault name, pricing tier (standard/premium), and region. Then select **Review + create** to create the Key Vault account.
+
+ 4. Key Vault names need to be unique across the globe. Once any user has taken a name, it won't be available for other users.
+
+ 5. Now go to the newly created Azure Key Vault. There you can see the vault name and the vault URI used to access the vault.
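Name availability errors are common at this step. Besides being globally unique, key vault names must follow the published naming rules: 3-24 characters, letters, digits, and hyphens only, starting with a letter, ending with a letter or digit, and no consecutive hyphens. As an illustration (a hypothetical local helper, not part of any Azure SDK), the syntactic rules can be pre-checked:

```python
import re

# Key vault names: 3-24 characters; letters, digits, and hyphens only; must
# start with a letter and end with a letter or digit; no consecutive hyphens.
def is_valid_key_vault_name(name: str) -> bool:
    if not 3 <= len(name) <= 24:
        return False
    # Each optional hyphen must be followed by a letter or digit, which also
    # rules out trailing hyphens and "--" runs.
    return re.fullmatch(r"[A-Za-z](?:-?[A-Za-z0-9])*", name) is not None
```

Global uniqueness can't be checked locally; only the service itself can confirm that a syntactically valid name is still available.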
+
+**Create Azure Key Vault step by step using the Azure PowerShell:**
+
+ 1. Log in to Azure from PowerShell using the following command: `Login-AzAccount`
+
+ 2. Create an 'azuresecure' resource group in the 'EastUS' location. You can change the name and location as per your need.
+```
+ New-AzResourceGroup -Name "azuresecure" -Location "EastUS"
+```
+ 3. Create the Azure Key Vault using the cmdlet. You need to provide the key vault name, resource group, and location.
+```
+ New-AzKeyVault -Name "azuresecureKeyVault" -ResourceGroupName "azuresecure" -Location "East US"
+```
+
+ 4. You've successfully created the Azure Key Vault using the PowerShell cmdlet.
+
+**Create Azure Key Vault step by step using the Azure CLI bash:**
+
+ 1. Create an 'azuresecure' resource group in the 'EastUS' location. You can change the name and location as per your need. Use the following bash command.
+```
+ az group create --name "azuresecure" -l "EastUS"
+```
+
+ 2. Create the Azure Key Vault using the bash command. You need to provide the key vault name, resource group, and location.
+```
+ az keyvault create --name "azuresecureKeyVault" --resource-group "azuresecure" --location "EastUS"
+```
+ 3. You've successfully created the Azure Key Vault using the Azure CLI bash command.
+
+## FAQ
+
+ 1. Are Certificates or Azure Key Vault recommended?
+ Azure Key Vault is recommended and essential to protect the data in the cloud.
+
+ 2. Does user subscription mode support Azure Key Vault?
+ Yes, it's mandatory to create a Key Vault when creating a Batch account in user subscription mode.
+
+ 3. Are there best practices to use Azure Key Vault?
+ Best practices are covered [here](../key-vault/general/best-practices.md).
+
+## Next steps
+
+For more information, see [Certificate Access Control](../key-vault/certificates/certificate-access-control.md).
batch Batch Pools Without Public Ip Addresses Classic Retirement Migration Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/batch-pools-without-public-ip-addresses-classic-retirement-migration-guide.md
+
+ Title: Batch Pools without Public IP Addresses Classic Retirement Migration Guide
+description: Describes the migration steps for the batch pool without public ip addresses and the end of support details.
++++ Last updated : 09/01/2022+
+# Batch Pools without Public IP Addresses Classic Retirement Migration Guide
+
+By default, all the compute nodes in an Azure Batch virtual machine (VM) configuration pool are assigned a public IP address. This address is used by the Batch service to schedule tasks and for communication with compute nodes, including outbound access to the internet. To restrict access to these nodes and reduce the discoverability of these nodes from the internet, we released [Batch pools without public IP addresses (classic)](./batch-pool-no-public-ip-address.md).
+
+In late 2021, we launched a simplified compute node communication model for Azure Batch. The new communication model improves security and simplifies the user experience. Batch pools no longer require inbound Internet access and outbound access to Azure Storage, only outbound access to the Batch service. As a result, Batch pools without public IP addresses (classic), which is currently in public preview, will be retired on **31 March 2023** and will be replaced with simplified compute node communication pools without public IPs.
+
+## Retirement alternatives
+
+[Simplified Compute Node Communication Pools without Public IPs](./simplified-node-communication-pool-no-public-ip.md) requires using simplified compute node communication. It provides customers with enhanced security for their workload environments on network isolation and data exfiltration to Azure Batch accounts. Its key benefits include:
+
+* Allow creating simplified node communication pool without public IP addresses.
+* Support Batch private pool using a new private endpoint (sub-resource nodeManagement) for Azure Batch account.
+* Simplified private link DNS zone for Batch account private endpoints: changed from **privatelink.\<region>.batch.azure.com** to **privatelink.batch.azure.com**.
+* Mutable public network access for Batch accounts.
+* Firewall support for Batch account public endpoints: configure IP address network rules to restrict public network access with Batch accounts.
+
+## Migration steps
+
+Batch pool without public IP addresses (classic) will retire on **31 March 2023** and will be updated to simplified compute node communication pools without public IPs. For existing pools that use the previous preview version of Batch pool without public IP addresses (classic), it's only possible to migrate pools created in a virtual network. To migrate the pool, follow the opt-in process for simplified compute node communication:
+
+1. Opt in to [use simplified compute node communication](./simplified-compute-node-communication.md#opt-your-batch-account-in-or-out-of-simplified-compute-node-communication).
+
+ ![Support Request](../batch/media/certificates/opt-in.png)
+
+2. Create a private endpoint for Batch node management in the virtual network.
+
+ ![Create Endpoint](../batch/media/certificates/private-endpoint.png)
+
+3. Scale down the pool to zero nodes.
+
+ ![Scale Down](../batch/media/certificates/scale-down-pool.png)
+
+4. Scale out the pool again. The pool is then automatically migrated to the new version of the preview.
+
+ ![Scale Out](../batch/media/certificates/scale-out-pool.png)
+
+## FAQ
+
+* How can I migrate my Batch pool without public IP addresses (classic) to simplified compute node communication pools without public IPs?
+
+ You can only migrate your pool to simplified compute node communication pools if it was created in a virtual network. Otherwise, you'd need to create a new simplified compute node communication pool without public IPs.
+
+* What differences will I see in billing?
+
+ Compared with Batch pools without public IP addresses (classic), the simplified compute node communication pools without public IPs support will reduce costs because it won't need to create the following network resources with the Batch pool deployments: load balancer, network security groups, and private link service. However, there will be a [cost associated with private link](https://azure.microsoft.com/pricing/details/private-link/) or other outbound network connectivity used by pools, as controlled by the user, to allow communication with the Batch service without public IP addresses.
+
+* Will there be any performance changes?
+
+ No known performance differences compared to Batch pools without public IP addresses (classic).
+
+* How can I connect to my pool nodes for troubleshooting?
+
+ Similar to Batch pools without public IP addresses (classic). As there's no public IP address for the Batch pool, users will need to connect to their pool nodes from within the virtual network. You can create a jump box VM in the virtual network or use other remote connectivity solutions like [Azure Bastion](../bastion/bastion-overview.md).
+
+* Will there be any change to how my workloads are downloaded from Azure Storage?
+
+ Similar to Batch pools without public IP addresses (classic), users will need to provide their own internet outbound connectivity if their workloads need access to other resources like Azure Storage.
+
+* What if I don't migrate to simplified compute node communication pools without public IPs?
+
+ After **31 March 2023**, we will stop supporting Batch pool without public IP addresses. The functionality of the existing pool in that configuration may break, such as scale out operations, or may be actively scaled down to zero at any point in time after that date.
+
+## Next steps
+
+For more information, refer to [Simplified compute node communication](./simplified-compute-node-communication.md).
batch Batch Tls 101 Migration Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/batch-tls-101-migration-guide.md
+
+ Title: Batch Tls 1.0 Migration Guide
+description: Describes the migration steps for the batch TLS 1.0 and the end of support details.
++++ Last updated : 08/16/2022+
+# Batch TLS 1.0 Migration Guide
+
+Transport Layer Security (TLS) versions 1.0 and 1.1 are known to be susceptible to attacks such as BEAST and POODLE, and to have other Common Vulnerabilities and Exposures (CVE) weaknesses. They also don't support the modern encryption methods and cipher suites recommended by Payment Card Industry (PCI) compliance standards. There's an industry-wide push toward the exclusive use of TLS version 1.2 or later.
+
+To follow security best practices and remain in compliance with industry standards, Azure Batch will retire Batch TLS 1.0/1.1 on **31 March 2023**. Most customers have already migrated to TLS 1.2. Customers who continue to use TLS 1.0/1.1 can be identified via existing BatchOperation telemetry. Customers will need to adjust their existing workflows to ensure that they're using TLS 1.2. Failure to migrate to TLS 1.2 will break existing Batch workflows.
+
+## Migration strategy
+
+Customers must update client code before the TLS 1.0/1.1 retirement.
+
+- Customers using native WinHTTP for client code can follow this [guide](https://support.microsoft.com/topic/update-to-enable-tls-1-1-and-tls-1-2-as-default-secure-protocols-in-winhttp-in-windows-c4bd73d2-31d7-761e-0178-11268bb10392).
+
+- Customers using .NET Framework for their client code should upgrade to .NET Framework 4.7 or later, which enforces TLS 1.2 by default.
+
+- Customers on .NET Framework who are unable to upgrade to 4.7 or later can follow this [guide](https://docs.microsoft.com/dotnet/framework/network-programming/tls) to enforce TLS 1.2.
+
+For TLS best practices, refer to [TLS best practices for .NET framework](https://docs.microsoft.com/dotnet/framework/network-programming/tls).
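Outside of .NET, the same minimum-version policy can be enforced in other client stacks. As an illustration (not Batch-specific guidance), this Python sketch configures a client-side TLS context that refuses TLS 1.0/1.1 handshakes:

```python
import ssl

# Build a client-side TLS context that refuses anything below TLS 1.2,
# matching the minimum version the Batch service will require.
ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_2

# TLS 1.0 and 1.1 are now outside the allowed range for this context.
assert ctx.minimum_version > ssl.TLSVersion.TLSv1_1
```

Any socket wrapped with this context will fail the handshake against a server that only offers TLS 1.0/1.1, which is the desired behavior after the retirement.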
+
+## FAQ
+
+* Why must we upgrade to TLS 1.2?<br>
+ TLS 1.0/1.1 has security issues that are fixed in TLS 1.2. TLS 1.2 has been available since 2008 and is the current default version in most frameworks.
+
+* What happens if I don't upgrade?<br>
+ After the feature retirement, your client application won't work until you upgrade.<br>
+
+* Will upgrading to TLS 1.2 affect performance?<br>
+ Upgrading to TLS 1.2 won't affect performance.<br>
+
+* How do I know if I'm using TLS 1.0/1.1?<br>
+ You can check the Audit Log to determine the TLS version you're using.
+
+## Next steps
+
+For more information, see [How to enable TLS 1.2 on clients](https://docs.microsoft.com/mem/configmgr/core/plan-design/security/enable-tls-1-2-client).
batch Job Pool Lifetime Statistics Migration Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/job-pool-lifetime-statistics-migration-guide.md
+
+ Title: Batch job pool lifetime statistics migration guide
+description: Describes the migration steps for the batch job pool lifetime statistics and the end of support details.
++++ Last updated : 08/15/2022+
+# Batch Job Pool Lifetime Statistics Migration Guide
+
+The Azure Batch service currently supports an API for Job/Pool to retrieve lifetime statistics. The API is used to get lifetime statistics for all the Pools/Jobs in the specified Batch account or for a specified Pool/Job. The API collects the statistical data from when the Batch account was created until the last update, or over the entire lifetime of the specified Job/Pool. The Job/Pool lifetime statistics API is helpful for customers to analyze and evaluate their usage.
+
+To make the statistical data available for customers, the Batch service allocates Batch pools and schedules jobs with an in-house MapReduce implementation to perform background periodic roll-up of statistics. The aggregation is performed for all accounts/pools/jobs in each region, whether or not a customer needs or queries the stats for their account/pool/job. The operating cost includes eleven VMs allocated in each region to execute MapReduce aggregation jobs. For busy regions, we had to increase the pool size further to accommodate the extra aggregation load.
+
+The MapReduce aggregation logic was implemented with legacy code, and no new features are being added or improved due to technical challenges with that code. Still, the legacy code and its hosting repo need to be updated frequently to accommodate the ever-growing load in production and to meet security/compliance requirements. In addition, because the API provides lifetime statistics, the data keeps growing, demanding more storage and causing performance issues, even though most customers aren't using the API. The Batch service currently absorbs all the compute and storage usage charges associated with the MapReduce pools and jobs.
+
+The API was designed and maintained to help customers troubleshoot. However, few customers use it in practice, and those who do are typically interested in extracting details covering no more than a month. More advanced ways of collecting log/job/pool data are now available on a need basis using Azure portal logs, alerts, log export, and other methods. Therefore, we're retiring the Job/Pool lifetime statistics API.
+
+Job/Pool Lifetime Statistics API will be retired on **30 April 2023**. Once complete, the API will no longer work and will return an appropriate HTTP response error code back to the client.
+
+## FAQ
+
+* Is there an alternate way to view logs of Pool/Jobs?
+
+ The Azure portal has various options to enable logs, namely system logs and diagnostic logs. Refer to [Monitor Batch Solutions](./monitoring-overview.md) for more information.
+
+* Can customers extract logs to their system if the API doesn't exist?
+
+ The Azure portal log feature allows every customer to extract output and error logs to their workspace. Refer to [Monitor with Application Insights](./monitor-application-insights.md) for more information.
+
+## Next steps
+
+For more information, refer to [Azure Monitor Logs](../azure-monitor/logs/data-platform-logs.md).
batch Low Priority Vms Retirement Migration Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/low-priority-vms-retirement-migration-guide.md
+
+ Title: Low priority vms retirement migration guide
+description: Describes the migration steps for the low priority vms retirement and the end of support details.
++++ Last updated : 08/10/2022+
+# Low Priority VMs Retirement Migration Guide
+
+Azure Batch offers Low priority and Spot virtual machines (VMs). The virtual machines are computing instances allocated from spare capacity, offered at a highly discounted rate compared to "on-demand" VMs.
+
+Low priority VMs enable the customer to take advantage of unutilized capacity. The amount of available unutilized capacity can vary based on size, region, time of day, and more. At any point in time when Azure needs the capacity back, we'll evict low-priority VMs. Therefore, the low-priority offering is excellent for flexible workloads, like large processing jobs, dev/test environments, demos, and proofs of concept. In addition, low-priority VMs can easily be deployed through our virtual machine scale set offering.
+
+Low priority VMs are a deprecated feature that will never become Generally Available (GA). Spot VMs are the official preemptible offering from the Compute platform and are generally available. Therefore, we'll retire Low Priority VMs on **30 September 2025**. After that date, we'll stop supporting Low priority VMs, and existing Low priority pools may no longer work or be provisioned.
+
+## Retirement alternative
+
+As of May 2020, Azure offers Spot VMs in addition to Low Priority VMs. Like Low Priority, the Spot option allows the customer to purchase spare capacity at a deeply discounted price in exchange for the possibility that the VM may be evicted. Unlike Low Priority, you can use the Azure Spot option for single VMs and scale sets. Virtual machine scale sets scale up to meet demand, and when used with Spot VMs, will only allocate when capacity is available. 
+
+The Spot VMs can be evicted when Azure needs the capacity or when the price goes above your maximum price. In addition, the customer can choose to get a 30-second eviction notice and attempt to redeploy. 
+
+The other key difference is that Azure Spot pricing is variable and based on the capacity for size or SKU in an Azure region. Prices change slowly to provide stabilization. The price will never go above pay-as-you-go rates.
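To make the price-based eviction rule concrete, here's a small hypothetical helper (not part of any Azure SDK) that models when a Spot VM is evicted for price:

```python
def evicted_for_price(current_price: float, max_price: float) -> bool:
    # Hypothetical model of the rule above: eviction for price occurs only
    # when the variable regional price rises above the customer's maximum.
    return current_price > max_price

# The VM keeps running while the price stays at or below the maximum.
assert evicted_for_price(0.012, 0.05) is False
assert evicted_for_price(0.060, 0.05) is True
```

The prices here are illustrative; actual Spot prices vary by size, SKU, and region, and capacity-based eviction can occur regardless of price.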
+
+When it comes to eviction, you have two policy options to choose between:
+
+* Stop/Deallocate (default): when evicted, the VM is deallocated, but you keep (and pay for) underlying disks. This is ideal for cases where the state is stored on disks.
+* Delete: when evicted, the VM and underlying disks are deleted.
+
+While similar in idea, there are a few key differences between these two purchasing options:
+
+| | **Low Priority VMs** | **Spot VMs** |
+||||
+| **Availability** | **Azure Batch** | **Single VMs, Virtual machine scale sets** |
+| **Pricing** | **Fixed pricing** | **Variable pricing with ability to set maximum price** |
+| **Eviction/Preemption** | **Preempted when Azure needs the capacity. Tasks on preempted node VMs are re-queued and run again.** | **Evicted when Azure needs the capacity or if the price exceeds your maximum. If evicted for price and afterward the price goes below your maximum, the VM will not be automatically restarted.** |
+
+## Migration steps
+
+Customers in User Subscription mode can include Spot VMs using the following steps:
+
+1. In the Azure portal, select the Batch account and view the existing pool or create a new pool.
+2. Under **Scale**, users can choose 'Target dedicated nodes' or 'Target Spot/low-priority nodes.'
+
+ ![Scale Target Nodes](../batch/media/certificates/lowpriorityvms-scale-target-nodes.png)
+
+3. Navigate to the existing Pool and select 'Scale' to update the number of Spot nodes required based on the job scheduled.
+4. Click **Save**.
+
+Customers in Batch Managed mode must recreate the Batch account, pool, and jobs under User Subscription mode to take advantage of spot VMs.
+
+## FAQ
+
+* How do I create a new Batch account/job/pool?
+
+ Refer to the quick start [link](./batch-account-create-portal.md) on creating a new Batch account/pool/task.
+
+* Are Spot VMs available in Batch Managed mode?
+
+ No, Spot VMs are available in User Subscription mode - Batch accounts only.
+
+* What is the pricing and eviction policy of Spot VMs? Can I view pricing history and eviction rates?
+
+ Refer to [Spot VMs](../virtual-machines/spot-vms.md) for more information on using Spot VMs. Yes, you can see historical pricing and eviction rates per size in a region in the portal.
+
+## Next steps
+
+Use the [CLI](../virtual-machines/linux/spot-cli.md), [portal](../virtual-machines/spot-portal.md), [ARM template](../virtual-machines/linux/spot-template.md), or [PowerShell](../virtual-machines/windows/spot-powershell.md) to deploy Azure Spot Virtual Machines.
+
+You can also deploy a [scale set with Azure Spot Virtual Machine instances](../virtual-machine-scale-sets/use-spot.md).
center-sap-solutions Install Software https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/center-sap-solutions/install-software.md
In this how-to guide, you'll learn how to upload and install all the required co
- A [network set up for your infrastructure deployment](prepare-network.md). - A deployment of S/4HANA infrastructure. - The SSH private key for the virtual machines in the SAP system. You generated this key during the infrastructure deployment.-- If you're installing a Highly Available (HA) SAP system, get the Service Principal identifier (SPN ID) and password to authorize the Azure fence agent (STONITH device) against Azure resources. For more information, see [Use Azure CLI to create an Azure AD app and configure it to access Media Services API](/azure/media-services/previous/media-services-cli-create-and-configure-aad-app). For an example, see the Red Hat documentation for [Creating an Azure Active Directory Application](https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/deploying_red_hat_enterprise_linux_7_on_public_cloud_platforms/configuring-rhel-high-availability-on-azure_cloud-content#azure-create-an-azure-directory-application-in-ha_configuring-rhel-high-availability-on-azure).
+- If you're installing a Highly Available (HA) SAP system, get the Service Principal identifier (SPN ID) and password to authorize the Azure fence agent (fencing device) against Azure resources. For more information, see [Use Azure CLI to create an Azure AD app and configure it to access Media Services API](/azure/media-services/previous/media-services-cli-create-and-configure-aad-app). For an example, see the Red Hat documentation for [Creating an Azure Active Directory Application](https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/deploying_red_hat_enterprise_linux_7_on_public_cloud_platforms/configuring-rhel-high-availability-on-azure_cloud-content#azure-create-an-azure-directory-application-in-ha_configuring-rhel-high-availability-on-azure).
To avoid frequent password expiry, use the Azure Command-Line Interface (Azure CLI) to create the Service Principal identifier and password instead of the Azure portal.
To install the SAP software on Azure, use the ACSS installation wizard.
1. For **SAP FQDN**, provide a fully qualified domain name (FQDN) for your SAP system. For example, `sap.contoso.com`.
- 1. For High Availability (HA) systems only, enter the client identifier for the STONITH Fencing Agent service principal for **Fencing client ID**.
+ 1. For High Availability (HA) systems only, enter the client identifier for the Fencing Agent service principal for **Fencing client ID**.
- 1. For High Availability (HA) systems only, enter the password for the STONITH Fencing Agent service principal for **Fencing client password**.
+ 1. For High Availability (HA) systems only, enter the password for the Fencing Agent service principal for **Fencing client password**.
1. For **SSH private key**, provide the SSH private key that you created or selected as part of your infrastructure deployment.
cognitive-services Autoscale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/autoscale.md
Autoscale feature is available for the following
* [Computer Vision](computer-vision/index.yml) * [Language](language-service/overview.md) (only available for sentiment analysis, key phrase extraction, named entity recognition, and text analytics for health)
-* [Form Recognizer](/azure/applied-ai-services/form-recognizer/overview?tabs=v3-0)
+* [Form Recognizer](../applied-ai-services/form-recognizer/overview.md?tabs=v3-0)
### Can I test this feature using a free subscription?
No, the autoscale feature is not available to free tier subscriptions.
- [Plan and Manage costs for Azure Cognitive Services](./plan-manage-costs.md). - [Optimize your cloud investment with Azure Cost Management](../cost-management-billing/costs/cost-mgt-best-practices.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn). - Learn about how to [prevent unexpected costs](../cost-management-billing/cost-management-billing-overview.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn).-- Take the [Cost Management](/learn/paths/control-spending-manage-bills?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn) guided learning course.
+- Take the [Cost Management](/learn/paths/control-spending-manage-bills?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn) guided learning course.
communication-services Certified Session Border Controllers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/telephony/certified-session-border-controllers.md
If you have any questions about the SBC certification program for Communication
|Vendor|Product|Software version| |: |: |:
-|[AudioCodes](https://www.audiocodes.com/media/lbjfezwn/mediant-sbc-with-microsoft-azure-communication-services.pdf)|Mediant SBC|7.40A
+|[AudioCodes](https://www.audiocodes.com/media/lbjfezwn/mediant-sbc-with-microsoft-azure-communication-services.pdf)|Mediant SBC VE|7.40A
|[Metaswitch](https://manuals.metaswitch.com/Perimeta/V4.9/AzureCommunicationServicesIntegrationGuide/Source/notices.html)|Perimeta SBC|4.9| |[Oracle](https://www.oracle.com/technical-resources/documentation/acme-packet.html)|Oracle Acme Packet SBC|8.4| |Ribbon Communications|[SBC SWe / SBC 5400 / SBC 7000](https://support.sonus.net/display/ALLDOC/Ribbon+Configurations+with+Azure+Communication+Services+Direct+Routing)|9.02| ||[SBC SWe Lite / SBC 1000 / SBC 2000](https://support.sonus.net/display/UXDOC90/Best+Practice+-+Configure+SBC+Edge+for+Azure+Communication+Services+Direct+Routing)|9.0
+|[TE-SYSTEMS](https://community.te-systems.de/community-download/files?fileId=9624)|anynode|4.6|
Note the certification granted to a major version. That means that firmware with any number in the SBC firmware following the major version is supported.
communication-services Call Automation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/voice-video-calling/call-automation.md
The following list presents the set of features that are currently available in
| Pre-call scenarios | Answer a one-to-one call | ✔️ | ✔️ | | | Answer a group call | ✔️ | ✔️ | | | Place new outbound call to one or more endpoints | ✔️ | ✔️ |
-| | Redirect* (forward) a call to one or more endpoints | ✔️ | ✔️ |
+| | Redirect (forward) a call to one or more endpoints | ✔️ | ✔️ |
| | Reject an incoming call | ✔️ | ✔️ | | Mid-call scenarios | Add one or more endpoints to an existing call | ✔️ | ✔️ | | | Play Audio from an audio file | ✔️ | ✔️ | | | Remove one or more endpoints from an existing call| ✔️ | ✔️ |
-| | Blind Transfer** a call to another endpoint | ✔️ | ✔️ |
+| | Blind Transfer* a call to another endpoint | ✔️ | ✔️ |
| | Hang up a call (remove the call leg) | ✔️ | ✔️ | | | Terminate a call (remove all participants and end call)| ✔️ | ✔️ | | Query scenarios | Get the call state | ✔️ | ✔️ | | | Get a participant in a call | ✔️ | ✔️ | | | List all participants in a call | ✔️ | ✔️ |
-*Redirecting a call to a phone number is currently not supported.
-
-**Transfer of VoIP call to a phone number is currently not supported.
+*Transfer of VoIP call to a phone number is currently not supported.
## Architecture
communication-services File Sharing Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/tutorials/file-sharing-tutorial.md
Note that the tutorial above assumes that your Azure blob storage container allo
For downloading the files you upload to Azure blob storage, you can use shared access signatures (SAS). A shared access signature (SAS) provides secure delegated access to resources in your storage account. With a SAS, you have granular control over how a client can access your data.
-The downloadable [GitHub sample](https://github.com/Azure-Samples/communication-services-javascript-quickstarts/tree/main/ui-library-filesharing-chat-composite) showcases the use of SAS for creating SAS URLs to Azure Storage contents. Additionally, you can [read more about SAS](/azure/storage/common/storage-sas-overview).
+The downloadable [GitHub sample](https://github.com/Azure-Samples/communication-services-javascript-quickstarts/tree/main/ui-library-filesharing-chat-composite) showcases the use of SAS for creating SAS URLs to Azure Storage contents. Additionally, you can [read more about SAS](../../storage/common/storage-sas-overview.md).
UI Library requires a React environment to be setup. Next we will do that. If you already have a React App, you can skip this section.
You may also want to:
- [Add chat to your app](../quickstarts/chat/get-started.md) - [Creating user access tokens](../quickstarts/access-tokens.md) - [Learn about client and server architecture](../concepts/client-and-server-architecture.md)-- [Learn about authentication](../concepts/authentication.md)
+- [Learn about authentication](../concepts/authentication.md)
container-registry Container Registry Soft Delete Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-soft-delete-policy.md
+
+ Title: Enable soft delete policy
+description: Learn how to enable a soft delete policy in your Azure Container Registry for recovering accidentally deleted artifacts for a set retention period.
+ Last updated : 04/19/2022+++
+# Enable soft delete policy in Azure Container Registry (Preview)
+
+Azure Container Registry (ACR) allows you to enable the *soft delete policy* to recover any accidentally deleted artifacts for a set retention period.
++++
+This feature is available in all the service tiers (also known as SKUs). For information about registry service tiers, see [Azure Container Registry service tiers](container-registry-skus.md).
+
+> [!NOTE]
+>The soft deleted artifacts are billed as per the active SKU pricing for storage.
+
+This article gives you an overview of the soft delete policy and walks you through the step-by-step process to enable it using the Azure CLI and Azure portal.
+
+You can use the Azure Cloud Shell or a local installation of the Azure CLI to run the command examples in this article. If you'd like to use it locally, version 2.0.74 or later is required. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI](/cli/azure/install-azure-cli).
+
+## Prerequisites
+
+* The user will require the following permissions (at the registry level) to perform soft delete operations:
+
+ | Permission | Description |
+ |||
+ | Microsoft.ContainerRegistry/registries/deleted/read | List soft-deleted artifacts |
+ | Microsoft.ContainerRegistry/registries/deleted/restore/action | Restore soft-deleted artifact |
+
+## About soft delete policy
+
+The soft delete policy can be enabled/disabled at your convenience.
+
+Once you enable the soft delete policy, ACR manages deleted artifacts as soft deleted artifacts with a set retention period. You then have the ability to list, filter, and restore the soft deleted artifacts. Once the retention period is complete, all soft deleted artifacts are auto-purged.
+
+## Retention period
+
+The default retention period is seven days. It's possible to set the retention period to a value between one and 90 days. The user can set, update, and change the retention policy value at any time. Soft deleted artifacts expire once the retention period is complete.
+
+## Auto-purge
+
+The auto-purge runs every 24 hours. The auto-purge always considers the current value of `retention days` before permanently deleting the soft deleted artifacts.
+For example, after five days of soft deleting the artifact, if the user changes the value of retention days from seven to 14 days, the artifact will only expire after 14 days from the initial soft delete.
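The example above can be sketched as a small calculation (a hedged illustration, not service code): expiry is always the soft-delete date plus the *current* retention value.

```python
from datetime import date, timedelta

def purge_date(soft_deleted_on: date, current_retention_days: int) -> date:
    # Auto-purge uses the current retention value, counted from the
    # original soft-delete date (not from when retention was changed).
    return soft_deleted_on + timedelta(days=current_retention_days)

deleted = date(2022, 9, 1)
assert purge_date(deleted, 7) == date(2022, 9, 8)    # original 7-day policy
assert purge_date(deleted, 14) == date(2022, 9, 15)  # after raising to 14 days
```

Raising retention from seven to 14 days on day five therefore pushes the purge out to day 14 after the soft delete, as described above.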
+
+## Preview limitations
+
+* ACR currently doesn't support manually purging soft deleted artifacts.
+* The soft delete policy doesn't support a geo-replicated registry.
+* ACR doesn't allow enabling both the retention policy and the soft delete policy. See [retention policy for untagged manifests.](container-registry-retention-policy.md)
+
+## Enable soft delete policy for registry - CLI
+
+1. Update soft delete policy for a given `MyRegistry` ACR with a retention period set between 1 to 90 days.
+
+ ```azurecli-interactive
+ az acr config soft-delete update -r MyRegistry --days 7 --status <enabled/disabled>
+ ```
+
+2. Show configured soft delete policy for a given `MyRegistry` ACR.
+
+ ```azurecli-interactive
+ az acr config soft-delete show -r MyRegistry
+ ```
+
+### List the soft-delete artifacts- CLI
+
+The `az acr repository list-deleted` commands enable fetching and listing of the soft deleted repositories. For more information use `--help`.
+
+1. List the soft deleted repositories in a given `MyRegistry` ACR.
+
+ ```azurecli-interactive
+ az acr repository list-deleted -n MyRegistry
+ ```
+
+The `az acr manifest list-deleted` command fetches and lists the soft deleted manifests.
+
+2. List the soft deleted manifests of a `hello-world` repository in a given `MyRegistry` ACR.
+
+ ```azurecli-interactive
+ az acr manifest list-deleted -r MyRegistry -n hello-world
+ ```
+
+The `az acr manifest list-deleted-tags` command fetches and lists the soft deleted tags.
+
+3. List the soft deleted tags of a `hello-world` repository in a given `MyRegistry` ACR.
+
+ ```azurecli-interactive
+ az acr manifest list-deleted-tags -r MyRegistry -n hello-world
+ ```
+
+4. Filter the soft deleted tags of a `hello-world` repository to match tag `latest` in a given `MyRegistry` ACR.
+
+ ```azurecli-interactive
+ az acr manifest list-deleted-tags -r MyRegistry -n hello-world:latest
+ ```
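The `list-deleted` commands emit JSON. As a quick sketch, here's one way to pull the digest recorded for a specific deleted tag out of that output with standard shell tools. The JSON below is a hypothetical, abbreviated shape for illustration only, not the command's guaranteed output format:

```shell
# Hypothetical, abbreviated output of `az acr manifest list-deleted-tags`
sample='[
  {"tag": "latest", "digest": "sha256:abc123"},
  {"tag": "v1",     "digest": "sha256:def456"}
]'

# Extract the digest recorded for the deleted "latest" tag
echo "$sample" | grep '"tag": "latest"' | grep -o 'sha256:[a-f0-9]*'
```

The recovered digest can then be passed to `az acr manifest restore` with `-d`, as shown in the next section.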
+
+### Restore the soft delete artifacts - CLI
+
+The `az acr manifest restore` command restores a single image by tag and digest.
+
+1. Restore the image of a `hello-world` repository by tag `latest` and digest `sha256:abc123` in a given `MyRegistry` ACR.
+
+ ```azurecli-interactive
+ az acr manifest restore -r MyRegistry -n hello-world:latest -d sha256:abc123
+ ```
+
+2. Restore the most recently deleted manifest of a `hello-world` repository by tag `latest` in a given `MyRegistry` ACR.
+
+ ```azurecli-interactive
+ az acr manifest restore -r MyRegistry -n hello-world:latest
+ ```
+
+A force restore overwrites any existing tag with the same name in the repository. If the soft delete policy is enabled during a force restore, the overwritten tag is soft deleted. You can force restore with the `--force` (`-f`) argument.
+
+3. Force restore the image of a `hello-world` repository by tag `latest` and digest `sha256:abc123` in a given `MyRegistry` ACR.
+
+ ```azurecli-interactive
+ az acr manifest restore -r MyRegistry -n hello-world:latest -d sha256:abc123 -f
+ ```
+
+> [!IMPORTANT]
+>* Restoring a [manifest list](push-multi-architecture-images.md#manifest-list) won't recursively restore any underlying soft deleted manifests.
+>* If you're restoring soft deleted [ORAS artifacts](container-registry-oras-artifacts.md), restoring a subject doesn't recursively restore its referrer chain. Also, the subject must be restored before any of its referrer manifests can be restored; otherwise the restore fails with an error.
+
+## Enable soft delete policy for registry - Portal
+
+You can also enable a registry's soft delete policy in the [Azure portal](https://portal.azure.com).
+
+1. Navigate to your Azure Container Registry.
+2. In the **Overview** tab, verify the status of **Soft Delete** (Preview).
+3. If the **Status** is **Disabled**, select **Update**.
+4. Select the checkbox to **Enable Soft Delete**.
+5. Select a retention period between `0` and `90` days for the soft deleted artifacts.
+6. Select **Save** to save your changes.
+### Restore the soft deleted artifacts - Portal
+
+1. Navigate to your Azure Container Registry.
+2. In the **Menu** section, select **Services**, and then select **Repositories**.
+3. In **Repositories**, select your preferred **Repository**.
+4. Select **Manage deleted artifacts** to see all the soft deleted artifacts.
+
+> [!NOTE]
+> Once you enable the soft delete policy and perform actions such as untagging a manifest or deleting an artifact, you can find these tags and artifacts in **Manage deleted artifacts** until the retention period expires.
+5. Filter for the deleted artifact that you want to restore.
+6. Select the artifact, and then select **Restore** in the right column.
+7. A **Restore Artifact** window appears.
+8. Select the tag to restore. Here, you can also choose to recover any additional tags.
+9. Select **Restore**.
+### Restore from soft deleted repositories - Portal
+
+1. Navigate to your Azure Container Registry.
+2. In the **Menu** section, select **Services**.
+3. In the **Services** tab, select **Repositories**.
+4. In the **Repositories** tab, select **Manage Deleted Repositories**.
+5. Filter for the deleted repository in **Soft Deleted Repositories** (Preview).
+6. Select the deleted repository, and then filter for the deleted artifact in **Manage deleted artifacts**.
+7. Select the artifact, and then select **Restore** in the right column.
+8. A **Restore Artifact** window appears.
+9. Select the tag to restore. Here, you can also choose to recover any additional tags.
+10. Select **Restore**.
+> [!IMPORTANT]
+>* Importing a soft deleted image at both source and target resources is blocked.
+>* Pushing an image to the soft deleted repository will restore the soft deleted repository.
+>* Pushing an image that shares the same manifest digest as a soft deleted image isn't allowed. Instead, restore the soft deleted image.
+
+## Next steps
+
+* Learn more about options to [delete images and repositories](container-registry-delete.md) in Azure Container Registry.
cost-management-billing Tutorial Acm Opt Recommendations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/costs/tutorial-acm-opt-recommendations.md
The **Impact** category, along with the **Potential yearly savings**, are design
High impact recommendations include: - [Buy reserved virtual machine instances to save money over pay-as-you-go costs](../../advisor/advisor-reference-cost-recommendations.md#buy-virtual-machine-reserved-instances-to-save-money-over-pay-as-you-go-costs)-- [Optimize virtual machine spend by resizing or shutting down underutilized instances](../../advisor/advisor-cost-recommendations.md#optimize-virtual-machine-spend-by-resizing-or-shutting-down-underutilized-instances)
+- [Optimize virtual machine spend by resizing or shutting down underutilized instances](../../advisor/advisor-cost-recommendations.md#optimize-virtual-machine-vm-or-virtual-machine-scale-set-vmss-spend-by-resizing-or-shutting-down-underutilized-instances)
- [Use Standard Storage to store Managed Disks snapshots](../../advisor/advisor-reference-cost-recommendations.md#use-standard-storage-to-store-managed-disks-snapshots) Medium impact recommendations include:
data-factory Concepts Parameters Variables https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/concepts-parameters-variables.md
Last updated 09/13/2022
# Pipeline parameters and variables in Azure Data Factory and Azure Synapse Analytics + This article helps you understand the difference between pipeline parameters and variables in Azure Data Factory and Azure Synapse Analytics and how to use them to control your pipeline. ## Pipeline parameters
data-factory Connector Google Sheets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-google-sheets.md
+
+ Title: Transform data in Google Sheets (Preview)
+
+description: Learn how to transform data in Google Sheets (Preview) by using Data Factory or Azure Synapse Analytics.
+Last updated: 08/30/2022
+# Transform data in Google Sheets (Preview) using Azure Data Factory or Synapse Analytics
++
+This article outlines how to use Data Flow to transform data in Google Sheets (Preview). To learn more, read the introductory article for [Azure Data Factory](introduction.md) or [Azure Synapse Analytics](../synapse-analytics/overview-what-is.md).
+
+> [!IMPORTANT]
+> This connector is currently in preview. You can try it out and give us feedback. If you want to take a dependency on preview connectors in your solution, please contact [Azure support](https://azure.microsoft.com/support/).
+
+## Supported capabilities
+
+This Google Sheets connector is supported for the following capabilities:
+
+| Supported capabilities|IR |
+|| --|
+|[Mapping data flow](concepts-data-flow-overview.md) (source/-)|&#9312; |
+
+<small>*&#9312; Azure integration runtime &#9313; Self-hosted integration runtime*</small>
+
+For a list of data stores that are supported as sources/sinks, see the [Supported data stores](connector-overview.md#supported-data-stores) table.
+
+## Create a Google Sheets linked service using UI
+
+Use the following steps to create a Google Sheets linked service in the Azure portal UI.
+
+1. Browse to the Manage tab in your Azure Data Factory or Synapse workspace and select Linked Services, then select New:
+
+ # [Azure Data Factory](#tab/data-factory)
+
+ :::image type="content" source="media/doc-common-process/new-linked-service.png" alt-text="Screenshot of creating a new linked service with Azure Data Factory U I.":::
+
+ # [Azure Synapse](#tab/synapse-analytics)
+
+ :::image type="content" source="media/doc-common-process/new-linked-service-synapse.png" alt-text="Screenshot of creating a new linked service with Azure Synapse U I.":::
+
+2. Search for Google Sheets (Preview) and select the Google Sheets (Preview) connector.
+
+ :::image type="content" source="media/connector-google-sheets/google-sheets-connector.png" alt-text="Screenshot showing selecting Google Sheets connector.":::
+
+3. Configure the service details, test the connection, and create the new linked service.
+
+ :::image type="content" source="media/connector-google-sheets/configure-google-sheets-linked-service.png" alt-text="Screenshot of configuration for Google Sheets linked service.":::
+
+## Connector configuration details
+
+The following sections provide information about properties that are used to define Data Factory and Synapse pipeline entities specific to Google Sheets.
+
+## Linked service properties
+
+The following properties are supported for the Google Sheets linked service:
+
+| Property | Description | Required |
+|: |: |: |
+| type | The type property must be set to **GoogleSheets**. | Yes |
+| apiToken | Specify an API token for Google Sheets. Mark this field as **SecureString** to store it securely. Or, you can [reference a secret stored in Azure Key Vault](store-credentials-in-key-vault.md). |Yes |
+
+**Example:**
+
+```json
+{
+ "name": "GoogleSheetsLinkedService",
+ "properties": {
+ "type": "GoogleSheets",
+ "typeProperties": {
+ "apiToken": {
+ "type": "SecureString",
+ "value": "<API token>"
+ }
+ }
+ }
+}
+```
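If you'd rather not embed the token inline, the `apiToken` can instead reference a secret stored in Azure Key Vault. A sketch of that variant (the linked service name and secret name here are placeholders to replace with your own):

```json
{
    "name": "GoogleSheetsLinkedService",
    "properties": {
        "type": "GoogleSheets",
        "typeProperties": {
            "apiToken": {
                "type": "AzureKeyVaultSecretReference",
                "store": {
                    "referenceName": "<Azure Key Vault linked service name>",
                    "type": "LinkedServiceReference"
                },
                "secretName": "<name of secret holding the API token>"
            }
        }
    }
}
```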
+
+## Mapping data flow properties
+
+When transforming data in mapping data flow, you can read resources from Google Sheets. For more information, see the [source transformation](data-flow-source.md) in mapping data flows. You can only use an [inline dataset](data-flow-source.md#inline-datasets) as source type.
++
+### Source transformation
+
+The table below lists the properties supported by the Google Sheets source. You can edit these properties in the **Source options** tab.
+
+| Name | Description | Required | Allowed values | Data flow script property |
+| - | -- | -- | -- | - |
+| SpreadSheet ID | The spreadsheet ID in your Google Sheets. Make sure the general access of the spreadsheet is set as **Anyone with the link**. | Yes | String | spreadSheetId |
+| Sheet name | The name of the sheet in the spreadsheet. | Yes | String | sheetName |
+| Start cell | The start cell of the sheet from which the data is read, for example, A2 or B4. | Yes | String | startCell |
+| End cell | The end cell of the sheet up to which the data is read, for example, F10 or S600. | Yes | String | endCell |
+
+#### Google Sheets source script example
+
+When you use Google Sheets as the source type, the associated data flow script is:
+
+```
+source(allowSchemaDrift: true,
+ validateSchema: false,
+ store: 'googlesheets',
+ format: 'rest',
+ spreadSheetId: $spreadSheetId,
+ startCell: 'A2',
+ endCell: 'F10',
+ sheetName: 'Sheet1') ~> GoogleSheetsSource
+```
+
+## Next steps
+
+For a list of data stores supported as sources and sinks by the copy activity, see [Supported data stores](copy-activity-overview.md#supported-data-stores-and-formats).
data-factory Connector Rest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-rest.md
Previously updated : 08/30/2022 Last updated : 09/14/2022
For a list of data stores that are supported as sources/sinks, see [Supported da
Specifically, this generic REST connector supports: - Copying data from a REST endpoint by using the **GET** or **POST** methods and copying data to a REST endpoint by using the **POST**, **PUT** or **PATCH** methods.-- Copying data by using one of the following authentications: **Anonymous**, **Basic**, **Service Principal**, and **user-assigned managed identity**.
+- Copying data by using one of the following authentications: **Anonymous**, **Basic**, **Service Principal**, **OAuth2 Client Credential**, **System Assigned Managed Identity** and **User Assigned Managed Identity**.
- **[Pagination](#pagination-support)** in the REST APIs. - For REST as source, copying the REST JSON response [as-is](#export-json-response-as-is) or parse it by using [schema mapping](copy-activity-schema-and-type-mapping.md#schema-mapping). Only response payload in **JSON** is supported.
For different authentication types, see the corresponding sections for details.
- [Basic authentication](#use-basic-authentication) - [Service Principal authentication](#use-service-principal-authentication) - [OAuth2 Client Credential authentication](#use-oauth2-client-credential-authentication)
+- [System-assigned managed identity authentication](#managed-identity)
- [User-assigned managed identity authentication](#use-user-assigned-managed-identity-authentication) - [Anonymous authentication](#using-authentication-headers)
Set the **authenticationType** property to **Basic**. In addition to the generic
} ``` + ### Use Service Principal authentication Set the **authenticationType** property to **AadServicePrincipal**. In addition to the generic properties that are described in the preceding section, specify the following properties:
Set the **authenticationType** property to **OAuth2ClientCredential**. In additi
} ```
+### <a name="managed-identity"></a> Use system-assigned managed identity authentication
+
+Set the **authenticationType** property to **ManagedServiceIdentity**. In addition to the generic properties that are described in the preceding section, specify the following properties:
+
+| Property | Description | Required |
+|: |: |: |
+| aadResourceId | Specify the AAD resource for which you're requesting authorization, for example, `https://management.core.windows.net`. | Yes |
+
+**Example**
+
+```json
+{
+ "name": "RESTLinkedService",
+ "properties": {
+ "type": "RestService",
+ "typeProperties": {
+ "url": "<REST endpoint e.g. https://www.example.com/>",
+ "authenticationType": "ManagedServiceIdentity",
+ "aadResourceId": "<AAD resource URL e.g. https://management.core.windows.net>"
+ },
+ "connectVia": {
+ "referenceName": "<name of Integration Runtime>",
+ "type": "IntegrationRuntimeReference"
+ }
+ }
+}
+```
+ ### Use user-assigned managed identity authentication Set the **authenticationType** property to **ManagedServiceIdentity**. In addition to the generic properties that are described in the preceding section, specify the following properties:
data-factory Copy Activity Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/copy-activity-overview.md
To configure it programmatically, add the `additionalColumns` property in your c
} ] ```
+>[!TIP]
+>After configuring additional columns, remember to map them to your destination sink in the **Mapping** tab.
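As an illustration, an explicit copy activity mapping that carries a hypothetical additional column `filePath` through to the sink could look like this (a sketch; the column names are placeholders, not required names):

```json
"translator": {
    "type": "TabularTranslator",
    "mappings": [
        {
            "source": { "name": "Column1" },
            "sink": { "name": "Column1" }
        },
        {
            "source": { "name": "filePath" },
            "sink": { "name": "SourceFilePath" }
        }
    ]
}
```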
## Auto create sink tables
ddos-protection Ddos Disaster Recovery Guidance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ddos-protection/ddos-disaster-recovery-guidance.md
A: The virtual network and the resources in the affected region remains inaccess
![Simple Virtual Network Diagram.](../virtual-network/media/virtual-network-disaster-recovery-guidance/vnet.png)
-**Q: What can I to do re-create the same virtual network in a different region?**
+**Q: What can I do to re-create the same virtual network in a different region?**
A: Virtual networks are fairly lightweight resources. You can invoke Azure APIs to create a VNet with the same address space in a different region. To recreate the same environment that was present in the affected region, you make API calls to redeploy the resources in the VNets that you had. If you have on-premises connectivity, such as in a hybrid deployment, you have to deploy a new VPN Gateway, and connect to your on-premises network.
To create a virtual network, see [Create a virtual network](../virtual-network/m
## Next steps -- Learn how to [create a DDoS protection plan](manage-ddos-protection.md).
+- Learn how to [create a DDoS protection plan](manage-ddos-protection.md).
ddos-protection Ddos Protection Reference Architectures https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ddos-protection/ddos-protection-reference-architectures.md
For more information on this reference architecture, see the [Extend Azure HDIns
documentation.
-> [!NOTE]
-> Azure App Service Environment for Power Apps or API management in a virtual network with a public IP are both not natively supported.
- ## Hub-and-spoke network topology with Azure Firewall and Azure Bastion This reference architecture details a hub-and-spoke topology with Azure Firewall inside the hub as a DMZ for scenarios that require central control over security aspects. Azure Firewall is a managed firewall as a service and is placed in its own subnet. Azure Bastion is deployed and placed in its own subnet.
Azure DDoS Protection Standard is enabled on the hub virtual network. Therefore,
DDoS Protection Standard is designed for services that are deployed in a virtual network. For more information, see [Deploy dedicated Azure service into virtual networks](../virtual-network/virtual-network-for-azure-services.md#services-that-can-be-deployed-into-a-virtual-network). > [!NOTE]
-> DDoS Protection Standard protects the Public IPs of Azure resource. DDoS Protection Basic, which requires no configuration and is enabled by default, only protects the Azure underlying platform infrastructure (e.g. Azure DNS). For more information, see [Azure DDoS Protection Standard overview](ddos-protection-overview.md).
+> DDoS Protection Standard protects the Public IPs of Azure resource. DDoS infrastructure protection, which requires no configuration and is enabled by default, only protects the Azure underlying platform infrastructure (e.g. Azure DNS). For more information, see [Azure DDoS Protection Standard overview](ddos-protection-overview.md).
For more information about hub-and-spoke topology, see [Hub-spoke network topology](/azure/architecture/reference-architectures/hybrid-networking/hub-spoke?tabs=cli). ## Next steps
ddos-protection Telemetry https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ddos-protection/telemetry.md
na Previously updated : 08/25/2022 Last updated : 09/14/2022
Telemetry for an attack is provided through Azure Monitor in real time. While [m
You can view DDoS telemetry for a protected public IP address through three different resource types: DDoS protection plan, virtual network, and public IP address. +
+### Metrics
+
+The metric names present different packet types, and bytes vs. packets, with a basic construct of tag names on each metric as follows:
+- **Dropped tag name** (for example, **Inbound Packets Dropped DDoS**): The number of packets dropped/scrubbed by the DDoS protection system.
+- **Forwarded tag name** (for example, **Inbound Packets Forwarded DDoS**): The number of packets forwarded by the DDoS system to the destination VIP - traffic that was not filtered.
+- **No tag name** (for example, **Inbound Packets DDoS**): The total number of packets that came into the scrubbing system - representing the sum of the packets dropped and forwarded.
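To make the relationship between the tag names concrete, the untagged metric is simply the sum of the dropped and forwarded counts. A trivial sketch with hypothetical sample values (not taken from a real resource):

```shell
# Hypothetical sample values for one time slice:
dropped=1200     # Inbound Packets Dropped DDoS
forwarded=8800   # Inbound Packets Forwarded DDoS

# Inbound Packets DDoS = dropped + forwarded
total=$((dropped + forwarded))
echo "Inbound Packets DDoS = $total"
```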
> [!NOTE] > While multiple options for **Aggregation** are displayed on Azure portal, only the aggregation types listed in the table below are supported for each metric. We apologize for this confusion and we are working to resolve it.
+The following [metrics](../azure-monitor/essentials/metrics-supported.md#microsoftnetworkpublicipaddresses) are available for Azure DDoS Protection Standard. These metrics are also exportable via diagnostic settings (see [View and configure DDoS diagnostic logging](diagnostic-logging.md)).
+
+| Metric | Metric Display Name | Unit | Aggregation Type | Description |
+| | | | | |
+| BytesDroppedDDoS | Inbound bytes dropped DDoS | BytesPerSecond | Maximum | Inbound bytes dropped DDoS |
+| BytesForwardedDDoS | Inbound bytes forwarded DDoS | BytesPerSecond | Maximum | Inbound bytes forwarded DDoS |
+| BytesInDDoS | Inbound bytes DDoS | BytesPerSecond | Maximum | Inbound bytes DDoS |
+| DDoSTriggerSYNPackets | Inbound SYN packets to trigger DDoS mitigation | CountPerSecond | Maximum | Inbound SYN packets to trigger DDoS mitigation |
+| DDoSTriggerTCPPackets | Inbound TCP packets to trigger DDoS mitigation | CountPerSecond | Maximum | Inbound TCP packets to trigger DDoS mitigation |
+| DDoSTriggerUDPPackets | Inbound UDP packets to trigger DDoS mitigation | CountPerSecond | Maximum | Inbound UDP packets to trigger DDoS mitigation |
+| IfUnderDDoSAttack | Under DDoS attack or not | Count | Maximum | Under DDoS attack or not |
+| PacketsDroppedDDoS | Inbound packets dropped DDoS | CountPerSecond | Maximum | Inbound packets dropped DDoS |
+| PacketsForwardedDDoS | Inbound packets forwarded DDoS | CountPerSecond | Maximum | Inbound packets forwarded DDoS |
+| PacketsInDDoS | Inbound packets DDoS | CountPerSecond | Maximum | Inbound packets DDoS |
+| TCPBytesDroppedDDoS | Inbound TCP bytes dropped DDoS | BytesPerSecond | Maximum | Inbound TCP bytes dropped DDoS |
+| TCPBytesForwardedDDoS | Inbound TCP bytes forwarded DDoS | BytesPerSecond | Maximum | Inbound TCP bytes forwarded DDoS |
+| TCPBytesInDDoS | Inbound TCP bytes DDoS | BytesPerSecond | Maximum | Inbound TCP bytes DDoS |
+| TCPPacketsDroppedDDoS | Inbound TCP packets dropped DDoS | CountPerSecond | Maximum | Inbound TCP packets dropped DDoS |
+| TCPPacketsForwardedDDoS | Inbound TCP packets forwarded DDoS | CountPerSecond | Maximum | Inbound TCP packets forwarded DDoS |
+| TCPPacketsInDDoS | Inbound TCP packets DDoS | CountPerSecond | Maximum | Inbound TCP packets DDoS |
+| UDPBytesDroppedDDoS | Inbound UDP bytes dropped DDoS | BytesPerSecond | Maximum | Inbound UDP bytes dropped DDoS |
+| UDPBytesForwardedDDoS | Inbound UDP bytes forwarded DDoS | BytesPerSecond | Maximum | Inbound UDP bytes forwarded DDoS |
+| UDPBytesInDDoS | Inbound UDP bytes DDoS | BytesPerSecond | Maximum | Inbound UDP bytes DDoS |
+| UDPPacketsDroppedDDoS | Inbound UDP packets dropped DDoS | CountPerSecond | Maximum | Inbound UDP packets dropped DDoS |
+| UDPPacketsForwardedDDoS | Inbound UDP packets forwarded DDoS | CountPerSecond | Maximum | Inbound UDP packets forwarded DDoS |
+| UDPPacketsInDDoS | Inbound UDP packets DDoS | CountPerSecond | Maximum | Inbound UDP packets DDoS |
+ ### View metrics from DDoS protection plan
defender-for-cloud Enable Data Collection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/enable-data-collection.md
When you select a data collection tier in Microsoft Defender for Cloud, the secu
The enhanced security protections of Defender for Cloud are required for storing Windows security event data. Learn more about [the enhanced protection plans](defender-for-cloud-introduction.md).
-You maybe charged for storing data in Log Analytics. For more information, see the [pricing page](https://azure.microsoft.com/pricing/details/defender-for-cloud/).
+You may be charged for storing data in Log Analytics. For more information, see the [pricing page](https://azure.microsoft.com/pricing/details/defender-for-cloud/).
### Information for Microsoft Sentinel users
defender-for-cloud Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/release-notes.md
Learn more about [alert suppression rules](alerts-suppression-rules.md).
File integrity monitoring (FIM) examines operating system files and registries for changes that might indicate an attack.
-FIM is now available in a new version based on Azure Monitor Agent (AMA), which you can deploy through Defender for Cloud.
+FIM is now available in a new version based on Azure Monitor Agent (AMA), which you can [deploy through Defender for Cloud](auto-deploy-azure-monitoring-agent.md).
Learn more about [File Integrity Monitoring with the Azure Monitor Agent](file-integrity-monitoring-enable-ama.md).
defender-for-iot Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/release-notes.md
For more information, see the [Microsoft Security Development Lifecycle practice
## August 2022 -- **Sensor software version 22.2.5**: Minor version with stability improvements-- [New alert columns with timestamp data](#new-alert-columns-with-timestamp-data)-- [Sensor health from the Azure portal (Public preview)](#sensor-health-from-the-azure-portal-public-preview)
+|Service area |Updates |
+|||
+|**OT networks** |**Sensor software version 22.2.5**: Minor version with stability improvements<br><br>**Sensor software version 22.2.4**: [New alert columns with timestamp data](#new-alert-columns-with-timestamp-data)<br><br>**Sensor software version 22.1.3**: [Sensor health from the Azure portal (Public preview)](#sensor-health-from-the-azure-portal-public-preview) |
### New alert columns with timestamp data
event-grid Availability Zones Disaster Recovery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/availability-zones-disaster-recovery.md
Event Grid also provides [diagnostic logs schemas](diagnostic-logs.md) and [metr
## More information
-You may find more information availability zone resiliency and disaster recovery in Azure Event Grid in our [FAQ](/azure/event-grid/event-grid-faq).
+You may find more information availability zone resiliency and disaster recovery in Azure Event Grid in our [FAQ](./event-grid-faq.yml).
## Next steps -- If you want to implement your own disaster recovery plan for Azure Event Grid topics and domains, see [Build your own disaster recovery for custom topics in Event Grid](custom-disaster-recovery.md).
+- If you want to implement your own disaster recovery plan for Azure Event Grid topics and domains, see [Build your own disaster recovery for custom topics in Event Grid](custom-disaster-recovery.md).
event-grid Configure Custom Topic https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/configure-custom-topic.md
You can use similar steps to enable an identity for an event grid domain.
1. On the left menu, select **Configuration** under **Settings**. 2. For **Data residency**, select whether you don't want any data to be replicated to another region (**Regional**) or you want the metadata to be replicated to a predefined secondary region (**Cross-Geo**).
- The **Cross-Geo** option allows Microsoft-initiated failover to the paired region in case of a region failure. For more information, see [Server-side geo disaster recovery in Azure Event Grid](geo-disaster-recovery.md). Microsoft-initiated failover is exercised by Microsoft in rare situations to fail over Event Grid resources from an affected region to the corresponding geo-paired region. This process doesn't require an intervention from user. Microsoft reserves right to make a determination of when this path will be taken. The mechanism doesn't involve a user consent before the user's topic or domain is failed over. For more information, see [How do I recover from a failover?](/azure/event-grid/event-grid-faq).
+ The **Cross-Geo** option allows Microsoft-initiated failover to the paired region in case of a region failure. For more information, see [Server-side geo disaster recovery in Azure Event Grid](geo-disaster-recovery.md). Microsoft-initiated failover is exercised by Microsoft in rare situations to fail over Event Grid resources from an affected region to the corresponding geo-paired region. This process doesn't require an intervention from user. Microsoft reserves right to make a determination of when this path will be taken. The mechanism doesn't involve a user consent before the user's topic or domain is failed over. For more information, see [How do I recover from a failover?](./event-grid-faq.yml).
If you select the **Regional** option, you may define your own disaster recovery plan. For more information, see [Build your own disaster recovery plan for Azure Event Grid topics and domains](custom-disaster-recovery.md).
See the following samples to learn about publishing events to and consuming even
- [Azure Event Grid samples for Java](/samples/azure/azure-sdk-for-java/eventgrid-samples/) - [Azure Event Grid samples for Python](/samples/azure/azure-sdk-for-python/eventgrid-samples/) - [Azure Event Grid samples for JavaScript](/samples/azure/azure-sdk-for-js/eventgrid-javascript/)-- [Azure Event Grid samples for TypeScript](/samples/azure/azure-sdk-for-js/eventgrid-typescript/)
+- [Azure Event Grid samples for TypeScript](/samples/azure/azure-sdk-for-js/eventgrid-typescript/)
event-grid Create Custom Topic https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/create-custom-topic.md
This article shows how to create a custom topic or a domain in Azure Event Grid. ## Prerequisites
-If you new to Azure Event Grid, read through [Event Grid overview](overview.md) before starting this tutorial.
+If you're new to Azure Event Grid, read through [Event Grid overview](overview.md) before starting this tutorial.
[!INCLUDE [event-grid-register-provider-portal.md](../../includes/event-grid-register-provider-portal.md)]
On the **Security** page of the **Create Topic** or **Create Event Grid Domain*
:::image type="content" source="./media/create-custom-topic/data-residency.png" alt-text="Screenshot showing the Data residency section of the Advanced page in the Create Topic wizard.":::
- The **Cross-Geo** option allows Microsoft-initiated failover to the paired region in case of a region failure. For more information, see [Server-side geo disaster recovery in Azure Event Grid](geo-disaster-recovery.md). Microsoft-initiated failover is exercised by Microsoft in rare situations to fail over Event Grid resources from an affected region to the corresponding geo-paired region. This process doesn't require an intervention from user. Microsoft reserves right to make a determination of when this path will be taken. The mechanism doesn't involve a user consent before the user's topic or domain is failed over. For more information, see [How do I recover from a failover?](/azure/event-grid/event-grid-faq).
+ The **Cross-Geo** option allows Microsoft-initiated failover to the paired region in case of a region failure. For more information, see [Server-side geo disaster recovery in Azure Event Grid](geo-disaster-recovery.md). Microsoft-initiated failover is exercised by Microsoft in rare situations to fail over Event Grid resources from an affected region to the corresponding geo-paired region. This process doesn't require an intervention from user. Microsoft reserves right to make a determination of when this path will be taken. The mechanism doesn't involve a user consent before the user's topic or domain is failed over. For more information, see [How do I recover from a failover?](./event-grid-faq.yml).
If you select the **Regional** option, you may define your own disaster recovery plan. For more information, see [Build your own disaster recovery plan for Azure Event Grid topics and domains](custom-disaster-recovery.md). 3. Select **Next: Tags** to move to the **Tags** page.
event-grid Subscribe To Partner Events https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/subscribe-to-partner-events.md
Title: Azure Event Grid - Subscribe to partner events description: This article explains how to subscribe to events from a partner using Azure Event Grid. Previously updated : 06/09/2022 Last updated : 09/14/2022 # Subscribe to events published by a partner with Azure Event Grid
Here are the steps that a subscriber needs to perform to receive events from a p
You must grant your consent to the partner to create partner topics in a resource group that you designate. This authorization has an expiration time. It's effective for the time period you specify, between 1 and 365 days.

> [!IMPORTANT]
-> For a greater security stance, specify the minimum expiration time that offers the partner enough time to configure your events to flow to Event Grid and to provision your partner topic.
+> For a greater security stance, specify the minimum expiration time that offers the partner enough time to configure your events to flow to Event Grid and to provision your partner topic. Your partner won't be able to create resources (partner topics) in your Azure subscription after the authorization expiration time.
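The allowed expiration window can be illustrated with a small helper. This is a hypothetical function, not part of any Event Grid SDK; it only encodes the 1 to 365 day constraint described above:

```python
from datetime import datetime, timedelta, timezone

def partner_authorization_expiration(days: int) -> datetime:
    """Return a UTC expiration timestamp for a partner authorization.

    Event Grid accepts an expiration between 1 and 365 days; after that
    time the partner can no longer create partner topics for you.
    """
    if not 1 <= days <= 365:
        raise ValueError("authorization expiration must be between 1 and 365 days")
    return datetime.now(timezone.utc) + timedelta(days=days)
```

Keeping `days` small, per the security guidance above, limits how long the partner can create resources in your subscription.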
> [!NOTE]
> Event Grid started enforcing authorization checks to create partner topics or partner destinations around June 30th, 2022.
The following example shows how to create a partner configuration resource that
1. Specify authorization expiration time.
1. Select **Add**.
- :::image type="content" source="./media/subscribe-to-partner-events/add-non-verified-partner.png" alt-text="Screenshot for granting a non-verified partner the authorization to create resources in your resource group.":::
+ :::image type="content" source="./media/subscribe-to-partner-events/add-non-verified-partner.png" alt-text="Screenshot for granting a non-verified partner the authorization to create resources in your resource group.":::
+
+ > [!IMPORTANT]
+ > Your partner won't be able to create resources (partner topics) in your Azure subscription after the authorization expiration time.
1. Back on the **Create Partner Configuration** page, verify that the partner is added to the partner authorization list at the bottom.
1. Select **Review + create** at the bottom of the page.
event-hubs Apache Kafka Configurations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/apache-kafka-configurations.md
Property | Recommended Values | Permitted Range | Notes
|---|---|---|---|
-`retries` | > 0 | | Default is 2. We recommend that you keep this value.
+`retries` | 2 | | Default is 2147483647.
`request.timeout.ms` | 30000 .. 60000 | > 20000 | Event Hubs will internally default to a minimum of 20,000 ms. `librdkafka` default value is 5000, which can be problematic. *While requests with lower timeout values are accepted, client behavior isn't guaranteed.*
`partitioner` | `consistent_random` | See librdkafka documentation | `consistent_random` is default and best. Empty and null keys are handled ideally for most cases.
`compression.codec` | `none` | | Compression currently not supported.
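The recommendations in the table can be collected into a librdkafka-style property set. This is a minimal sketch: the namespace name is a placeholder, and the `bootstrap.servers`, `security.protocol`, and `sasl.mechanism` entries are standard Event Hubs Kafka settings assumed here rather than taken from the table:

```python
# librdkafka-style producer properties reflecting the recommendations above.
# "mynamespace" is a placeholder Event Hubs namespace.
producer_config = {
    "bootstrap.servers": "mynamespace.servicebus.windows.net:9093",
    "security.protocol": "SASL_SSL",
    "sasl.mechanism": "PLAIN",
    "retries": 2,                        # recommended value; client default is 2147483647
    "request.timeout.ms": 60000,         # keep at or above the 20,000 ms Event Hubs minimum
    "partitioner": "consistent_random",  # librdkafka default; handles empty/null keys well
    "compression.codec": "none",         # compression is currently not supported
}
```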
firewall-manager Check Point Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall-manager/check-point-overview.md
Check Point unifies multiple security services under one umbrella. Integrated se
Threat Emulation (sandboxing) protects users from unknown and zero-day threats. Check Point SandBlast Zero-Day Protection is a cloud-hosted sandboxing technology where files are quickly quarantined and inspected. It runs in a virtual sandbox to discover malicious behavior before it enters your network. It prevents threats before damage is done, saving staff valuable time in responding to threats.
+>[!NOTE]
+> This offering provides limited features compared to the [Check Point NVA integration with Virtual WAN](../virtual-wan/about-nva-hub.md#partners). We strongly recommend using this NVA integration to secure your network traffic.
## Deployment example

Watch the following video to see how to deploy Check Point CloudGuard Connect as a trusted Azure security partner.
Watch the following video to see how to deploy Check Point CloudGuard Connect as
## Next steps

-- [Deploy a security partner provider](deploy-trusted-security-partner.md)
+- [Deploy a security partner provider](deploy-trusted-security-partner.md)
frontdoor Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/best-practices.md
This article summarizes best practices for using Azure Front Door.
### Avoid combining Traffic Manager and Front Door
-For most solutions, you should use *either* Front Door *or* [Azure Traffic Manager](/azure/traffic-manager/traffic-manager-overview).
+For most solutions, you should use *either* Front Door *or* [Azure Traffic Manager](../traffic-manager/traffic-manager-overview.md).
Traffic Manager is a DNS-based load balancer. It sends traffic directly to your origin's endpoints. In contrast, Front Door terminates connections at points of presence (PoPs) near to the client and establishes separate long-lived connections to the origins. The products work differently and are intended for different use cases.
For more information, see [Select the certificate for Azure Front Door to deploy
### Use the same domain name on Front Door and your origin
-Front Door can rewrite the `Host` header of incoming requests. This feature can be helpful when you manage a set of customer-facing custom domain names that route to a single origin. The feature can also help when you want to avoid configuring custom domain names in Front Door and at your origin. However, when you rewrite the `Host` header, request cookies and URL redirections might break. In particular, when you use platforms like Azure App Service, features like [session affinity](/azure/app-service/configure-common#configure-general-settings) and [authentication and authorization](/azure/app-service/overview-authentication-authorization) might not work correctly.
+Front Door can rewrite the `Host` header of incoming requests. This feature can be helpful when you manage a set of customer-facing custom domain names that route to a single origin. The feature can also help when you want to avoid configuring custom domain names in Front Door and at your origin. However, when you rewrite the `Host` header, request cookies and URL redirections might break. In particular, when you use platforms like Azure App Service, features like [session affinity](../app-service/configure-common.md#configure-general-settings) and [authentication and authorization](../app-service/overview-authentication-authorization.md) might not work correctly.
Before you rewrite the `Host` header of your requests, carefully consider whether your application is going to work correctly.
For more information, see [Supported HTTP methods for health probes](health-prob
## Next steps
-Learn how to [create an Front Door profile](create-front-door-portal.md).
+Learn how to [create a Front Door profile](create-front-door-portal.md).
hdinsight Apache Hadoop Linux Create Cluster Get Started Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hadoop/apache-hadoop-linux-create-cluster-get-started-portal.md
keywords: hadoop getting started,hadoop linux,hadoop quickstart,hive getting sta
Previously updated : 02/24/2020 Last updated : 09/15/2022 #Customer intent: As a data analyst, I need to create a Hadoop cluster in Azure HDInsight using Azure portal and run a Hive job
hdinsight Apache Hbase Provision Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hbase/apache-hbase-provision-vnet.md
description: Get started using HBase in Azure HDInsight. Learn how to create HDI
Previously updated : 12/23/2019 Last updated : 09/15/2022 # Create Apache HBase clusters on HDInsight in Azure Virtual Network
hdinsight Apache Hbase Replication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hbase/apache-hbase-replication.md
description: Learn how to set up HBase replication from one HDInsight version to
Previously updated : 12/06/2019 Last updated : 09/15/2022 # Set up Apache HBase cluster replication in Azure virtual networks
hdinsight Hdinsight Hadoop Use Data Lake Storage Gen1 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-hadoop-use-data-lake-storage-gen1.md
description: Learn how to query data from Azure Data Lake Storage Gen1 and to st
Previously updated : 04/24/2020 Last updated : 09/15/2022 # Use Data Lake Storage Gen1 with Azure HDInsight clusters
hdinsight Hdinsight Os Patching https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-os-patching.md
description: Learn how to configure OS patching schedule for Linux-based HDInsig
Previously updated : 08/30/2021 Last updated : 09/15/2022 # Configure the OS patching schedule for Linux-based HDInsight clusters
hdinsight Hdinsight Use Availability Zones https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-use-availability-zones.md
description: Learn how to create an Azure HDInsight cluster that uses Availabili
Previously updated : 09/01/2021 Last updated : 09/15/2022 # Create an HDInsight cluster that uses Availability Zones (Preview)
hdinsight Apache Hadoop Connect Hive Power Bi Directquery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/interactive-query/apache-hadoop-connect-hive-power-bi-directquery.md
description: Use Microsoft Power BI to visualize Interactive Query Hive data fro
Previously updated : 06/17/2019 Last updated : 09/15/2022 # Visualize Interactive Query Apache Hive data with Microsoft Power BI using direct query in HDInsight
In this article, you learned how to visualize data from HDInsight using Microsof
* [Connect Excel to Apache Hadoop by using Power Query](../hadoop/apache-hadoop-connect-excel-power-query.md).
* [Connect to Azure HDInsight and run Apache Hive queries using Data Lake Tools for Visual Studio](../hadoop/apache-hadoop-visual-studio-tools-get-started.md).
* [Use Azure HDInsight Tool for Visual Studio Code](../hdinsight-for-vscode.md).
-* [Upload Data to HDInsight](./../hdinsight-upload-data.md).
+* [Upload Data to HDInsight](./../hdinsight-upload-data.md).
hdinsight Apache Kafka Connector Iot Hub https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/kafka/apache-kafka-connector-iot-hub.md
description: Learn how to use Apache Kafka on HDInsight with Azure IoT Hub. The
Previously updated : 11/26/2019 Last updated : 09/15/2022 # Use Apache Kafka on HDInsight with Azure IoT Hub
hdinsight Apache Spark Run Machine Learning Automl https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/spark/apache-spark-run-machine-learning-automl.md
Title: Run Azure Machine Learning workloads on Apache Spark in HDInsight
description: Learn how to run Azure Machine Learning workloads with automated machine learning (AutoML) on Apache Spark in Azure HDInsight. Previously updated : 12/13/2019 Last updated : 09/15/2022 # Run Azure Machine Learning workloads with automated machine learning on Apache Spark in HDInsight
iot-develop About Iot Sdks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-develop/about-iot-sdks.md
The SDKs are available in **multiple languages** providing the flexibility to ch
| Language | Package | Source | Quickstarts | Samples | Reference |
| :-- | :-- | :-- | :-- | :-- | :-- |
-| **.NET** | [NuGet](https://www.nuget.org/packages/Microsoft.Azure.Devices.Client) | [GitHub](https://github.com/Azure/azure-iot-sdk-csharp) | [IoT Hub](quickstart-send-telemetry-iot-hub.md?pivots=programming-language-csharp) / [IoT Central](quickstart-send-telemetry-central.md?pivots=programming-language-csharp) | [Samples](https://github.com/Azure-Samples/azure-iot-samples-csharp) | [Reference](/dotnet/api/microsoft.azure.devices.client) |
+| **.NET** | [NuGet](https://www.nuget.org/packages/Microsoft.Azure.Devices.Client) | [GitHub](https://github.com/Azure/azure-iot-sdk-csharp) | [IoT Hub](quickstart-send-telemetry-iot-hub.md?pivots=programming-language-csharp) / [IoT Central](quickstart-send-telemetry-central.md?pivots=programming-language-csharp) | [Samples](https://github.com/Azure/azure-iot-sdk-csharp/tree/main/iothub/device/samples) | [Reference](/dotnet/api/microsoft.azure.devices.client) |
| **Python** | [pip](https://pypi.org/project/azure-iot-device/) | [GitHub](https://github.com/Azure/azure-iot-sdk-python) | [IoT Hub](quickstart-send-telemetry-iot-hub.md?pivots=programming-language-python) / [IoT Central](quickstart-send-telemetry-central.md?pivots=programming-language-python) | [Samples](https://github.com/Azure/azure-iot-sdk-python/tree/main/samples) | [Reference](/python/api/azure-iot-device) | | **Node.js** | [npm](https://www.npmjs.com/package/azure-iot-device) | [GitHub](https://github.com/Azure/azure-iot-sdk-node) | [IoT Hub](quickstart-send-telemetry-iot-hub.md?pivots=programming-language-nodejs) / [IoT Central](quickstart-send-telemetry-central.md?pivots=programming-language-nodejs) | [Samples](https://github.com/Azure/azure-iot-sdk-node/tree/main/device/samples) | [Reference](/javascript/api/azure-iot-device/) | | **Java** | [Maven](https://mvnrepository.com/artifact/com.microsoft.azure.sdk.iot/iot-device-client) | [GitHub](https://github.com/Azure/azure-iot-sdk-java) | [IoT Hub](quickstart-send-telemetry-iot-hub.md?pivots=programming-language-java) / [IoT Central](quickstart-send-telemetry-central.md?pivots=programming-language-java) | [Samples](https://github.com/Azure/azure-iot-sdk-java/tree/master/device/iot-device-samples) | [Reference](/java/api/com.microsoft.azure.sdk.iot.device) |
iot-develop Libraries Sdks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-develop/libraries-sdks.md
The IoT Plug and Play libraries and SDKs enable developers to build IoT solution
| Language | Package | Code Repository | Samples | Quickstart | Reference |
|---|---|---|---|---|---|
| C - Device | [vcpkg 1.3.9](https://github.com/Azure/azure-iot-sdk-c/blob/master/doc/setting_up_vcpkg.md) | [GitHub](https://github.com/Azure/azure-iot-sdk-c) | [Samples](https://github.com/Azure/azure-iot-sdk-c/tree/master/iothub_client/samples/pnp) | [Connect to IoT Hub](tutorial-connect-device.md) | [Reference](/azure/iot-hub/iot-c-sdk-ref/) |
-| .NET - Device | [NuGet 1.31.0](https://www.nuget.org/packages/Microsoft.Azure.Devices.Client) | [GitHub](https://github.com/Azure/azure-iot-sdk-csharp/tree/main/) | [Samples](https://github.com/Azure-Samples/azure-iot-samples-csharp/tree/main/iot-hub/Samples/device/PnpDeviceSamples) | [Connect to IoT Hub](tutorial-connect-device.md) | [Reference](/dotnet/api/microsoft.azure.devices.client) |
+| .NET - Device | [NuGet 1.41.2](https://www.nuget.org/packages/Microsoft.Azure.Devices.Client) | [GitHub](https://github.com/Azure/azure-iot-sdk-csharp/tree/main/) | [Samples](https://github.com/Azure/azure-iot-sdk-csharp/tree/main/iothub/device/samples/solutions/PnpDeviceSamples) | [Connect to IoT Hub](tutorial-connect-device.md) | [Reference](/dotnet/api/microsoft.azure.devices.client) |
| Java - Device | [Maven 1.26.0](https://mvnrepository.com/artifact/com.microsoft.azure.sdk.iot/iot-device-client) | [GitHub](https://github.com/Azure/azure-iot-sdk-jav) | [Reference](/java/api/com.microsoft.azure.sdk.iot.device) | | Python - Device | [pip 2.3.0](https://pypi.org/project/azure-iot-device/) | [GitHub](https://github.com/Azure/azure-iot-sdk-python) | [Samples](https://github.com/Azure/azure-iot-sdk-python/tree/main/samples/pnp) | [Connect to IoT Hub](tutorial-connect-device.md) | [Reference](/python/api/azure-iot-device/azure.iot.device) | | Node - Device | [npm 1.17.2](https://www.npmjs.com/package/azure-iot-device)  | [GitHub](https://github.com/Azure/azure-iot-sdk-node) | [Samples](https://github.com/Azure/azure-iot-sdk-node/tree/main/device/samples/javascript/) | [Connect to IoT Hub](tutorial-connect-device.md) | [Reference](/javascript/api/azure-iot-device/) |
The IoT Plug and Play libraries and SDKs enable developers to build IoT solution
| Platform | Package | Code Repository | Samples | Quickstart | Reference |
|---|---|---|---|---|---|
-| .NET - IoT Hub service | [NuGet 1.27.1](https://www.nuget.org/packages/Microsoft.Azure.Devices ) | [GitHub](https://github.com/Azure/azure-iot-sdk-csharp) | [Samples](https://github.com/Azure-Samples/azure-iot-samples-csharp/tree/main/iot-hub/Samples/service/PnpServiceSamples) | N/A | [Reference](/dotnet/api/microsoft.azure.devices) |
+| .NET - IoT Hub service | [NuGet 1.38.1](https://www.nuget.org/packages/Microsoft.Azure.Devices ) | [GitHub](https://github.com/Azure/azure-iot-sdk-csharp) | [Samples](https://github.com/Azure/azure-iot-sdk-csharp/tree/main/iothub/service/samples/solutions/PnpServiceSamples) | N/A | [Reference](/dotnet/api/microsoft.azure.devices) |
| Java - IoT Hub service | [Maven 1.26.0](https://mvnrepository.com/artifact/com.microsoft.azure.sdk.iot/iot-service-client/1.26.0) | [GitHub](https://github.com/Azure/azure-iot-sdk-java) | [Samples](https://github.com/Azure/azure-iot-sdk-java/tree/main/service/iot-service-samples/pnp-service-sample) | N/A | [Reference](/java/api/com.microsoft.azure.sdk.iot.service) | | Node - IoT Hub service | [npm 1.13.0](https://www.npmjs.com/package/azure-iothub) | [GitHub](https://github.com/Azure/azure-iot-sdk-node) | [Samples](https://github.com/Azure/azure-iot-sdk-node/tree/main/service/samples) | N/A | [Reference](/javascript/api/azure-iothub/) | | Python - IoT Hub service | [pip 2.2.3](https://pypi.org/project/azure-iot-hub) | [GitHub](https://github.com/Azure/azure-iot-hub-python) | [Samples](https://github.com/Azure/azure-iot-hub-python/tree/main/samples) | N/A | [Reference](/python/api/azure-iot-hub/) |
iot-develop Tutorial Migrate Device To Module https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-develop/tutorial-migrate-device-to-module.md
Add a module called **my-module** to the **my-module-device**:
If you haven't already done so, clone the Azure IoT Hub Device C# SDK GitHub repository to your local machine:
-Open a command prompt in a folder of your choice. Use the following command to clone the [Azure IoT C# Samples](https://github.com/Azure-Samples/azure-iot-samples-csharp) GitHub repository into this location:
+Open a command prompt in a folder of your choice. Use the following command to clone the [Azure IoT C# SDK](https://github.com/Azure/azure-iot-sdk-csharp) GitHub repository into this location:
```cmd
-git clone https://github.com/Azure-Samples/azure-iot-samples-csharp.git
+git clone https://github.com/Azure/azure-iot-sdk-csharp.git
```

## Prepare the project

To open and prepare the sample project:
-1. Open the *azure-iot-sdk-csharp\iot-hub\Samples\device\PnpDeviceSamples\Thermostat\Thermostat.csproj* project file in Visual Studio 2019.
+1. Open the *azure-iot-sdk-csharp\iothub\device\samples\solutions\PnpDeviceSamples\Thermostat\Thermostat.csproj* project file in Visual Studio 2019.
1. In Visual Studio, navigate to **Project > Thermostat Properties > Debug**. Then add the following environment variables to the project:
To open and prepare the sample project:
| IOTHUB_DEVICE_SECURITY_TYPE | connectionString |
| IOTHUB_MODULE_CONNECTION_STRING | The module connection string you made a note of previously |
- To learn more about the sample configuration, see the [sample readme](https://github.com/Azure-Samples/azure-iot-samples-csharp/blob/main/iot-hub/Samples/device/PnpDeviceSamples/readme.md).
+ To learn more about the sample configuration, see the [sample readme](https://github.com/Azure/azure-iot-sdk-csharp/tree/main/iothub/device/samples/solutions/PnpDeviceSamples#readme).
## Modify the code
iot-develop Tutorial Multiple Components https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-develop/tutorial-multiple-components.md
zone_pivot_groups: programming-languages-set-twenty-six
:::zone pivot="programming-language-ansi-c"
:::zone-end
iot-dps How To Legacy Device Symm Key https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-dps/how-to-legacy-device-symm-key.md
This tutorial also assumes that the device update takes place in a secure enviro
This tutorial is oriented toward a Windows-based workstation. However, you can perform the procedures on Linux. For a Linux example, see [Tutorial: Provision for geo latency](how-to-provision-multitenant.md).

> [!NOTE]
-> The sample used in this tutorial is written in C. There is also a [C# device provisioning symmetric key sample](https://github.com/Azure-Samples/azure-iot-samples-csharp/tree/main/provisioning/Samples/device/SymmetricKeySample) available. To use this sample, download or clone the [azure-iot-samples-csharp](https://github.com/Azure-Samples/azure-iot-samples-csharp) repository and follow the in-line instructions in the sample code. You can follow the instructions in this tutorial to create a symmetric key enrollment group using the portal and to find the ID Scope and enrollment group primary and secondary keys needed to run the sample. You can also create individual enrollments using the sample.
+> The sample used in this tutorial is written in C. There is also a [C# device provisioning symmetric key sample](https://github.com/Azure/azure-iot-sdk-csharp/tree/main/provisioning/device/samples/How%20To/SymmetricKeySample) available. To use this sample, download or clone the [azure-iot-sdk-csharp](https://github.com/Azure/azure-iot-sdk-csharp) repository and follow the in-line instructions in the sample code. You can follow the instructions in this tutorial to create a symmetric key enrollment group using the portal and to find the ID Scope and enrollment group primary and secondary keys needed to run the sample. You can also create individual enrollments using the sample.
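For background on the enrollment group keys mentioned in the note: DPS derives each device's symmetric key by signing its registration ID with the enrollment group key using HMAC-SHA256. A minimal sketch with placeholder values (not real credentials):

```python
import base64
import hashlib
import hmac

def derive_device_key(registration_id: str, group_key_base64: str) -> str:
    """Derive a per-device symmetric key from an enrollment group key."""
    group_key = base64.b64decode(group_key_base64)
    signed = hmac.new(group_key, registration_id.encode("utf-8"), hashlib.sha256).digest()
    return base64.b64encode(signed).decode("utf-8")

# Placeholder registration ID and group key, for illustration only.
device_key = derive_device_key(
    "symm-key-csharp-device-01",
    base64.b64encode(b"a-32-byte-placeholder-group-key!").decode("utf-8"),
)
```

The derived key, not the group key itself, is what an individual device uses to authenticate with DPS.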
## Prerequisites
iot-dps How To Send Additional Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-dps/how-to-send-additional-data.md
If the custom allocation policy webhook wishes to return some data to the device
This feature is available in C, C#, Java, and Node.js client SDKs. To learn more about the Azure IoT SDKs available for IoT Hub and the IoT Hub Device Provisioning Service, see [Microsoft Azure IoT SDKs](https://github.com/Azure/azure-iot-sdks).
-[IoT Plug and Play (PnP)](../iot-develop/overview-iot-plug-and-play.md) devices use the payload to send their model ID when they register with DPS. You can find examples of this usage in the PnP samples in the SDK or sample repositories. For example, [C# PnP thermostat](https://github.com/Azure-Samples/azure-iot-samples-csharp/blob/main/iot-hub/Samples/device/PnpDeviceSamples/Thermostat/Program.cs) or [Node.js PnP temperature controller](https://github.com/Azure/azure-iot-sdk-node/blob/main/device/samples/javascript/pnp_temperature_controller.js).
+[IoT Plug and Play (PnP)](../iot-develop/overview-iot-plug-and-play.md) devices use the payload to send their model ID when they register with DPS. You can find examples of this usage in the PnP samples in the SDK or sample repositories. For example, [C# PnP thermostat](https://github.com/Azure/azure-iot-sdk-csharp/blob/main/iothub/device/samples/solutions/PnpDeviceSamples/Thermostat/Program.cs) or [Node.js PnP temperature controller](https://github.com/Azure/azure-iot-sdk-node/blob/main/device/samples/javascript/pnp_temperature_controller.js).
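As a sketch of what that payload looks like: a PnP device sends a small JSON object whose `modelId` property carries its DTMI. The model ID and registration ID below are hypothetical, and the body shape follows the DPS register request:

```python
import json

# Hypothetical DTMI; a real device sends the model ID of its own DTDL model.
registration_payload = {"modelId": "dtmi:com:example:Thermostat;1"}

# The payload rides along in the DPS register request body.
body = json.dumps({"registrationId": "my-device-01", "payload": registration_payload})
```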
## IoT Edge support
iot-dps How To Verify Certificates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-dps/how-to-verify-certificates.md
Now, you need to sign the *Verification Code* with the private key associated wi
Microsoft provides tools and samples that can help you create a signed verification certificate:

- The **Azure IoT Hub C SDK** provides PowerShell (Windows) and Bash (Linux) scripts to help you create CA and leaf certificates for development and to perform proof-of-possession using a verification code. You can download the [files](https://github.com/Azure/azure-iot-sdk-c/tree/master/tools/CACertificates) relevant to your system to a working folder and follow the instructions in the [Managing CA certificates readme](https://github.com/Azure/azure-iot-sdk-c/blob/master/tools/CACertificates/CACertificateOverview.md) to perform proof-of-possession on a CA certificate.
-- The **Azure IoT Hub C# SDK** contains the [Group Certificate Verification Sample](https://github.com/Azure-Samples/azure-iot-samples-csharp/tree/main/provisioning/Samples/service/GroupCertificateVerificationSample), which you can use to do proof-of-possession.
+- The **Azure IoT Hub C# SDK** contains the [Group Certificate Verification Sample](https://github.com/Azure/azure-iot-sdk-csharp/tree/main/provisioning/service/samples/How%20To/GroupCertificateVerificationSample), which you can use to do proof-of-possession.
> [!IMPORTANT]
> In addition to performing proof-of-possession, the PowerShell and Bash scripts cited previously also allow you to create root certificates, intermediate certificates, and leaf certificates that can be used to authenticate and provision devices. These certificates should be used for development only. They should never be used in a production environment.
iot-dps Libraries Sdks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-dps/libraries-sdks.md
The DPS device SDKs provide implementations of the [Register](/rest/api/iot-dps/
| Platform | Package | Code repository | Samples | Quickstart | Reference |
| --|--|--|--|--|--|
-| .NET|[NuGet](https://www.nuget.org/packages/Microsoft.Azure.Devices.Provisioning.Client/) |[GitHub](https://github.com/Azure/azure-iot-sdk-csharp/)|[Samples](https://github.com/Azure-Samples/azure-iot-samples-csharp/tree/main/provisioning/Samples/device)|[Quickstart](./quick-create-simulated-device-x509.md?pivots=programming-language-csharp&tabs=windows)| [Reference](/dotnet/api/microsoft.azure.devices.provisioning.client) |
+| .NET|[NuGet](https://www.nuget.org/packages/Microsoft.Azure.Devices.Provisioning.Client/) |[GitHub](https://github.com/Azure/azure-iot-sdk-csharp/)|[Samples](https://github.com/Azure/azure-iot-sdk-csharp/tree/main/provisioning/device/samples)|[Quickstart](./quick-create-simulated-device-x509.md?pivots=programming-language-csharp&tabs=windows)| [Reference](/dotnet/api/microsoft.azure.devices.provisioning.client) |
| C|[apt-get, MBED, Arduino IDE or iOS](https://github.com/Azure/azure-iot-sdk-c/blob/master/readme.md#packages-and-libraries)|[GitHub](https://github.com/Azure/azure-iot-sdk-c/blob/master/provisioning\_client)|[Samples](https://github.com/Azure/azure-iot-sdk-c/tree/main/provisioning_client/samples)|[Quickstart](./quick-create-simulated-device-x509.md?pivots=programming-language-ansi-c&tabs=windows)|[Reference](/azure/iot-hub/iot-c-sdk-ref/) | | Java|[Maven](https://mvnrepository.com/artifact/com.microsoft.azure.sdk.iot.provisioning/provisioning-device-client)|[GitHub](https://github.com/Azure/azure-iot-sdk-jav?pivots=programming-language-java&tabs=windows)|[Reference](/java/api/com.microsoft.azure.sdk.iot.provisioning.device) | | Node.js|[npm](https://www.npmjs.com/package/azure-iot-provisioning-device) |[GitHub](https://github.com/Azure/azure-iot-sdk-node/tree/main/provisioning)|[Samples](https://github.com/Azure/azure-iot-sdk-node/tree/main/provisioning/device/samples)|[Quickstart](./quick-create-simulated-device-x509.md?pivots=programming-language-nodejs&tabs=windows)|[Reference](/javascript/api/azure-iot-provisioning-device) |
The DPS service SDKs help you build backend applications to manage enrollments a
| Platform | Package | Code repository | Samples | Quickstart | Reference |
| --|--|--|--|--|--|
-| .NET|[NuGet](https://www.nuget.org/packages/Microsoft.Azure.Devices.Provisioning.Service/) |[GitHub](https://github.com/Azure/azure-iot-sdk-csharp/)|[Samples](https://github.com/Azure-Samples/azure-iot-samples-csharp/tree/main/provisioning/Samples/service)|[Quickstart](./quick-enroll-device-tpm.md?pivots=programming-language-csharp&tabs=symmetrickey)|[Reference](/dotnet/api/microsoft.azure.devices.provisioning.service) |
+| .NET|[NuGet](https://www.nuget.org/packages/Microsoft.Azure.Devices.Provisioning.Service/) |[GitHub](https://github.com/Azure/azure-iot-sdk-csharp/)|[Samples](https://github.com/Azure/azure-iot-sdk-csharp/tree/main/provisioning/service/samples)|[Quickstart](./quick-enroll-device-tpm.md?pivots=programming-language-csharp&tabs=symmetrickey)|[Reference](/dotnet/api/microsoft.azure.devices.provisioning.service) |
| Java|[Maven](https://mvnrepository.com/artifact/com.microsoft.azure.sdk.iot.provisioning/provisioning-service-client)|[GitHub](https://github.com/Azure/azure-iot-sdk-jav?pivots=programming-language-java&tabs=symmetrickey)|[Reference](/java/api/com.microsoft.azure.sdk.iot.provisioning.service) | | Node.js|[npm](https://www.npmjs.com/package/azure-iot-provisioning-service)|[GitHub](https://github.com/Azure/azure-iot-sdk-node/tree/main/provisioning)|[Samples](https://github.com/Azure/azure-iot-sdk-node/tree/main/provisioning/service/samples)|[Quickstart](./quick-enroll-device-tpm.md?pivots=programming-language-nodejs&tabs=symmetrickey)|[Reference](/javascript/api/azure-iot-provisioning-service) |
iot-dps Quick Create Simulated Device Symm Key https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-dps/quick-create-simulated-device-symm-key.md
In this section, you'll prepare a development environment that's used to build t
1. Open a Git CMD or Git Bash command-line environment.
-2. Clone the [Azure IoT Samples for C#](https://github.com/Azure-Samples/azure-iot-samples-csharp) GitHub repository using the following command:
+2. Clone the [Azure IoT SDK for C#](https://github.com/Azure/azure-iot-sdk-csharp) GitHub repository using the following command:
```cmd
- git clone https://github.com/Azure-Samples/azure-iot-samples-csharp.git
+ git clone https://github.com/Azure/azure-iot-sdk-csharp.git
```

::: zone-end
To update and run the provisioning sample with your device information:
:::image type="content" source="./media/quick-create-simulated-device-symm-key/extract-dps-endpoints.png" alt-text="Extract Device Provisioning Service endpoint information":::
-3. Open a command prompt and go to the *SymmetricKeySample* in the cloned samples repository:
+3. Open a command prompt and go to the *SymmetricKeySample* folder in the cloned SDK repository:
```cmd
- cd azure-iot-samples-csharp\provisioning\Samples\device\SymmetricKeySample
+ cd '.\azure-iot-sdk-csharp\provisioning\device\samples\How To\SymmetricKeySample\'
```

4. In the *SymmetricKeySample* folder, open *Parameters.cs* in a text editor. This file shows the parameters that are supported by the sample. Only the first three required parameters will be used in this article when running the sample. Review the code in this file. No changes are needed.
To update and run the provisioning sample with your device information:
7. You should now see something similar to the following output. A "TestMessage" string is sent to the hub as a test message.

```output
- D:\azure-iot-samples-csharp\provisioning\Samples\device\SymmetricKeySample>dotnet run --s 0ne00000A0A --i symm-key-csharp-device-01 --p sbDDeEzRuEuGKag+kQKV+T1QGakRtHpsERLP0yPjwR93TrpEgEh/Y07CXstfha6dhIPWvdD1nRxK5T0KGKA+nQ==
+ D:\azure-iot-sdk-csharp\provisioning\device\samples\How To\SymmetricKeySample>dotnet run --s 0ne00000A0A --i symm-key-csharp-device-01 --p sbDDeEzRuEuGKag+kQKV+T1QGakRtHpsERLP0yPjwR93TrpEgEh/Y07CXstfha6dhIPWvdD1nRxK5T0KGKA+nQ==
Initializing the device provisioning client... Initialized for registration Id symm-key-csharp-device-01.
iot-dps Quick Create Simulated Device Tpm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-dps/quick-create-simulated-device-tpm.md
In this section, you'll prepare a development environment used to build the [Azu
1. Open a Git CMD or Git Bash command-line environment.
-2. Clone the [Azure IoT Samples for C#](https://github.com/Azure-Samples/azure-iot-samples-csharp) GitHub repository using the following command:
+2. Clone the [Azure IoT SDK for C#](https://github.com/Azure/azure-iot-sdk-csharp) GitHub repository using the following command:
```cmd
- git clone https://github.com/Azure-Samples/azure-iot-samples-csharp.git
+ git clone https://github.com/Azure/azure-iot-sdk-csharp.git
``` ::: zone-end
In this section, you'll build and execute a sample that reads the endorsement ke
1. In a command prompt, change directories to the project directory for the TPM device provisioning sample. ```cmd
- cd .\azure-iot-samples-csharp\provisioning\Samples\device\TpmSample
+ cd ".\azure-iot-sdk-csharp\provisioning\device\samples\How To\TpmSample"
``` 2. Type the following command to build and run the TPM device provisioning sample. Copy the endorsement key returned from your TPM 2.0 hardware security module to use later when enrolling your device.
In this section, you'll configure sample code to use the [Advanced Message Queui
3. In a command prompt, change directories to the project directory for the TPM device provisioning sample. ```cmd
- cd .\azure-iot-samples-csharp\provisioning\Samples\device\TpmSample
+ cd ".\azure-iot-sdk-csharp\provisioning\device\samples\How To\TpmSample"
``` 4. Run the following command to register your device. Replace `<IdScope>` with the value for the DPS you just copied and `<RegistrationId>` with the value you used when creating the device enrollment.
iot-dps Quick Create Simulated Device X509 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-dps/quick-create-simulated-device-x509.md
In this section, you'll prepare a development environment that's used to build t
::: zone pivot="programming-language-csharp"
-1. In your Windows command prompt, clone the [Azure IoT Samples for C#](https://github.com/Azure-Samples/azure-iot-samples-csharp) GitHub repository using the following command:
+1. In your Windows command prompt, clone the [Azure IoT SDK for C#](https://github.com/Azure/azure-iot-sdk-csharp) GitHub repository using the following command:
```cmd
- git clone https://github.com/Azure-Samples/azure-iot-samples-csharp.git
+ git clone https://github.com/Azure/azure-iot-sdk-csharp.git
``` ::: zone-end
The C# sample code is set up to use X.509 certificates that are stored in a pass
1. Copy the PKCS12 formatted certificate file to the project directory for the X.509 device provisioning sample. The path given is relative to the location where you downloaded the sample repo. ```bash
- cp certificate.pfx ./azure-iot-samples-csharp/provisioning/Samples/device/X509Sample
+ cp certificate.pfx "./azure-iot-sdk-csharp/provisioning/device/samples/Getting Started/X509Sample"
``` You won't need the Git Bash prompt for the rest of this quickstart. However, you may want to keep it open to check your certificate if you have problems in later steps.
iot-dps Quick Enroll Device X509 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-dps/quick-enroll-device-x509.md
If you plan to explore the Azure IoT Hub Device Provisioning Service tutorials,
The [Azure IoT C SDK](https://github.com/Azure/azure-iot-sdk-c) has scripts that can help you create root CA, intermediate CA, and device certificates, and do proof-of-possession with the service to verify root and intermediate CA certificates. To learn more, see [Managing test CA certificates for samples and tutorials](https://github.com/Azure/azure-iot-sdk-c/blob/master/tools/CACertificates/CACertificateOverview.md).
-The [Group certificate verification sample](https://github.com/Azure-Samples/azure-iot-samples-csharp/tree/master/provisioning/Samples/service/GroupCertificateVerificationSample) in the [Azure IoT Samples for C# (.NET)](https://github.com/Azure-Samples/azure-iot-samples-csharp) shows how to do proof-of-possession in C# with an existing X.509 intermediate or root CA certificate.
+The [Group certificate verification sample](https://github.com/Azure/azure-iot-sdk-csharp/tree/main/provisioning/service/samples/How%20To/GroupCertificateVerificationSample) in the [Azure IoT SDK for C# (.NET)](https://github.com/Azure/azure-iot-sdk-csharp) shows how to do proof-of-possession in C# with an existing X.509 intermediate or root CA certificate.
:::zone-end
iot-edge Production Checklist https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/production-checklist.md
Since the IP address of an IoT hub can change without notice, always use the FQD
Some of these firewall rules are inherited from Azure Container Registry. For more information, see [Configure rules to access an Azure container registry behind a firewall](../container-registry/container-registry-firewall-access-rules.md).
-You can enable dedicated data endpoints in your Azure Container registry to avoid wildcard allowlisting of the *\*.blob.core.windows.net* FQDN. For more information, see [Enable dedicated data endpoints](/azure/container-registry/container-registry-firewall-access-rules#enable-dedicated-data-endpoints).
+You can enable dedicated data endpoints in your Azure Container registry to avoid wildcard allowlisting of the *\*.blob.core.windows.net* FQDN. For more information, see [Enable dedicated data endpoints](../container-registry/container-registry-firewall-access-rules.md#enable-dedicated-data-endpoints).
> [!NOTE] > To provide a consistent FQDN between the REST and data endpoints, beginning **June 15, 2020**, the Microsoft Container Registry data endpoint will change from `*.cdn.mscr.io` to `*.data.mcr.microsoft.com`.
These constraints can be applied to individual modules by using create options i
## Next steps * Learn more about [IoT Edge automatic deployment](module-deployment-monitoring.md).
-* See how IoT Edge supports [Continuous integration and continuous deployment](how-to-continuous-integration-continuous-deployment.md).
+* See how IoT Edge supports [Continuous integration and continuous deployment](how-to-continuous-integration-continuous-deployment.md).
machine-learning How To Auto Train Image Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-auto-train-image-models.md
Once the run completes, you can register the model that was created from the bes
[!INCLUDE [cli v2](../../includes/machine-learning-cli-v2.md)] ```yaml
-
+# CLI example not available; use the Python SDK.
``` # [Python SDK](#tab/python)
For a detailed description on task specific hyperparameters, please refer to [Hy
If you want to use tiling, and want to control tiling behavior, the following parameters are available: `tile_grid_size`, `tile_overlap_ratio` and `tile_predictions_nms_thresh`. For more details on these parameters please check [Train a small object detection model using AutoML](./how-to-use-automl-small-object-detect.md). -
+### Test the deployment
+See the [Test the deployment](./tutorial-auto-train-image-models.md#test-the-deployment) section of the tutorial to test the deployment and visualize detections from the model.
## Example notebooks
machine-learning How To Autoscale Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-autoscale-endpoints.md
The following snippet creates the autoscale profile:
> [!NOTE] > For more, see the [reference page for autoscale](/cli/azure/monitor/autoscale) +
+# [Python](#tab/python)
+
+Import modules:
+```python
+from azure.ai.ml import MLClient
+from azure.identity import DefaultAzureCredential
+from azure.mgmt.monitor import MonitorManagementClient
+from azure.mgmt.monitor.models import AutoscaleProfile, ScaleRule, MetricTrigger, ScaleAction, Recurrence, RecurrentSchedule
+import random
+import datetime
+```
+
+Define variables for the workspace, endpoint, and deployment:
+
+```python
+subscription_id = "<YOUR-SUBSCRIPTION-ID>"
+resource_group = "<YOUR-RESOURCE-GROUP>"
+workspace = "<YOUR-WORKSPACE>"
+
+endpoint_name = "<YOUR-ENDPOINT-NAME>"
+deployment_name = "blue"
+```
+
+Get Azure ML and Azure Monitor clients:
+
+```python
+credential = DefaultAzureCredential()
+ml_client = MLClient(
+ credential, subscription_id, resource_group, workspace
+)
+
+mon_client = MonitorManagementClient(
+ credential, subscription_id
+)
+```
+
+Get the endpoint and deployment objects:
+
+```python
+deployment = ml_client.online_deployments.get(
+ deployment_name, endpoint_name
+)
+
+endpoint = ml_client.online_endpoints.get(
+ endpoint_name
+)
+```
+
+Create an autoscale profile:
+
+```python
+# Set a unique name for autoscale settings for this deployment. The below will append a random number to make the name unique.
+autoscale_settings_name = f"autoscale-{endpoint_name}-{deployment_name}-{random.randint(0,1000)}"
+
+mon_client.autoscale_settings.create_or_update(
+ resource_group,
+ autoscale_settings_name,
+ parameters = {
+ "location" : endpoint.location,
+ "target_resource_uri" : deployment.id,
+ "profiles" : [
+ AutoscaleProfile(
+ name="my-scale-settings",
+ capacity={
+ "minimum" : 2,
+ "maximum" : 5,
+ "default" : 2
+ },
+ rules = []
+ )
+ ]
+ }
+)
+```
+ # [Studio](#tab/azure-studio) In [Azure Machine Learning studio](https://ml.azure.com), select your workspace and then select __Endpoints__ from the left side of the page. Once the endpoints are listed, select the one you want to configure.
The rule is part of the `my-scale-settings` profile (`autoscale-name` matches th
> [!NOTE] > For more information on the CLI syntax, see [`az monitor autoscale`](/cli/azure/monitor/autoscale). +
+# [Python](#tab/python)
+
+Create the rule definition:
+
+```python
+rule_scale_out = ScaleRule(
+ metric_trigger = MetricTrigger(
+ metric_name="CpuUtilizationPercentage",
+ metric_resource_uri = deployment.id,
+ time_grain = datetime.timedelta(minutes = 1),
+ statistic = "Average",
+ operator = "GreaterThan",
+ time_aggregation = "Last",
+ time_window = datetime.timedelta(minutes = 5),
+ threshold = 70
+ ),
+ scale_action = ScaleAction(
+ direction = "Increase",
+ type = "ChangeCount",
+ value = 2,
+ cooldown = datetime.timedelta(hours = 1)
+ )
+)
+```
+This rule refers to the last 5-minute average of `CpuUtilizationPercentage`, as set by the `metric_name`, `time_window`, and `time_aggregation` arguments. When the value of the metric is greater than the `threshold` of 70, two more VM instances are allocated.
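The effect of the rule above can be sketched in plain Python. This is illustrative only — it is not how Azure Monitor evaluates autoscale rules — and the function name and sample values are hypothetical:

```python
def evaluate_scale_out(cpu_samples, current_instances,
                       threshold=70, increment=2, maximum=5):
    """Illustrative sketch: average the per-minute CPU samples over the
    time window; if the average exceeds the threshold, add `increment`
    instances, capped at the profile maximum."""
    window_avg = sum(cpu_samples) / len(cpu_samples)
    if window_avg > threshold:
        return min(current_instances + increment, maximum)
    return current_instances

# Five 1-minute samples averaging 80% trigger a scale-out from 2 to 4 instances.
print(evaluate_scale_out([75, 80, 85, 78, 82], current_instances=2))  # 4
```

The `cooldown` in the real rule additionally suppresses further scale actions for an hour after each scale-out, which this sketch omits.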
+
+Update the `my-scale-settings` profile to include this rule:
+
+```python
+mon_client.autoscale_settings.create_or_update(
+ resource_group,
+ autoscale_settings_name,
+ parameters = {
+ "location" : endpoint.location,
+ "target_resource_uri" : deployment.id,
+ "profiles" : [
+ AutoscaleProfile(
+ name="my-scale-settings",
+ capacity={
+ "minimum" : 2,
+ "maximum" : 5,
+ "default" : 2
+ },
+ rules = [
+ rule_scale_out
+ ]
+ )
+ ]
+ }
+)
+```
+ # [Studio](#tab/azure-studio) In the __Rules__ section, select __Add a rule__. The __Scale rule__ page is displayed. Use the following information to populate the fields on this page:
When load is light, a scaling in rule can reduce the number of VM instances. The
:::code language="azurecli" source="~/azureml-examples-main/cli/deploy-moe-autoscale.sh" ID="scale_in_on_cpu_util" :::
+# [Python](#tab/python)
+
+Create the rule definition:
+
+```python
+rule_scale_in = ScaleRule(
+ metric_trigger = MetricTrigger(
+ metric_name="CpuUtilizationPercentage",
+ metric_resource_uri = deployment.id,
+ time_grain = datetime.timedelta(minutes = 1),
+ statistic = "Average",
+ operator = "LessThan",
+ time_aggregation = "Last",
+ time_window = datetime.timedelta(minutes = 5),
+ threshold = 30
+ ),
+ scale_action = ScaleAction(
+ direction = "Decrease",
+ type = "ChangeCount",
+ value = 1,
+ cooldown = datetime.timedelta(hours = 1)
+ )
+)
+```
+
+Update the `my-scale-settings` profile to include this rule:
+
+```python
+mon_client.autoscale_settings.create_or_update(
+ resource_group,
+ autoscale_settings_name,
+ parameters = {
+ "location" : endpoint.location,
+ "target_resource_uri" : deployment.id,
+ "profiles" : [
+ AutoscaleProfile(
+ name="my-scale-settings",
+ capacity={
+ "minimum" : 2,
+ "maximum" : 5,
+ "default" : 2
+ },
+ rules = [
+ rule_scale_out,
+ rule_scale_in
+ ]
+ )
+ ]
+ }
+)
+```
+
# [Studio](#tab/azure-studio) In the __Rules__ section, select __Add a rule__. The __Scale rule__ page is displayed. Use the following information to populate the fields on this page:
The previous rules applied to the deployment. Now, add a rule that applies to th
:::code language="azurecli" source="~/azureml-examples-main/cli/deploy-moe-autoscale.sh" ID="scale_up_on_request_latency" ::: +
+# [Python](#tab/python)
+
+Create the rule definition:
+
+```python
+rule_scale_out_endpoint = ScaleRule(
+ metric_trigger = MetricTrigger(
+ metric_name="RequestLatency",
+ metric_resource_uri = endpoint.id,
+ time_grain = datetime.timedelta(minutes = 1),
+ statistic = "Average",
+ operator = "GreaterThan",
+ time_aggregation = "Last",
+ time_window = datetime.timedelta(minutes = 5),
+ threshold = 70
+ ),
+ scale_action = ScaleAction(
+ direction = "Increase",
+ type = "ChangeCount",
+ value = 1,
+ cooldown = datetime.timedelta(hours = 1)
+ )
+)
+
+```
+This rule's `metric_resource_uri` field now refers to the endpoint rather than the deployment.
+
+Update the `my-scale-settings` profile to include this rule:
+
+```python
+mon_client.autoscale_settings.create_or_update(
+ resource_group,
+ autoscale_settings_name,
+ parameters = {
+ "location" : endpoint.location,
+ "target_resource_uri" : deployment.id,
+ "profiles" : [
+ AutoscaleProfile(
+ name="my-scale-settings",
+ capacity={
+ "minimum" : 2,
+ "maximum" : 5,
+ "default" : 2
+ },
+ rules = [
+ rule_scale_out,
+ rule_scale_in,
+ rule_scale_out_endpoint
+ ]
+ )
+ ]
+ }
+)
+```
+ # [Studio](#tab/azure-studio) From the bottom of the page, select __+ Add a scale condition__.
You can also create rules that apply only on certain days or at certain times. I
:::code language="azurecli" source="~/azureml-examples-main/cli/deploy-moe-autoscale.sh" ID="weekend_profile" :::
+# [Python](#tab/python)
+
+```python
+mon_client.autoscale_settings.create_or_update(
+ resource_group,
+ autoscale_settings_name,
+ parameters = {
+ "location" : endpoint.location,
+ "target_resource_uri" : deployment.id,
+ "profiles" : [
+ AutoscaleProfile(
+ name="Default",
+ capacity={
+ "minimum" : 2,
+ "maximum" : 2,
+ "default" : 2
+ },
+ recurrence = Recurrence(
+ frequency = "Week",
+ schedule = RecurrentSchedule(
+ time_zone = "Pacific Standard Time",
+ days = ["Saturday", "Sunday"],
+ hours = [],
+ minutes = []
+ )
+ )
+ )
+ ]
+ }
+)
+```
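The recurrence above pins capacity to two instances on weekends. As a plain-Python illustration of how such a weekly recurrence selects a profile (this is not SDK behavior; the function and the 2022 dates used are only for demonstration):

```python
from datetime import datetime

def active_capacity(now, weekend_days=("Saturday", "Sunday"),
                    weekend_capacity=2, default_capacity=None):
    """Illustrative only: return the pinned weekend capacity when the
    current day matches the recurrence schedule's `days` list, else
    fall back to the default profile (None here means 'use rules')."""
    day = now.strftime("%A")
    return weekend_capacity if day in weekend_days else default_capacity

# 2022-09-17 was a Saturday, so the weekend profile's capacity applies.
print(active_capacity(datetime(2022, 9, 17)))  # 2
```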
+ # [Studio](#tab/azure-studio) From the bottom of the page, select __+ Add a scale condition__. On the new scale condition, use the following information to populate the fields:
From the bottom of the page, select __+ Add a scale condition__. On the new scal
If you are not going to use your deployments, delete them:
+# [Azure CLI](#tab/azure-cli)
[!INCLUDE [cli v2](../../includes/machine-learning-cli-v2.md)] :::code language="azurecli" source="~/azureml-examples-main/cli/deploy-moe-autoscale.sh" ID="delete_endpoint" :::
+# [Python](#tab/python)
+
+```python
+mon_client.autoscale_settings.delete(
+ resource_group,
+ autoscale_settings_name
+)
+
+ml_client.online_endpoints.begin_delete(endpoint_name)
+```
+
+# [Studio](#tab/azure-studio)
+1. Go to the [Azure Machine Learning studio](https://ml.azure.com).
+1. In the left navigation bar, select the **Endpoints** page.
+1. Select an endpoint by checking the circle next to the model name.
+1. Select **Delete**.
+
+Alternatively, you can delete a managed online endpoint directly in the [endpoint details page](how-to-use-managed-online-endpoint-studio.md#view-managed-online-endpoints).
+
+
+ ## Next steps To learn more about autoscale with Azure Monitor, see the following articles:
machine-learning How To Debug Managed Online Endpoints Visual Studio Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-debug-managed-online-endpoints-visual-studio-code.md
ms.devlang: azurecli
[!INCLUDE [cli v2](../../includes/machine-learning-cli-v2.md)] +
+> [!IMPORTANT]
+> SDK v2 is currently in public preview.
+> The preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
+> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+ Learn how to use the Visual Studio Code (VS Code) debugger to test and debug online endpoints locally before deploying them to Azure. Azure Machine Learning local endpoints help you test and debug your scoring script, environment configuration, code configuration, and machine learning model locally.
The following table provides an overview of scenarios to help you choose what wo
## Prerequisites
+# [Azure CLI](#tab/cli)
+ This guide assumes you have the following items installed locally on your PC. - [Docker](https://docs.docker.com/engine/install/)
az account set --subscription <subscription>
az configure --defaults workspace=<workspace> group=<resource-group> location=<location> ```
+# [Python](#tab/python)
+
+This guide assumes you have the following items installed locally on your PC.
+
+- [Docker](https://docs.docker.com/engine/install/)
+- [VS Code](https://code.visualstudio.com/#alt-downloads)
+- [Azure CLI](/cli/azure/install-azure-cli)
+- [Azure CLI `ml` extension (v2)](how-to-configure-cli.md)
+- [Azure ML Python SDK (v2)](https://aka.ms/sdk-v2-install)
+
+For more information, see the guide on [how to prepare your system to deploy managed online endpoints](how-to-deploy-managed-online-endpoints.md#prepare-your-system).
+
+The examples in this article can be found in the [Debug online endpoints locally in Visual Studio Code](https://github.com/Azure/azureml-examples/blob/main/sdk/endpoints/online/managed/debug-online-endpoints-locally-in-visual-studio-code.ipynb) notebook within the [azureml-examples](https://github.com/azure/azureml-examples) repository. To run the code locally, clone the repo and then change directories to the notebook's parent directory `sdk/endpoints/online/managed`.
+
+```azurecli
+git clone https://github.com/Azure/azureml-examples --depth 1
+cd azureml-examples
+cd sdk/endpoints/online/managed
+```
+
+Import the required modules:
+
+```python
+from azure.ai.ml import MLClient
+from azure.ai.ml.entities import (
+ ManagedOnlineEndpoint,
+ ManagedOnlineDeployment,
+ Model,
+ CodeConfiguration,
+ Environment,
+)
+from azure.identity import DefaultAzureCredential, AzureCliCredential
+```
+
+Set up variables for the workspace and endpoint:
+
+```python
+subscription_id = "<SUBSCRIPTION_ID>"
+resource_group = "<RESOURCE_GROUP>"
+workspace_name = "<AML_WORKSPACE_NAME>"
+
+endpoint_name = "<ENDPOINT_NAME>"
+```
+
+
+ ## Launch development container
+# [Azure CLI](#tab/cli)
+ Azure Machine Learning local endpoints use Docker and VS Code development containers (dev container) to build and configure a local debugging environment. With dev containers, you can take advantage of VS Code features from inside a Docker container. For more information on dev containers, see [Create a development container](https://code.visualstudio.com/docs/remote/create-dev-container). To debug online endpoints locally in VS Code, use the `--vscode-debug` flag when creating or updating an Azure Machine Learning online deployment. The following command uses a deployment example from the examples repo:
You'll use a few VS Code extensions to debug your deployments in the dev contain
> [!IMPORTANT] > Before starting your debug session, make sure that the VS Code extensions have finished installing in your dev container. +
+# [Python](#tab/python)
+
+Azure Machine Learning local endpoints use Docker and VS Code development containers (dev container) to build and configure a local debugging environment. With dev containers, you can take advantage of VS Code features from inside a Docker container. For more information on dev containers, see [Create a development container](https://code.visualstudio.com/docs/remote/create-dev-container).
+
+Get a handle to the workspace:
+
+```python
+credential = AzureCliCredential()
+ml_client = MLClient(
+ credential,
+ subscription_id=subscription_id,
+ resource_group_name=resource_group,
+ workspace_name=workspace_name,
+)
+```
+
+To debug online endpoints locally in VS Code, set the `vscode-debug` and `local` flags when creating or updating an Azure Machine Learning online deployment. The following code mirrors a deployment example from the examples repo:
+
+```python
+deployment = ManagedOnlineDeployment(
+ name="blue",
+ endpoint_name=endpoint_name,
+ model=Model(path="../model-1/model"),
+ code_configuration=CodeConfiguration(
+ code="../model-1/onlinescoring", scoring_script="score.py"
+ ),
+ environment=Environment(
+ conda_file="../model-1/environment/conda.yml",
+ image="mcr.microsoft.com/azureml/openmpi3.1.2-ubuntu18.04:20210727.v1",
+ ),
+ instance_type="Standard_DS2_v2",
+ instance_count=1,
+)
+
+deployment = ml_client.online_deployments.begin_create_or_update(
+ deployment,
+ local=True,
+ vscode_debug=True,
+)
+```
+
+> [!IMPORTANT]
+> On Windows Subsystem for Linux (WSL), you'll need to update your PATH environment variable to include the path to the VS Code executable or use WSL interop. For more information, see [Windows interoperability with Linux](/windows/wsl/interop).
+
+A Docker image is built locally. Any environment configuration or model file errors are surfaced at this stage of the process.
+
+> [!NOTE]
+> The first time you launch a new or updated dev container, it can take several minutes.
+
+Once the image successfully builds, your dev container opens in a VS Code window.
+
+You'll use a few VS Code extensions to debug your deployments in the dev container. Azure Machine Learning automatically installs these extensions in your dev container.
+
+- Inference Debug
+- [Pylance](https://marketplace.visualstudio.com/items?itemName=ms-python.vscode-pylance)
+- [Jupyter](https://marketplace.visualstudio.com/items?itemName=ms-toolsai.jupyter)
+- [Python](https://marketplace.visualstudio.com/items?itemName=ms-python.python)
+
+> [!IMPORTANT]
+> Before starting your debug session, make sure that the VS Code extensions have finished installing in your dev container.
+++++++ ## Start debug session Once your environment is set up, use the VS Code debugger to test and debug your deployment locally.
For more information on the VS Code debugger, see [Debugging in VS Code](https:/
## Debug your endpoint
+# [Azure CLI](#tab/cli)
+ Now that your application is running in the debugger, try making a prediction to debug your scoring script. Use the `ml` extension `invoke` command to make a request to your local endpoint.
In this case, `<REQUEST-FILE>` is a JSON file that contains input data samples f
At this point, any breakpoints in your `run` function are caught. Use the debug actions to step through your code. For more information on debug actions, see the [debug actions guide](https://code.visualstudio.com/Docs/editor/debugging#_debug-actions). +
+# [Python](#tab/python)
+
+Now that your application is running in the debugger, try making a prediction to debug your scoring script.
+
+Use the `invoke` method on your `MLClient` object to make a request to your local endpoint.
+
+```python
+endpoint = ml_client.online_endpoints.get(name=endpoint_name, local=True)
+
+request_file_path = "../model-1/sample-request.json"
+
+endpoint.invoke(endpoint_name, request_file_path, local=True)
+```
+
+In this case, `request_file_path` points to a JSON file that contains input data samples for the model to make predictions on, similar to the following JSON:
+
+```json
+{"data": [
+ [1,2,3,4,5,6,7,8,9,10],
+ [10,9,8,7,6,5,4,3,2,1]
+]}
+```
+
+> [!TIP]
+> The scoring URI is the address where your endpoint listens for requests. The `as_dict` method of endpoint objects returns information similar to `show` in the Azure CLI. The endpoint object can be obtained through `.get`.
+>
+> ```python
+> endpoint = ml_client.online_endpoints.get(endpoint_name, local=True)
+> endpoint.as_dict()
+> ```
+>
+> The output should look similar to the following:
+>
+> ```json
+> {
+> "auth_mode": "aml_token",
+> "location": "local",
+> "name": "my-new-endpoint",
+> "properties": {},
+> "provisioning_state": "Succeeded",
+> "scoring_uri": "http://localhost:5001/score",
+> "tags": {},
+> "traffic": {},
+> "type": "online"
+>}
+>```
+>
+>The scoring URI can be found in the `scoring_uri` key.
+
+At this point, any breakpoints in your `run` function are caught. Use the debug actions to step through your code. For more information on debug actions, see the [debug actions guide](https://code.visualstudio.com/Docs/editor/debugging#_debug-actions).
++
+
++ ## Edit your endpoint
+# [Azure CLI](#tab/cli)
+ As you debug and troubleshoot your application, there are scenarios where you need to update your scoring script and configurations. To apply changes to your code:
az ml online-deployment update --file <DEPLOYMENT-YAML-SPECIFICATION-FILE> --loc
Once the updated image is built and your development container launches, use the VS Code debugger to test and troubleshoot your updated endpoint.
+# [Python](#tab/python)
+
+As you debug and troubleshoot your application, there are scenarios where you need to update your scoring script and configurations.
+
+To apply changes to your code:
+
+1. Update your code
+1. Restart your debug session using the `Developer: Reload Window` command in the command palette. For more information, see the [command palette documentation](https://code.visualstudio.com/docs/getstarted/userinterface#_command-palette).
+
+> [!NOTE]
+> Since the directory containing your code and endpoint assets is mounted onto the dev container, any changes you make in the dev container are synced with your local file system.
+
+For more extensive changes involving updates to your environment and endpoint configuration, use your `MLClient`'s `online_deployments.update` method. Doing so triggers a full image rebuild with your changes.
+
+```python
+new_deployment = ManagedOnlineDeployment(
+ name="green",
+ endpoint_name=endpoint_name,
+ model=Model(path="../model-2/model"),
+ code_configuration=CodeConfiguration(
+ code="../model-2/onlinescoring", scoring_script="score.py"
+ ),
+ environment=Environment(
+ conda_file="../model-2/environment/conda.yml",
+ image="mcr.microsoft.com/azureml/openmpi3.1.2-ubuntu18.04:20210727.v1",
+ ),
+ instance_type="Standard_DS2_v2",
+ instance_count=2,
+)
+
+ml_client.online_deployments.update(new_deployment, local=True, vscode_debug=True)
+```
+
+Once the updated image is built and your development container launches, use the VS Code debugger to test and troubleshoot your updated endpoint.
+++
+
+ ## Next steps - [Deploy and score a machine learning model by using a managed online endpoint (preview)](how-to-deploy-managed-online-endpoints.md)
machine-learning How To Github Actions Machine Learning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-github-actions-machine-learning.md
Previously updated : 10/21/2021 Last updated : 09/13/2022 # Use GitHub Actions with Azure Machine Learning- Get started with [GitHub Actions](https://docs.github.com/en/actions) to train a model on Azure Machine Learning.
-> [!NOTE]
-> GitHub Actions for Azure Machine Learning are provided as-is, and are not fully supported by Microsoft. If you encounter problems with a specific action, open an issue in the repository for the action. For example, if you encounter a problem with the aml-deploy action, report the problem in the [https://github.com/Azure/aml-deploy](https://github.com/Azure/aml-deploy) repo.
+This article will teach you how to create a GitHub Actions workflow that builds and deploys a machine learning model to [Azure Machine Learning](/azure/machine-learning/overview-what-is-azure-machine-learning). You'll train a [scikit-learn](https://scikit-learn.org/) linear regression model on the NYC Taxi dataset.
-## Prerequisites
+GitHub Actions uses a workflow YAML (.yml) file in the `/.github/workflows/` path in your repository. This definition contains the various steps and parameters that make up the workflow.
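A minimal workflow sketch is shown below. The file path, job names, and the `jobs/train.yml` job specification are hypothetical placeholders, not files from the tutorial repo, and the runner is assumed to have the Azure CLI `ml` extension available:

```yaml
# .github/workflows/train-model.yml -- hypothetical minimal example
name: train-model
on: [push]
jobs:
  train:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      # Authenticate with the service principal stored in AZURE_CREDENTIALS
      - uses: azure/login@v1
        with:
          creds: ${{ secrets.AZURE_CREDENTIALS }}
      # Submit an Azure ML training job defined in a job YAML file
      - name: Run training job
        run: az ml job create --file jobs/train.yml --workspace-name myWorkspace --resource-group myGroup
```

The rest of this article walks through creating the credentials this workflow consumes.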
-- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).-- A GitHub account. If you don't have one, sign up for [free](https://github.com/join).
-## Workflow file overview
+## Prerequisites
-A workflow is defined by a YAML (.yml) file in the `/.github/workflows/` path in your repository. This definition contains the various steps and parameters that make up the workflow.
-The file has four sections:
+* A GitHub account. If you don't have one, sign up for [free](https://github.com/join).
-|Section |Tasks |
-|||
-|**Authentication** | 1. Define a service principal. <br /> 2. Create a GitHub secret. |
-|**Connect** | 1. Connect to the machine learning workspace. <br /> 2. Connect to a compute target. |
-|**Job** | 1. Submit a training job. |
-|**Deploy** | 1. Register model in Azure Machine Learning registry. 1. Deploy the model. |
+## Step 1. Get the code
-## Create repository
+Fork the following repo on GitHub:
-Create a new repository off the [ML Ops with GitHub Actions and Azure Machine Learning template](https://github.com/machine-learning-apps/ml-template-azure).
+```
+https://github.com/azure/azureml-examples
+```
-1. Open the [template](https://github.com/machine-learning-apps/ml-template-azure) on GitHub.
-2. Select **Use this template**.
+## Step 2. Authenticate with Azure
- :::image type="content" source="media/how-to-github-actions-machine-learning/gh-actions-use-template.png" alt-text="Select use this template":::
-3. Create a new repository from the template. Set the repository name to `ml-learning` or a name of your choice.
+You'll need to first define how to authenticate with Azure. You can use a [service principal](../active-directory/develop/app-objects-and-service-principals.md#service-principal-object) or [OpenID Connect](https://docs.github.com/en/actions/deployment/security-hardening-your-deployments/about-security-hardening-with-openid-connect).
+### Generate deployment credentials
-## Generate deployment credentials
+# [Service principal](#tab/userlevel)
-You can create a [service principal](../active-directory/develop/app-objects-and-service-principals.md#service-principal-object) with the [az ad sp create-for-rbac](/cli/azure/ad/sp#az-ad-sp-create-for-rbac) command in the [Azure CLI](/cli/azure/). Run this command with [Azure Cloud Shell](https://shell.azure.com/) in the Azure portal or by selecting the **Try it** button.
+Create a [service principal](../active-directory/develop/app-objects-and-service-principals.md#service-principal-object) with the [az ad sp create-for-rbac](/cli/azure/ad/sp#az-ad-sp-create-for-rbac) command in the [Azure CLI](/cli/azure/). Run this command with [Azure Cloud Shell](https://shell.azure.com/) in the Azure portal or by selecting the **Try it** button.
```azurecli-interactive az ad sp create-for-rbac --name "myML" --role contributor \
In the example above, replace the placeholders with your subscription ID, resour
} ```
-## Configure the GitHub secret
+# [OpenID Connect](#tab/openid)
-1. In [GitHub](https://github.com/), browse your repository, select **Settings > Secrets > Add a new secret**.
+OpenID Connect is an authentication method that uses short-lived tokens. Setting up [OpenID Connect with GitHub Actions](https://docs.github.com/en/actions/deployment/security-hardening-your-deployments/about-security-hardening-with-openid-connect) is a more complex process, but it offers hardened security.
-2. Paste the entire JSON output from the Azure CLI command into the secret's value field. Give the secret the name `AZURE_CREDENTIALS`.
+1. If you don't have an existing application, register a [new Active Directory application and service principal that can access resources](../active-directory/develop/howto-create-service-principal-portal.md). Create the Active Directory application.
-## Connect to the workspace
+ ```azurecli-interactive
+ az ad app create --display-name myApp
+ ```
-Use the **Azure Machine Learning Workspace action** to connect to your Azure Machine Learning workspace.
+ This command will output JSON with an `appId` that is your `client-id`. Save the value to use as the `AZURE_CLIENT_ID` GitHub secret later.
-```yaml
- - name: Connect/Create Azure Machine Learning Workspace
- id: aml_workspace
- uses: Azure/aml-workspace@v1
- with:
- azure_credentials: ${{ secrets.AZURE_CREDENTIALS }}
-```
+ You'll use the `objectId` value when creating federated credentials with Graph API and reference it as the `APPLICATION-OBJECT-ID`.
-By default, the action expects a `workspace.json` file. If your JSON file has a different name, you can specify it with the `parameters_file` input parameter. If there is not a file, a new one will be created with the repository name.
+1. Create a service principal. Replace `$appId` with the `appId` from your JSON output.
+
+   This command generates JSON output with a different `objectId`, which you'll use in the next step. The new `objectId` is the `assignee-object-id`.
+
+ Copy the `appOwnerTenantId` to use as a GitHub secret for `AZURE_TENANT_ID` later.
-```yaml
- - name: Connect/Create Azure Machine Learning Workspace
- id: aml_workspace
- uses: Azure/aml-workspace@v1
- with:
- azure_credentials: ${{ secrets.AZURE_CREDENTIALS }}
- parameters_file: "alternate_workspace.json"
-```
-The action writes the workspace Azure Resource Manager (ARM) properties to a config file, which will be picked by all future Azure Machine Learning GitHub Actions. The file is saved to `GITHUB_WORKSPACE/aml_arm_config.json`.
+ ```azurecli-interactive
+ az ad sp create --id $appId
+ ```
-## Connect to a Compute Target in Azure Machine Learning
+1. Create a new role assignment by subscription and object. By default, the role assignment will be tied to your default subscription. Replace `$subscriptionId` with your subscription ID, `$resourceGroupName` with your resource group name, and `$assigneeObjectId` with the generated `assignee-object-id`. Learn [how to manage Azure subscriptions with the Azure CLI](/cli/azure/manage-azure-subscriptions-azure-cli).
-Use the [Azure Machine Learning Compute action](https://github.com/Azure/aml-compute) to connect to a compute target in Azure Machine Learning. If the compute target exists, the action will connect to it. Otherwise the action will create a new compute target. The [AML Compute action](https://github.com/Azure/aml-compute) only supports the Azure ML compute cluster and Azure Kubernetes Service (AKS).
+ ```azurecli-interactive
+ az role assignment create --role contributor --scope /subscriptions/$subscriptionId/resourceGroups/$resourceGroupName --subscription $subscriptionId --assignee-object-id $assigneeObjectId --assignee-principal-type ServicePrincipal
+ ```
-```yaml
- - name: Connect/Create Azure Machine Learning Compute Target
- id: aml_compute_training
- uses: Azure/aml-compute@v1
- with:
- azure_credentials: ${{ secrets.AZURE_CREDENTIALS }}
-```
-## Submit training job
+1. Run the following command to [create a new federated identity credential](/graph/api/application-post-federatedidentitycredentials?view=graph-rest-beta&preserve-view=true) for your active directory application.
-Use the [Azure Machine Learning Training action](https://github.com/Azure/aml-run) to submit a ScriptRun, an Estimator or a Pipeline to Azure Machine Learning.
+ * Replace `APPLICATION-OBJECT-ID` with the **objectId (generated while creating app)** for your Active Directory application.
+ * Set a value for `CREDENTIAL-NAME` to reference later.
+ * Set the `subject`. The value of this is defined by GitHub depending on your workflow:
+ * Jobs in your GitHub Actions environment: `repo:< Organization/Repository >:environment:< Name >`
+ * For Jobs not tied to an environment, include the ref path for branch/tag based on the ref path used for triggering the workflow: `repo:< Organization/Repository >:ref:< ref path>`. For example, `repo:n-username/ node_express:ref:refs/heads/my-branch` or `repo:n-username/ node_express:ref:refs/tags/my-tag`.
+ * For workflows triggered by a pull request event: `repo:< Organization/Repository >:pull_request`.
+
+ ```azurecli
+ az rest --method POST --uri 'https://graph.microsoft.com/beta/applications/<APPLICATION-OBJECT-ID>/federatedIdentityCredentials' --body '{"name":"<CREDENTIAL-NAME>","issuer":"https://token.actions.githubusercontent.com","subject":"repo:organization/repository:ref:refs/heads/main","description":"Testing","audiences":["api://AzureADTokenExchange"]}'
+ ```
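For readability, you can keep the JSON body in a file and pass it to `az rest` by reference instead of inlining it. A minimal sketch, with the same placeholders as the command above (`credential.json` is a file name of our choosing):

```shell
# Write the federated credential payload to a file (placeholders as above).
cat > credential.json <<'EOF'
{
  "name": "<CREDENTIAL-NAME>",
  "issuer": "https://token.actions.githubusercontent.com",
  "subject": "repo:organization/repository:ref:refs/heads/main",
  "description": "Testing",
  "audiences": ["api://AzureADTokenExchange"]
}
EOF

# The Azure CLI accepts @file syntax for body parameters:
# az rest --method POST \
#   --uri 'https://graph.microsoft.com/beta/applications/<APPLICATION-OBJECT-ID>/federatedIdentityCredentials' \
#   --body @credential.json
echo "wrote credential.json"
```

Keeping the payload in a file also makes it easier to review the `subject` value before you create the credential.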
+
+To learn how to create an Active Directory application, a service principal, and federated credentials in the Azure portal, see [Connect GitHub and Azure](/azure/developer/github/connect-from-azure#use-the-azure-login-action-with-openid-connect).
-```yaml
- - name: Submit training run
- id: aml_run
- uses: Azure/aml-run@v1
- with:
- azure_credentials: ${{ secrets.AZURE_CREDENTIALS }}
-```
+
-## Register model in registry
+### Create secrets
-Use the [Azure Machine Learning Register Model action](https://github.com/Azure/aml-registermodel) to register a model to Azure Machine Learning.
+# [Service principal](#tab/userlevel)
-```yaml
- - name: Register model
- id: aml_registermodel
- uses: Azure/aml-registermodel@v1
- with:
- azure_credentials: ${{ secrets.AZURE_CREDENTIALS }}
- run_id: ${{ steps.aml_run.outputs.run_id }}
- experiment_name: ${{ steps.aml_run.outputs.experiment_name }}
-```
+1. In [GitHub](https://github.com/), browse to your repository and select **Settings > Secrets > Actions**. Then select **New repository secret**.
-## Deploy model to Azure Machine Learning to ACI
+2. Paste the entire JSON output from the Azure CLI command into the secret's value field. Give the secret the name `AZ_CREDS`.
-Use the [Azure Machine Learning Deploy action](https://github.com/Azure/aml-deploy) to deploy a model and create an endpoint for it. You can also use the Azure Machine Learning Deploy action to deploy to Azure Kubernetes Service. See [this sample workflow](https://github.com/Azure-Samples/mlops-enterprise-template) for a model that deploys to Azure Kubernetes Service.
+ # [OpenID Connect](#tab/openid)
-```yaml
- - name: Deploy model
- id: aml_deploy
- uses: Azure/aml-deploy@v1
- with:
- azure_credentials: ${{ secrets.AZURE_CREDENTIALS }}
- model_name: ${{ steps.aml_registermodel.outputs.model_name }}
- model_version: ${{ steps.aml_registermodel.outputs.model_version }}
+You need to provide your application's **Client ID**, **Tenant ID**, and **Subscription ID** to the login action. These values can either be provided directly in the workflow or can be stored in GitHub secrets and referenced in your workflow. Saving the values as GitHub secrets is the more secure option.
-```
+1. In [GitHub](https://github.com/), browse to your repository and select **Settings > Secrets > Actions**. Then select **New repository secret**.
-## Complete example
-
-Train your model and deploy to Azure Machine Learning.
-
-```yaml
-# Actions train a model on Azure Machine Learning
-name: Azure Machine Learning training and deployment
-on:
- push:
- branches:
- - master
- # paths:
- # - 'code/*'
-jobs:
- train:
- runs-on: ubuntu-latest
- steps:
- # Checks-out your repository under $GITHUB_WORKSPACE, so your job can access it
- - name: Check Out Repository
- id: checkout_repository
- uses: actions/checkout@v2
-
- # Connect or Create the Azure Machine Learning Workspace
- - name: Connect/Create Azure Machine Learning Workspace
- id: aml_workspace
- uses: Azure/aml-workspace@v1
- with:
- azure_credentials: ${{ secrets.AZURE_CREDENTIALS }}
-
- # Connect or Create a Compute Target in Azure Machine Learning
- - name: Connect/Create Azure Machine Learning Compute Target
- id: aml_compute_training
- uses: Azure/aml-compute@v1
- with:
- azure_credentials: ${{ secrets.AZURE_CREDENTIALS }}
+1. Create secrets for `AZURE_CLIENT_ID`, `AZURE_TENANT_ID`, and `AZURE_SUBSCRIPTION_ID`. Use these values from your Active Directory application for your GitHub secrets:
+
+ |GitHub Secret | Active Directory Application |
+ |||
+ |AZURE_CLIENT_ID | Application (client) ID |
+ |AZURE_TENANT_ID | Directory (tenant) ID |
+ |AZURE_SUBSCRIPTION_ID | Subscription ID |
+
+1. Save each secret by selecting **Add secret**.
++++
+## Step 3: Update `setup.sh` to connect to your Azure Machine Learning workspace
+
+You'll need to update the CLI setup file variables to match your workspace.
+
+1. In your cloned repository, go to `azureml-examples/cli/`.
+1. Edit `setup.sh` and update these variables in the file.
+
+ |Variable | Description |
+ |||
+ |GROUP | Name of resource group |
+ |LOCATION | Location of your workspace (example: `eastus2`) |
+ |WORKSPACE | Name of Azure ML workspace |
+
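The edit amounts to pointing three shell variables at your own resources. A minimal sketch of what the edited lines look like (the values shown are placeholders, not real resource names):

```shell
# setup.sh excerpt -- illustrative values, replace with your own.
GROUP="my-ml-resource-group"   # name of your resource group
LOCATION="eastus2"             # region of your workspace
WORKSPACE="my-aml-workspace"   # name of your Azure ML workspace

echo "Target: $WORKSPACE in $GROUP ($LOCATION)"
```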
+## Step 4: Update `pipeline.yml` with your compute cluster name
+
+You'll use a `pipeline.yml` file to deploy your Azure ML pipeline. This is a machine learning pipeline, not a DevOps pipeline. You only need to make this update if you're using a name other than `cpu-cluster` for your compute cluster.
+
+1. In your cloned repository, go to `azureml-examples/cli/jobs/pipelines/nyc-taxi/pipeline.yml`.
+1. Each time you see `compute: azureml:cpu-cluster`, update the value of `cpu-cluster` with your compute cluster name. For example, if your cluster is named `my-cluster`, your new value would be `azureml:my-cluster`. There are five updates.
+
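If you'd rather script the rename than edit by hand, a single `sed` substitution covers all five occurrences. A sketch against a stand-in fragment of the file (`my-cluster` is a hypothetical cluster name):

```shell
# Stand-in fragment of pipeline.yml to demonstrate the substitution.
cat > pipeline-fragment.yml <<'EOF'
jobs:
  prep-job:
    compute: azureml:cpu-cluster
  train-job:
    compute: azureml:cpu-cluster
EOF

# Replace every cpu-cluster reference with your cluster name.
sed -i 's/azureml:cpu-cluster/azureml:my-cluster/g' pipeline-fragment.yml

grep -c 'azureml:my-cluster' pipeline-fragment.yml   # prints 2 for this fragment
```

Run the same substitution against `jobs/pipelines/nyc-taxi/pipeline.yml` in your clone to update all five compute references at once.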
+## Step 5: Run your GitHub Actions workflow
+
+Your workflow authenticates with Azure, sets up the Azure Machine Learning CLI, and uses the CLI to train a model in Azure Machine Learning.
+
+# [Service principal](#tab/userlevel)
++
+Your workflow file is made up of a trigger section and jobs:
+- A trigger starts the workflow in the `on` section. The workflow runs by default on a cron schedule and when a pull request is made from matching branches and paths. Learn more about [events that trigger workflows](https://docs.github.com/actions/using-workflows/events-that-trigger-workflows).
+- In the jobs section of the workflow, you check out code and log in to Azure with your service principal secret.
+- The jobs section also includes a setup action that installs and sets up the [Machine Learning CLI (v2)](how-to-configure-cli.md). Once the CLI is installed, the run job action runs your Azure Machine Learning `pipeline.yml` file to train a model with NYC taxi data.
++
+### Enable your workflow
+
+1. In your cloned repository, open `.github/workflows/cli-jobs-pipelines-nyc-taxi-pipeline.yml` and verify that your workflow looks like this.
+
+ ```yaml
+ name: cli-jobs-pipelines-nyc-taxi-pipeline
+ on:
+ workflow_dispatch:
+ schedule:
+ - cron: "0 0/4 * * *"
+ pull_request:
+ branches:
+ - main
+ - sdk-preview
+ paths:
+ - cli/jobs/pipelines/nyc-taxi/**
+ - .github/workflows/cli-jobs-pipelines-nyc-taxi-pipeline.yml
+ - cli/run-pipeline-jobs.sh
+ - cli/setup.sh
+ jobs:
+ build:
+ runs-on: ubuntu-latest
+ steps:
+ - name: check out repo
+ uses: actions/checkout@v2
+ - name: azure login
+ uses: azure/login@v1
+ with:
+ creds: ${{secrets.AZ_CREDS}}
+ - name: setup
+ run: bash setup.sh
+ working-directory: cli
+ continue-on-error: true
+ - name: run job
+ run: bash -x ../../../run-job.sh pipeline.yml
+ working-directory: cli/jobs/pipelines/nyc-taxi
+ ```
+
+1. Select **View runs**.
+1. Enable workflows by selecting **I understand my workflows, go ahead and enable them**.
+1. Select the **cli-jobs-pipelines-nyc-taxi-pipeline workflow** and choose to **Enable workflow**.
+ :::image type="content" source="media/how-to-github-actions-machine-learning/enable-github-actions-ml-workflow.png" alt-text="Screenshot of enable GitHub Actions workflow.":::
+1. Select **Run workflow** and choose the option to **Run workflow** now.
+ :::image type="content" source="media/how-to-github-actions-machine-learning/github-actions-run-workflow.png" alt-text="Screenshot of run GitHub Actions workflow.":::
+
+ # [OpenID Connect](#tab/openid)
+
+Your workflow file is made up of a trigger section and jobs:
+- A trigger starts the workflow in the `on` section. The workflow runs by default on a cron schedule and when a pull request is made from matching branches and paths. Learn more about [events that trigger workflows](https://docs.github.com/actions/using-workflows/events-that-trigger-workflows).
+- In the jobs section of the workflow, you check out code and log in to Azure with the Azure login action using OpenID Connect.
+- The jobs section also includes a setup action that installs and sets up the [Machine Learning CLI (v2)](how-to-configure-cli.md). Once the CLI is installed, the run job action runs your Azure Machine Learning `pipeline.yml` file to train a model with NYC taxi data.
+
+### Enable your workflow
+
+1. In your cloned repository, open `.github/workflows/cli-jobs-pipelines-nyc-taxi-pipeline.yml` and verify that your workflow looks like this.
+
+ ```yaml
+ name: cli-jobs-pipelines-nyc-taxi-pipeline
+ on:
+ workflow_dispatch:
+ schedule:
+ - cron: "0 0/4 * * *"
+ pull_request:
+ branches:
+ - main
+ - sdk-preview
+ paths:
+ - cli/jobs/pipelines/nyc-taxi/**
+ - .github/workflows/cli-jobs-pipelines-nyc-taxi-pipeline.yml
+ - cli/run-pipeline-jobs.sh
+ - cli/setup.sh
+ jobs:
+ build:
+ runs-on: ubuntu-latest
+ steps:
+ - name: check out repo
+ uses: actions/checkout@v2
+ - name: azure login
+ uses: azure/login@v1
+ with:
+ client-id: ${{ secrets.AZURE_CLIENT_ID }}
+ tenant-id: ${{ secrets.AZURE_TENANT_ID }}
+ subscription-id: ${{ secrets.AZURE_SUBSCRIPTION_ID }}
+ - name: setup
+ run: bash setup.sh
+ working-directory: cli
+ continue-on-error: true
+ - name: run job
+ run: bash -x ../../../run-job.sh pipeline.yml
+ working-directory: cli/jobs/pipelines/nyc-taxi
+ ```
- # Submit a training run to the Azure Machine Learning
- - name: Submit training run
- id: aml_run
- uses: Azure/aml-run@v1
- with:
- azure_credentials: ${{ secrets.AZURE_CREDENTIALS }}
-
- # Register model in Azure Machine Learning model registry
- - name: Register model
- id: aml_registermodel
- uses: Azure/aml-registermodel@v1
- with:
- azure_credentials: ${{ secrets.AZURE_CREDENTIALS }}
- run_id: ${{ steps.aml_run.outputs.run_id }}
- experiment_name: ${{ steps.aml_run.outputs.experiment_name }}
-
- # Deploy model in Azure Machine Learning to ACI
- - name: Deploy model
- id: aml_deploy
- uses: Azure/aml-deploy@v1
- with:
- azure_credentials: ${{ secrets.AZURE_CREDENTIALS }}
- model_name: ${{ steps.aml_registermodel.outputs.model_name }}
- model_version: ${{ steps.aml_registermodel.outputs.model_version }}
+1. Select **View runs**.
+1. Enable workflows by selecting **I understand my workflows, go ahead and enable them**.
+1. Select the **cli-jobs-pipelines-nyc-taxi-pipeline workflow** and choose to **Enable workflow**.
-```
+ :::image type="content" source="media/how-to-github-actions-machine-learning/enable-github-actions-ml-workflow.png" alt-text="Screenshot of enable GitHub Actions workflow.":::
+
+1. Select **Run workflow** and choose the option to **Run workflow** now.
+
+ :::image type="content" source="media/how-to-github-actions-machine-learning/github-actions-run-workflow.png" alt-text="Screenshot of run GitHub Actions workflow.":::
++
+## Step 6: Verify your workflow run
+
+1. Open your completed workflow run and verify that the build job ran successfully. You'll see a green checkmark next to the job.
+1. Open Azure Machine Learning studio and navigate to the **nyc-taxi-pipeline-example**. Verify that each part of your job (prep, transform, train, predict, score) completed and that you see a green checkmark.
+
+ :::image type="content" source="media/how-to-github-actions-machine-learning/github-actions-machine-learning-nyc-taxi-complete.png" alt-text="Screenshot of successful Machine Learning Studio run.":::
## Clean up resources
When your resource group and repository are no longer needed, clean up the resources you deployed by deleting the resource group and your GitHub repository.
## Next steps

> [!div class="nextstepaction"]
-> [Learning path: End-to-end MLOps with Azure Machine Learning](/learn/paths/build-first-machine-operations-workflow/)
-> [Create and run machine learning pipelines with Azure Machine Learning SDK v1](v1/how-to-create-machine-learning-pipelines.md)
+> [Create production ML pipelines with Python SDK](tutorial-pipeline-python-sdk.md)
machine-learning How To Troubleshoot Environments https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-troubleshoot-environments.md
description: Learn how to troubleshoot issues with environment image builds and package installations.
--++ Last updated 03/01/2022
-# Troubleshoot environment image builds
-Learn how to troubleshoot issues with Docker environment image builds and package installations.
+# Troubleshoot environment image builds using log error messages
-## Prerequisites
+In this article, learn how to troubleshoot common problems you may encounter with environment image builds.
-* An Azure subscription. Try the [free or paid version of Azure Machine Learning](https://azure.microsoft.com/free/).
-* The [Azure Machine Learning SDK](/python/api/overview/azure/ml/install).
-* The [Azure CLI](/cli/azure/install-azure-cli).
-* The [CLI extension for Azure Machine Learning](v1/reference-azure-machine-learning-cli.md).
-* To debug locally, you must have a working Docker installation on your local system.
+## Azure Machine Learning environments
-## Docker image build failures
-
-For most image build failures, you'll find the root cause in the image build log.
-Find the image build log from the Azure Machine Learning portal (20\_image\_build\_log.txt) or from your Azure Container Registry task job logs.
-
-It's usually easier to reproduce errors locally. Check the kind of error and try one of the following `setuptools`:
+Azure Machine Learning environments are an encapsulation of the environment where your machine learning training happens.
+They specify the base docker image, Python packages, and software settings around your training and scoring scripts.
+Environments are managed and versioned assets within your Machine Learning workspace that enable reproducible, auditable, and portable machine learning workflows across various compute targets.
- Install a conda dependency locally: `conda install suspicious-dependency==X.Y.Z`.
-- Install a pip dependency locally: `pip install suspicious-dependency==X.Y.Z`.
-- Try to materialize the entire environment: `conda create -f conda-specification.yml`.
+## Types of environments
-> [!IMPORTANT]
-> Make sure that the platform and interpreter on your local compute cluster match the ones on the remote compute cluster.
-
-### Timeout
-
-The following network issues can cause timeout errors:
-- Low internet bandwidth
-- Server issues
-- Large dependencies that can't be downloaded with the given conda or pip timeout settings
-
-Messages similar to the following examples will indicate the issue:
-
-```
-('Connection broken: OSError("(104, \'ECONNRESET\')")', OSError("(104, 'ECONNRESET')"))
-```
-```
-ReadTimeoutError("HTTPSConnectionPool(host='****', port=443): Read timed out. (read timeout=15)",)
-```
-
-If you get an error message, try one of the following possible solutions:
-
-- Try a different source, such as mirrors, Azure Blob Storage, or other Python feeds, for the dependency.
-- Update conda or pip. If you're using a custom Docker file, update the timeout settings.
-- Some pip versions have known issues. Consider adding a specific version of pip to the environment dependencies.
-### Package not found
-
-The following errors are most common for image build failures:
--- Conda package couldn't be found:-
- ```
- ResolvePackageNotFound:
- - not-existing-conda-package
- ```
--- Specified pip package or version couldn't be found:-
- ```
- ERROR: Could not find a version that satisfies the requirement invalid-pip-package (from versions: none)
- ERROR: No matching distribution found for invalid-pip-package
- ```
--- Bad nested pip dependency:-
- ```
- ERROR: No matching distribution found for bad-package==0.0 (from good-package==1.0)
- ```
-
-Check that the package exists on the specified sources. Use [pip search](https://pip.pypa.io/en/stable/reference/pip_search/) to verify pip dependencies:
--- `pip search azureml-core`-
-For conda dependencies, use [conda search](https://docs.conda.io/projects/conda/en/latest/commands/search.html):
--- `conda search conda-forge::numpy`-
-For more options, try:
- `pip search -h`
-- `conda search -h`
-#### Installer notes
-
-Make sure that the required distribution exists for the specified platform and Python interpreter version.
-
-For pip dependencies, go to `https://pypi.org/project/[PROJECT NAME]/[VERSION]/#files` to see if the required version is available. Go to https://pypi.org/project/azureml-core/1.11.0/#files to see an example.
-
-For conda dependencies, check the package on the channel repository.
-For channels maintained by Anaconda, Inc., check the [Anaconda Packages page](https://repo.anaconda.com/pkgs/).
-
-### Pip package update
-
-During an installation or an update of a pip package, the resolver might need to update an already-installed package to satisfy the new requirements.
-Uninstallation can fail for various reasons related to the pip version or the way the dependency was installed.
-The most common scenario is that a dependency installed by conda couldn't be uninstalled by pip.
-For this scenario, consider uninstalling the dependency by using `conda remove mypackage`.
-
-```
- Attempting uninstall: mypackage
- Found existing installation: mypackage X.Y.Z
-ERROR: Cannot uninstall 'mypackage'. It is a distutils installed project and thus we cannot accurately determine which files belong to it which would lead to only a partial uninstall.
-```
-### Installer issues
-
-Certain installer versions have issues in the package resolvers that can lead to a build failure.
-
-If you're using a custom base image or Dockerfile, we recommend using conda version 4.5.4 or later.
+Environments can broadly be divided into three categories: curated, user-managed, and system-managed.
-A pip package is required to install pip dependencies. If a version isn't specified in the environment, the latest version will be used.
-We recommend using a known version of pip to avoid transient issues or breaking changes that the latest version of the tool might cause.
+Curated environments are pre-created environments that are managed by Azure Machine Learning (AzureML) and are available by default in every workspace provisioned.
-Consider pinning the pip version in your environment if you see the following message:
+Intended to be used as is, they contain collections of Python packages and settings to help you get started with various machine learning frameworks.
+These pre-created environments also allow for faster deployment time.
- ```
- Warning: you have pip-installed dependencies in your environment file, but you do not list pip itself as one of your conda dependencies. Conda may not use the correct pip to install your packages, and they may end up in the wrong place. Please add an explicit pip dependency. I'm adding one for you, but still nagging you.
- ```
+In user-managed environments, you're responsible for setting up your environment and installing every package that your training script needs on the compute target.
+Also be sure to include any dependencies needed for model deployment.
+These environments come in two subtypes: BYOC (bring your own container), where you bring a Docker image to AzureML, and Docker build context based environments, where AzureML materializes the image from the content you provide.
-Pip subprocess error:
- ```
- ERROR: THESE PACKAGES DO NOT MATCH THE HASHES FROM THE REQUIREMENTS FILE. If you have updated the package versions, update the hashes as well. Otherwise, examine the package contents carefully; someone may have tampered with them.
- ```
+You use system-managed environments when you want conda to manage the Python environment for you.
+A new isolated conda environment is materialized from your conda specification on top of a base Docker image. By default, common properties are added to the derived image.
+Note that environment isolation implies that Python dependencies installed in the base image won't be available in the derived image.
-Pip installation can be stuck in an infinite loop if there are unresolvable conflicts in the dependencies.
-If you're working locally, downgrade the pip version to < 20.3.
-In a conda environment created from a YAML file, you'll see this issue only if conda-forge is the highest-priority channel. To mitigate the issue, explicitly specify pip < 20.3 (!=20.3 or =20.2.4 pin to other version) as a conda dependency in the conda specification file.
+## Create and manage environments
-### ModuleNotFoundError: No module named 'distutils.dir_util'
+You can create and manage environments from clients such as the AzureML Python SDK, the AzureML CLI, the AzureML studio UI, and the VS Code extension.
-When setting up your environment, sometimes you'll run into the issue **ModuleNotFoundError: No module named 'distutils.dir_util'**. To fix it, run the following command:
+"Anonymous" environments are automatically registered in your workspace when you submit an experiment without registering or referencing an already existing environment.
+They won't be listed but may be retrieved by version or label.
-```bash
-apt-get install -y --no-install-recommends python3 python3-distutils && \
-ln -sf /usr/bin/python3 /usr/bin/python
-```
+AzureML builds environment definitions into Docker images.
+It also caches the environments in Azure Container Registry associated with your AzureML Workspace so they can be reused in subsequent training jobs and service endpoint deployments.
+Multiple environments with the same definition may result in the same image, so the cached image is reused.
+Running a training script remotely requires the creation of a Docker image.
-When working with a Dockerfile, run it as part of a RUN command.
+## Reproducibility and vulnerabilities
-```dockerfile
-RUN apt-get update && \
- apt-get install -y --no-install-recommends python3 python3-distutils && \
- ln -sf /usr/bin/python3 /usr/bin/python
+Over time, vulnerabilities are discovered, and the Docker images that correspond to AzureML environments may be flagged by scanning tools.
+Updates for AzureML-based images are released regularly, with a commitment of no unpatched vulnerabilities older than 30 days in the latest version of the image.
+It's your responsibility to evaluate the threat and address vulnerabilities in your environments.
+Not all vulnerabilities are exploitable, so use your judgment when choosing between reproducibility and resolving vulnerabilities.
+> [!IMPORTANT]
+> There's no guarantee that the same set of Python dependencies will be materialized with an image rebuild or for a new environment with the same set of Python dependencies.
+
+## *Environment definition problems*
+
+### Environment name issues
+#### **"Curated prefix not allowed"**
+Terminology:
+
+"Curated": environments Microsoft creates and maintains.
+
+"Custom": environments you create and maintain.
+
+- The name of your custom environment uses terms reserved only for curated environments
+- Don't start your environment name with *Microsoft* or *AzureML*--these prefixes are reserved for curated environments
+- To customize a curated environment, you must clone and rename the environment
+- For more information about proper curated environment usage, see [create and manage reusable environments](https://aka.ms/azureml/environment/create-and-manage-reusable-environments)
+
+#### **"Environment name is too long"**
+- Environment names can be up to 255 characters in length
+- Consider renaming and shortening your environment name
+
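Both naming rules can be checked before you register an environment. A quick shell sketch (the environment name is a made-up example, and this is a local check only, not an AzureML API call):

```shell
ENV_NAME="my-custom-training-env"

# Reserved prefixes belong to curated environments; names are capped at 255 characters.
case "$ENV_NAME" in
  Microsoft*|AzureML*)
    echo "invalid: reserved prefix"
    ;;
  *)
    if [ "${#ENV_NAME}" -le 255 ]; then
      echo "ok"
    else
      echo "invalid: name too long"
    fi
    ;;
esac
```

For the sample name above, the check prints `ok`; a name such as `AzureML-my-env` would be rejected for its reserved prefix.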
+### Docker issues
+To create a new environment, you must use one of the following approaches:
+1. Base image
+ - Provide base image name, repository from which to pull it, credentials if needed
+ - Provide a conda specification
+2. Base Dockerfile (V1 only, Deprecated)
+ - Provide a Dockerfile
+ - Provide a conda specification
+3. Docker build context
+ - Provide the location of the build context (URL)
+ - The build context must contain at least a Dockerfile, but may contain other files as well
++
+#### **"Missing Docker definition"**
+- An environment has a `DockerSection` that must be populated with either a base image, base Dockerfile, or build context
+- This section configures settings related to the final Docker image built to the specifications of the environment and whether to use Docker containers to build the environment
+- See [DockerSection](https://aka.ms/azureml/environment/environment-docker-section)
+
+#### **"Missing Docker build context location"**
+- If you're specifying a Docker build context as part of your environment build, you must provide the path of the build context directory
+- See [BuildContext](https://aka.ms/azureml/environment/build-context-class)
+
+#### **"Too many Docker options"**
+Only one of the following options can be specified:
+
+*V1*
+- `base_image`
+- `base_dockerfile`
+- `build_context`
+- See [DockerSection](https://aka.ms/azureml/environment/docker-section-class)
+
+*V2*
+- `image`
+- `build`
+- See [azure.ai.ml.entities.Environment](https://aka.ms/azureml/environment/environment-class-v2)
+
+#### **"Missing Docker option"**
+*V1*
+- You must specify one of: base image, base Dockerfile, or build context
+
+*V2:*
+- You must specify one of: image or build context
+
+#### **"Container registry credentials missing either username or password"**
+- To access the base image in the container registry specified, you must provide both a username and password. One is missing.
+- Note that providing credentials in this way is deprecated. For the current method of providing credentials, see the *secrets in base image registry* section.
+
+#### **"Multiple credentials for base image registry"**
+- When specifying credentials for a base image registry, you must specify only one set of credentials.
+- The following authentication types are currently supported:
+ - Basic (username/password)
+ - Registry identity (clientId/resourceId)
+- If you're using workspace connections to specify credentials, [delete one of the connections](https://aka.ms/azureml/environment/delete-connection-v1)
+- If you've specified credentials directly in your environment definition, choose either username/password or registry identity
+to use, and set the other credentials you won't use to `null`
+ - Specifying credentials in this way is deprecated. It's recommended that you use workspace connections. See
+ *secrets in base image registry* below
+
+#### **"Secrets in base image registry"**
+- If you specify a base image in your `DockerSection`, you must specify the registry address from which the image will be pulled,
+and credentials to authenticate to the registry, if needed.
+- Historically, credentials have been specified in the environment definition. However, this isn't secure and should be
+avoided.
+- Users should set credentials using workspace connections. For instructions on how to
+do this, see [set_connection](https://aka.ms/azureml/environment/set-connection-v1)
+
+#### **"Deprecated Docker attribute"**
+- The following `DockerSection` attributes are deprecated:
+ - `enabled`
+ - `arguments`
+ - `shared_volumes`
+ - `gpu_support`
+ - Azure Machine Learning now automatically detects and uses NVIDIA Docker extension when available.
+ - `shm_size`
+- Use [DockerConfiguration](https://aka.ms/azureml/environment/docker-configuration-class) instead
+- See [DockerSection deprecated variables](https://aka.ms/azureml/environment/docker-section-class)
+
+#### **"Dockerfile length over limit"**
+- The specified Dockerfile can't exceed the maximum size of 100 KB
+- Consider shortening your Dockerfile to get it under this limit
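As a quick local check before submitting, you can verify the size yourself. A minimal sketch (the 100 KB figure comes from the error above):

```python
from pathlib import Path

MAX_DOCKERFILE_BYTES = 100 * 1024  # service limit stated in the error message

def dockerfile_within_limit(path: str) -> bool:
    """Return True if the file at `path` fits under the Dockerfile size limit."""
    return Path(path).stat().st_size <= MAX_DOCKERFILE_BYTES
```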
+
+### Docker build context issues
+#### **"Missing Dockerfile path"**
+- In the Docker build context, a Dockerfile path must be specified
+- This is the path to the Dockerfile relative to the root of the Docker build context directory
+- See [Build Context class](https://aka.ms/azureml/environment/build-context-class)
+
+#### **"Not allowed to specify attribute with Docker build context"**
+- If a Docker build context is specified, then the following items can't also be specified in the
+environment definition:
+ - Environment variables
+ - Conda dependencies
+ - R
+ - Spark
+
+#### **"Location type not supported/Unknown location type"**
+- The following are accepted location types:
+ - Git
+ - Git URLs can be provided to AzureML, but images can't yet be built using them. Use a storage
+ account until builds have Git support
+ - [How to use git repository as build context](https://aka.ms/azureml/environment/git-repo-as-build-context)
+ - Storage account
+
+#### **"Invalid location"**
+- The specified location of the Docker build context is invalid
+- If the build context is stored in a git repository, the path of the build context must be specified as a git URL
+- If the build context is stored in a storage account, the path of the build context must be specified as
+ - `https://storage-account.blob.core.windows.net/container/path/`
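A rough client-side sanity check of that storage-account URL shape can catch obvious mistakes early. This is a sketch only; the service's actual validation may be stricter:

```python
from urllib.parse import urlparse

def looks_like_blob_build_context(url: str) -> bool:
    """Rough check that a URL matches the blob-storage build-context shape above."""
    parsed = urlparse(url)
    return (
        parsed.scheme == "https"
        and parsed.netloc.endswith(".blob.core.windows.net")
        and parsed.path.count("/") >= 2  # expects at least /container/path
    )
```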
+
+### Base image issues
+#### **"Base image is deprecated"**
+- The following base images are deprecated:
+ - `azureml/base`
+ - `azureml/base-gpu`
+ - `azureml/base-lite`
+ - `azureml/intelmpi2018.3-cuda10.0-cudnn7-ubuntu16.04`
+ - `azureml/intelmpi2018.3-cuda9.0-cudnn7-ubuntu16.04`
+ - `azureml/intelmpi2018.3-ubuntu16.04`
+ - `azureml/o16n-base/python-slim`
+ - `azureml/openmpi3.1.2-cuda10.0-cudnn7-ubuntu16.04`
+ - `azureml/openmpi3.1.2-ubuntu16.04`
+ - `azureml/openmpi3.1.2-cuda10.0-cudnn7-ubuntu18.04`
+ - `azureml/openmpi3.1.2-cuda10.1-cudnn7-ubuntu18.04`
+ - `azureml/openmpi3.1.2-cuda10.2-cudnn7-ubuntu18.04`
+ - `azureml/openmpi3.1.2-cuda10.2-cudnn8-ubuntu18.04`
+- AzureML can't provide troubleshooting support for failed builds with deprecated images.
+- Deprecated images are also at risk for vulnerabilities since they're no longer updated or maintained.
+It's best to use newer, non-deprecated versions.
+
+#### **"No tag or digest"**
+- For the environment to be reproducible, a provided base image must include one of the following:
+ - Version tag
+ - Digest
+- See [image with immutable identifier](https://aka.ms/azureml/environment/pull-image-by-digest)
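For example, `myimage:1.2.3` pins by tag, while a digest reference pins the exact image content and is immutable. In the environment definition that looks like the following (registry, image name, and digest are placeholders):

```json
"docker": {
    "baseImage": "myregistry.azurecr.io/myimage@sha256:<digest>"
}
```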
+
+### Environment variable issues
+#### **"Misplaced runtime variables"**
+- An environment definition shouldn't contain runtime variables
+- Use the `environment_variables` attribute on the [RunConfiguration object](https://aka.ms/azureml/environment/environment-variables-on-run-config) instead
+
+### Python issues
+#### **"Python section missing"**
+*V1*
+
+- An environment definition must have a Python section
+- Conda dependencies are specified in this section, and Python (along with its version) should be one of them
+```json
+"python": {
+ "baseCondaEnvironment": null,
+ "condaDependencies": {
+ "channels": [
+ "anaconda",
+ "conda-forge"
+ ],
+ "dependencies": [
+ "python=3.6.2"
+ ]
+ },
+ "condaDependenciesFile": null,
+ "interpreterPath": "python",
+ "userManagedDependencies": false
+}
```
+- See [PythonSection class](https://aka.ms/azureml/environment/environment-python-section)
-Running this command installs the correct module dependencies to configure your environment.
-
-### Build failure when using Spark packages
-
-Configure the environment to not precache the packages.
+#### **"Python version missing"**
+- A Python version must be specified in the environment definition
+- A Python version can be added by adding Python as a conda package, specifying the version:
```python
-env.spark.precache_packages = False
-```
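For context, a conda dependencies file that pins the Python version might look like this minimal sketch (the environment name and channel list are illustrative):

```yaml
name: project_environment
channels:
  - conda-forge
dependencies:
  - python=3.8
  - pip
```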
-
-## Service-side failures
-
-See the following scenarios to troubleshoot possible service-side failures.
-
-### You're unable to pull an image from a container registry, or the address couldn't be resolved for a container registry
-
-Possible issues:
-- The path name to the container registry might not be resolving correctly. Check that image names use double slashes and that the direction of slashes on Linux versus Windows hosts is correct.
-- If a container registry behind a virtual network is using a private endpoint in [an unsupported region](../private-link/private-link-overview.md#availability), configure the container registry by using the service endpoint (public access) from the portal and retry.
-- After you put the container registry behind a virtual network, run the [Azure Resource Manager template](./how-to-network-security-overview.md) so the workspace can communicate with the container registry instance.
-
-### You get a 401 error from a workspace container registry
-
-Resynchronize storage keys by using [ws.sync_keys()](/python/api/azureml-core/azureml.core.workspace.workspace#sync-keys--)